Science.gov

Sample records for taguchi experimental design

  1. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
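    A minimal sketch of the Taguchi loss function referred to above may help readers; the quadratic form L(y) = k(y - m)^2 is standard, but the target, tolerance and cost numbers below are hypothetical classroom values, not figures from the paper.

```python
# Hedged sketch of the quadratic Taguchi loss function L(y) = k * (y - m)^2.
# The loss coefficient k is derived from a known cost A at the customer
# tolerance limit m +/- Delta. All numbers are illustrative only.

def taguchi_loss(y, target, cost_at_limit, tolerance):
    """Monetary loss attributed to a single measured value y."""
    k = cost_at_limit / tolerance ** 2
    return k * (y - target) ** 2

# Hypothetical example: target thickness 10.0 mm, $50 loss at +/- 0.5 mm.
for y in (10.0, 10.2, 10.5):
    print(f"y = {y:.1f} mm -> loss = ${taguchi_loss(y, 10.0, 50.0, 0.5):.2f}")
```

    The point of the quadratic form is that loss grows continuously as the response drifts from target, which is why reproducibility can matter as much as the mean response.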

  2. Experimental design for improved ceramic processing, emphasizing the Taguchi Method

    SciTech Connect

    Weiser, M.W. (Mechanical Engineering Dept.); Fong, K.B.

    1993-12-01

    Ceramic processing often requires substantial experimentation to produce acceptable product quality and performance. This is a consequence of ceramic processes depending upon a multitude of factors, some of which can be controlled and others that are beyond the control of the manufacturer. Statistical design of experiments is a procedure that allows quick, economical, and accurate evaluation of processes and products that depend upon several variables. Designed experiments are sets of tests in which the variables are adjusted methodically. A well-designed experiment yields unambiguous results at minimal cost. A poorly designed experiment may reveal little information of value even with complex analysis, wasting valuable time and resources. This article will review the most common experimental designs. This will include both nonstatistical designs and the much more powerful statistical experimental designs. The Taguchi Method developed by Genichi Taguchi will be discussed in some detail. The Taguchi method, based upon fractional factorial experiments, is a powerful tool for optimizing product and process performance.

  3. Spacecraft design optimization using Taguchi analysis

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1991-01-01

    The quality engineering methods of Dr. Genichi Taguchi, employing design of experiments, are important statistical tools for designing high quality systems at reduced cost. The Taguchi method was utilized to study several simultaneous parameter level variations of a lunar aerobrake structure to arrive at the lightest weight configuration. Finite element analysis was used to analyze the unique experimental aerobrake configurations selected by Taguchi method. Important design parameters affecting weight and global buckling were identified and the lowest weight design configuration was selected.

  4. Statistical analysis of sonochemical synthesis of SAPO-34 nanocrystals using Taguchi experimental design

    SciTech Connect

    Askari, Sima; Halladj, Rouein; Nazari, Mahdi

    2013-05-15

    Highlights: ► Sonochemical synthesis of SAPO-34 nanocrystals. ► Using Taguchi experimental design (L9) for optimizing the experimental procedure. ► The significant effects of all the ultrasonic parameters on the response. - Abstract: SAPO-34 nanocrystals with high crystallinity were synthesized by means of a sonochemical method. An L9 orthogonal array of the Taguchi method was implemented to investigate the effects of sonication conditions on the preparation of SAPO-34 with respect to the crystallinity of the final product phase. The experimental data establish that phase crystallinity is improved by increasing the ultrasonic power and the sonication temperature. In the case of ultrasonic irradiation time, however, an initial increase in crystallinity from 5 min to 15 min is followed by a decrease in crystallinity for longer sonication times.
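    For readers unfamiliar with the L9 layout used in this record, the sketch below builds one standard L9 (3^4) orthogonal array by modular arithmetic and checks its pairwise balance; the construction is generic and the column assignments are placeholders, not the sonication factors of the study.

```python
# Hedged sketch of a standard L9 (3^4) orthogonal array: two independent
# three-level columns plus two modular-arithmetic combinations of them.
# Column-to-factor assignments here are placeholders only.
from itertools import product

def l9():
    return [(i, j, (i + j) % 3, (i + 2 * j) % 3)
            for i, j in product(range(3), repeat=2)]

array = l9()
for run, levels in enumerate(array, start=1):
    print(run, levels)

# Orthogonality check: every pair of columns shows all 9 level combinations.
for a in range(4):
    for b in range(a + 1, 4):
        assert len({(row[a], row[b]) for row in array}) == 9
```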

  5. Parametric analysis of lithium oxyhalide spirally wound cells utilizing the Taguchi approach to experimental design

    SciTech Connect

    Takeuchi, E.S.; Size, P.J.

    1994-12-31

    The Taguchi Method of Experimental Design was utilized to parametrically assess the effects of four variables in cell configuration on performance of spirally wound lithium oxyhalide D cells. This approach utilizes fractional factorial designs requiring a fraction of the number of experiments required of full factorial experiments. The Taguchi approach utilizes ANOVA analysis for calculating the percent contribution of each factor to battery performance as well as main effects of each factor. The four factors investigated in this study were the electrolyte type, the electrolyte concentration, the depolarizer type, and the mechanical cell design. The effects of these four factors on 1A constant current discharge, low temperature discharge, start-up, and shelf-life were evaluated. The factor having the most significant effect on cell performance was the electrolyte type.
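    The percent-contribution calculation mentioned here can be sketched as follows; the tiny two-level design and the capacity readings are invented for illustration and are not the cell data of this study.

```python
# Hedged sketch of ANOVA percent contribution in a Taguchi analysis: for a
# balanced orthogonal array, each factor's sum of squares comes from its
# level means and is expressed as a share of the total sum of squares.
# The L4 design and the responses below are invented.
import numpy as np

design = np.array([[0, 0, 0],          # three two-level factors, four runs
                   [0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])
response = np.array([8.2, 7.5, 9.1, 8.8])   # hypothetical capacity values

grand_mean = response.mean()
ss_total = ((response - grand_mean) ** 2).sum()

for f, name in enumerate(["electrolyte", "concentration", "depolarizer"]):
    ss_factor = sum(
        (design[:, f] == level).sum()
        * (response[design[:, f] == level].mean() - grand_mean) ** 2
        for level in (0, 1))
    print(f"{name:13s} SS = {ss_factor:5.3f} "
          f"({100 * ss_factor / ss_total:4.1f}% contribution)")
```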

  6. Optimizing the spectrofluorimetric determination of cefdinir through a Taguchi experimental design approach.

    PubMed

    Abou-Taleb, Noura Hemdan; El-Wasseef, Dalia Rashad; El-Sherbiny, Dina Tawfik; El-Ashry, Saadia Mohamed

    2016-05-01

    The aim of this work is to optimize a spectrofluorimetric method for the determination of cefdinir (CFN) using the Taguchi method. The proposed method is based on the oxidative coupling reaction of CFN and cerium(IV) sulfate. The quenching effect of CFN on the fluorescence of the produced cerous ions is measured at an emission wavelength (λem) of 358 nm after excitation (λex) at 301 nm. The Taguchi orthogonal array L9 (3^4) was designed to determine the optimum reaction conditions. The results were analyzed using the signal-to-noise (S/N) ratio and analysis of variance (ANOVA). The optimal experimental conditions obtained from this study were 1 mL of 0.2% MBTH, 0.4 mL of 0.25% Ce(IV), a reaction time of 10 min and methanol as the diluting solvent. The calibration plot displayed a good linear relationship over a range of 0.5-10.0 µg/mL. The proposed method was successfully applied to the determination of CFN in bulk powder and pharmaceutical dosage forms. The results are in good agreement with those obtained using the comparison method. Finally, the Taguchi method provided a systematic and efficient methodology for this optimization, with considerably less effort than would be required for other optimization techniques. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26456088

  7. Molecular assay optimized by Taguchi experimental design method for venous thromboembolism investigation.

    PubMed

    Celani de Souza, Helder Jose; Moyses, Cinthia B; Pontes, Fabrício J; Duarte, Roberto N; Sanches da Silva, Carlos Eduardo; Alberto, Fernando Lopes; Ferreira, Ubirajara R; Silva, Messias Borges

    2011-01-01

    Two mutations - Factor V Leiden (1691G > A) and the 20210G > A on the Prothrombin gene - are key risk factors for a frequent and potentially fatal disorder called Venous Thromboembolism. These molecular alterations can be investigated using real-time Polymerase Chain Reaction (PCR) with Fluorescence Resonance Energy Transfer (FRET) probes and distinct DNA pools for both factors. The objective of this paper is to present an application of the Taguchi Experimental Design Method to determine the best parameter adjustment of a molecular assay process in order to obtain the best diagnostic result for Venous Thromboembolism investigation. The complete process contains six three-level factors, which would usually demand 729 experiments to obtain the final result if a full factorial array were used. In this research, a Taguchi L27 orthogonal array is chosen to optimize the analysis and reduce the number of experiments to 27 without degrading the accuracy of the final result. The application of this method can lessen the time and cost necessary to achieve the best operating condition for a required performance. The results were proven in practice and confirmed that the Taguchi method offers a good approach for improving clinical assay efficiency and effectiveness, even though clinical diagnostics can be based on the use of qualitative techniques. PMID:21867748

  8. Microcosm assays and Taguchi experimental design for treatment of oil sludge containing high concentration of hydrocarbons.

    PubMed

    Castorena-Cortés, G; Roldán-Carrillo, T; Zapata-Peñasco, I; Reyes-Avila, J; Quej-Aké, L; Marín-Cruz, J; Olguín-Lora, P

    2009-12-01

    Microcosm assays and a Taguchi experimental design were used to assess the biodegradation of an oil sludge produced by a gas processing unit. The study showed that the biodegradation of the sludge sample is feasible despite the high level of pollutants and complexity involved in the sludge. The physicochemical and microbiological characterization of the sludge revealed a high concentration of hydrocarbons (334,766+/-7001 mg kg-1 dry matter, d.m.) containing a variety of compounds with between 6 and 73 carbon atoms in their structure, whereas the concentration of Fe was 60,000 mg kg-1 d.m. and that of sulfide was 26,800 mg kg-1 d.m. A Taguchi L9 experimental design comprising 4 variables (moisture, nitrogen source, surfactant concentration and oxidant agent) at 3 levels was performed, proving that moisture and nitrogen source are the major variables that affect CO2 production and total petroleum hydrocarbons (TPH) degradation. The best experimental treatment yielded a TPH removal of 56,092 mg kg-1 d.m. The treatment was carried out under the following conditions: 70% moisture, no oxidant agent, 0.5% surfactant and NH4Cl as nitrogen source. PMID:19635663

  9. A Taguchi experimental design study of twin-wire electric arc sprayed aluminum coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Johnson, R.W.; Irons, G.; Kratochvil, W.R.; Riggs, W.L. II

    1992-08-01

    An experimental study was conducted on the twin-wire electric arc spraying of aluminum coatings. This aluminum wire system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic experiments. Experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical process parameters in a systematic design of experiments in order to display the range of processing conditions and their effect on the resultant coating. The coatings were characterized by hardness tests, optical metallography, and image analysis. The paper discusses coating qualities with respect to hardness, roughness, deposition efficiency, and microstructure. The study attempts to correlate the features of the coatings with the changes in operating parameters. A numerical model of the process is presented including gas, droplet, and coating dynamics.

  10. A Taguchi experimental design study of twin-wire electric arc sprayed aluminum coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Johnson, R.W.; Irons, G.; Kratochvil, W.R.; Riggs, W.L. II

    1992-01-01

    An experimental study was conducted on the twin-wire electric arc spraying of aluminum coatings. This aluminum wire system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic experiments. Experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical process parameters in a systematic design of experiments in order to display the range of processing conditions and their effect on the resultant coating. The coatings were characterized by hardness tests, optical metallography, and image analysis. The paper discusses coating qualities with respect to hardness, roughness, deposition efficiency, and microstructure. The study attempts to correlate the features of the coatings with the changes in operating parameters. A numerical model of the process is presented including gas, droplet, and coating dynamics.

  11. Parametric Appraisal of Process Parameters for Adhesion of Plasma Sprayed Nanostructured YSZ Coatings Using Taguchi Experimental Design

    PubMed Central

    Mantry, Sisir; Mishra, Barada K.; Chakraborty, Madhusudan

    2013-01-01

    This paper presents the application of the Taguchi experimental design in developing nanostructured yttria stabilized zirconia (YSZ) coatings by the plasma spraying process. This paper depicts the dependence of adhesion strength of as-sprayed nanostructured YSZ coatings on various process parameters, and the effect of those process parameters on performance output has been studied using Taguchi's L16 orthogonal array design. Particle velocity prior to impacting the substrate, stand-off distance, and particle temperature are found to be the most significant parameters affecting the bond strength. To achieve retention of nanostructure, the molten state of the nanoagglomerates (temperature and velocity) has been monitored using a particle diagnostics tool. A maximum adhesion strength of 40.56 MPa has been experimentally found by selecting optimum levels of the selected factors. The enhanced bond strength of the nano-YSZ coating may be attributed to higher interfacial toughness due to cracks being interrupted by adherent nanozones. PMID:24288490

  12. Optimization of Wear Behavior of Magnesium Alloy AZ91 Hybrid Composites Using Taguchi Experimental Design

    NASA Astrophysics Data System (ADS)

    Girish, B. M.; Satish, B. M.; Sarapure, Sadanand; Basawaraj

    2016-06-01

    In the present paper, a statistical investigation on the wear behavior of magnesium alloy (AZ91) hybrid metal matrix composites using the Taguchi technique has been reported. The composites were reinforced with SiC and graphite particles of average size 37 μm. The specimens were processed by the stir casting route. Dry sliding wear of the hybrid composites was tested on a pin-on-disk tribometer under dry conditions at different normal loads (20, 40, and 60 N), sliding speeds (1.047, 1.57, and 2.09 m/s), and compositions (1, 2, and 3 wt pct of each of SiC and graphite). The design of experiments approach using the Taguchi technique was employed to statistically analyze the wear behavior of the hybrid composites. Signal-to-noise ratio and analysis of variance were used to investigate the influence of the parameters on the wear rate.
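    A hedged sketch of the smaller-the-better signal-to-noise ratio commonly used for wear rate is given below; the replicate wear values are invented and are not the measurements reported in this study.

```python
# Hedged sketch of Taguchi's smaller-the-better S/N ratio,
#   S/N = -10 * log10( mean(y_i^2) ),
# computed from repeated wear-rate measurements of a single trial.
# Higher S/N means lower and more consistent wear. Values are illustrative.
import math

def sn_smaller_is_better(values):
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

trial_replicates = {
    "run 1": [2.1e-3, 2.3e-3, 2.2e-3],   # hypothetical wear rates, mm^3/m
    "run 2": [1.4e-3, 1.5e-3, 1.6e-3],
}
for run, ys in trial_replicates.items():
    print(run, f"S/N = {sn_smaller_is_better(ys):.2f} dB")
```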

  13. Neutralization of red mud with pickling waste liquor using Taguchi's design of experimental methodology.

    PubMed

    Rai, Suchita; Wasewar, Kailas L; Lataye, Dilip H; Mishra, Rajshekhar S; Puttewar, Suresh P; Chaddha, Mukesh J; Mahindiran, P; Mukhopadhyay, Jyoti

    2012-09-01

    'Red mud' or 'bauxite residue', a waste generated from alumina refineries, is highly alkaline in nature with a pH of 10.5-12.5. Red mud poses serious environmental problems such as alkali seepage in ground water and alkaline dust generation. One of the options to make red mud less hazardous and environmentally benign is its neutralization with acid or an acidic waste. Hence, in the present study, neutralization of alkaline red mud was carried out using a highly acidic waste (pickling waste liquor). Pickling waste liquor is a mixture of strong acids used for descaling or cleaning surfaces in the steel making industry. The aim of the study was to look into the feasibility of the neutralization process of the two wastes using Taguchi's design of experimental methodology. This would make both wastes less hazardous and safe for disposal. The effects of slurry solids, volume of pickling liquor, stirring time and temperature on the neutralization process were investigated. The analysis of variance (ANOVA) shows that the volume of the pickling liquor is the most significant parameter, followed by the quantity of red mud, with contributions of 69.18% and 18.48%, respectively. Under the optimized parameters, a pH value of 7 can be achieved by mixing the two wastes. About 25-30% of the total soda from the red mud is neutralized and alkalinity is reduced by 80-85%. The mineralogy and morphology of the neutralized red mud have also been studied. The data presented will be useful in view of the environmental concern of red mud disposal. PMID:22751850

  14. Optimization of α-amylase production by Bacillus subtilis RSKK96: using the Taguchi experimental design approach.

    PubMed

    Uysal, Ersin; Akcan, Nurullah; Baysal, Zübeyde; Uyar, Fikret

    2011-01-01

    In this study, the Taguchi experimental design was applied to optimize the conditions for α-amylase production by Bacillus subtilis RSKK96, which was purchased from Refik Saydam Hifzissihha Industry (RSHM). Four factors, namely, carbon source, nitrogen source, amino acid, and fermentation time, each at four levels, were selected, and an orthogonal array layout of L16 (4^5) was performed. The model equation obtained was validated experimentally at maximum casein (1%), corn meal (1%), and glutamic acid (0.01%) concentrations with an incubation time of 72 h in the presence of 1% inoculum density. Point prediction of the design showed that a maximum α-amylase production of 503.26 U/mg was achieved under optimal experimental conditions. PMID:21229466

  15. Vertically aligned N-doped CNTs growth using Taguchi experimental design

    NASA Astrophysics Data System (ADS)

    Silva, Ricardo M.; Fernandes, António J. S.; Ferro, Marta C.; Pinna, Nicola; Silva, Rui F.

    2015-07-01

    The Taguchi method with a parameter design L9 orthogonal array was implemented for optimizing the nitrogen incorporation in the structure of vertically aligned N-doped CNTs grown by thermal chemical vapor deposition (TCVD). The maximization of the ID/IG ratio of the Raman spectra was selected as the target value. As a result, the optimal deposition configuration was NH3 = 90 sccm, growth temperature = 825 °C and catalyst pretreatment time of 2 min, the first parameter having the main effect on nitrogen incorporation. A confirmation experiment with these values was performed, ratifying the predicted ID/IG ratio of 1.42. Scanning electron microscopy (SEM) characterization revealed a uniform, completely vertically aligned array of multiwalled CNTs which individually exhibit a bamboo-like structure, consisting of periodically curved graphitic layers, as depicted by high resolution transmission electron microscopy (HRTEM). The X-ray photoelectron spectroscopy (XPS) results indicated 2.00 at.% of N incorporation in the CNTs, with pyridine-like and graphite-like nitrogen as the predominant species.

  16. Metal recovery enhancement using Taguchi style experimentation

    SciTech Connect

    Wells, P.A.; Andreas, R.E.; Fox, T.M.

    1995-12-31

    In the remelting of scrap, the ultimate goal is to produce clean aluminum while minimizing metal losses. Recently, it has become more difficult to make significant recovery improvements in Reynolds' Reclamation Plants since metal recoveries were nearing the theoretical maximum. In an effort to gain a better understanding of the factors impacting Reynolds' remelting process, a series of experiments using a Taguchi-type design was performed. Specifically, the critical variables and interactions affecting metal recovery of shredded, delacquered Used Beverage Containers (UBC) melted in a side-well reverberatory furnace were examined. This furnace was equipped with plunger-style puddlers and metal circulation. Both delacquering and melting processes operated continuously with downtime only for necessary mechanical repairs. The experimental design consisted of an orthogonal array with eight trials, each using nominal 500,000 lb shred charge volumes. Final recovery results included molten output and metal easily recovered from dross generated during the test.

  17. Optimization of experimental parameters based on the Taguchi robust design for the formation of zinc oxide nanocrystals by solvothermal method

    SciTech Connect

    Yiamsawas, Doungporn; Boonpavanitchakul, Kanittha; Kangwansupamonkon, Wiyong

    2011-05-15

    Research highlights: → Taguchi robust design can be applied to study ZnO nanocrystal growth. → Spherical-like and rod-like shaped ZnO nanocrystals can be obtained from the solvothermal method. → The [NaOH]/[Zn2+] ratio plays the most important role in the aspect ratio of the prepared ZnO. -- Abstract: Zinc oxide (ZnO) nanoparticles and nanorods were successfully synthesized by a solvothermal process. Taguchi robust design was applied to study the factors which result in stronger ZnO nanocrystal growth. The factors which have been studied are the molar concentration ratio of sodium hydroxide and zinc acetate, the amount of polymer templates and the molecular weight of the polymer templates. Transmission electron microscopy and the X-ray diffraction technique were used to analyze the experimental results. The results show that the concentration ratio of sodium hydroxide and zinc acetate has the greatest effect on ZnO nanocrystal growth.

  18. Hydrothermal processing of hydroxyapatite nanoparticles—A Taguchi experimental design approach

    NASA Astrophysics Data System (ADS)

    Sadat-Shojai, Mehdi; Khorasani, Mohammad-Taghi; Jamshidi, Ahmad

    2012-12-01

    Chemical precipitation followed by hydrothermal processing is conventionally employed in the laboratory-scale synthesis of hydroxyapatite (HAp) and extensive information on its processing conditions has therefore been provided in the literature. However, knowledge about the influence of some operating parameters, especially those important for large-scale production, is still insufficient. A specific approach based on a Taguchi orthogonal array was therefore used to evaluate these parameters and to optimize them for a more effective synthesis. This approach allowed us to systematically determine the correlation between the operating factors and the powder quality. Analysis of signal-to-noise ratios revealed the great influence of temperature and pH on the characteristics of the powder. Additionally, the injection rate of one reagent into another was found to be the most important operating factor affecting the stoichiometric ratio of the powders. The as-prepared powders were also studied for their in-vitro bioactivity. The SEM images showed the accumulation of a new apatite-like phase on the surface of the powder along with an interesting morphological change after a 45-day incubation of the powder in SBF, indicating a promising bioactivity. Some results also showed the capability of the simple hydrothermal method for the synthesis of a lamellar structure without the help of any templating system.

  19. Assessing the applicability of the Taguchi design method to an interrill erosion study

    NASA Astrophysics Data System (ADS)

    Zhang, F. B.; Wang, Z. L.; Yang, M. Y.

    2015-02-01

    Full-factorial experimental designs have been used in soil erosion studies, but are time, cost and labor intensive, and sometimes they are impossible to conduct due to the increasing number of factors and their levels to consider. The Taguchi design is a simple, economical and efficient statistical tool that only uses a portion of the total possible factorial combinations to obtain the results of a study. Soil erosion studies that use the Taguchi design are scarce and no comparisons with full-factorial designs have been made. In this paper, a series of simulated rainfall experiments using a full-factorial design of five slope lengths (0.4, 0.8, 1.2, 1.6, and 2 m), five slope gradients (18%, 27%, 36%, 48%, and 58%), and five rainfall intensities (48, 62.4, 102, 149, and 170 mm h-1) were conducted. Validation of the applicability of a Taguchi design to interrill erosion experiments was achieved by extracting data from the full dataset according to a theoretical Taguchi design. The statistical parameters for the mean quasi-steady state erosion and runoff rates of each test, the optimum conditions for producing maximum erosion and runoff, and the main effect and percentage contribution of each factor obtained from the full-factorial and Taguchi designs were compared. Both designs generated almost identical results. Using the experimental data from the Taguchi design, it was possible to accurately predict the erosion and runoff rates under the conditions that had been excluded from the Taguchi design. All of the results obtained from analyzing the experimental data for both designs indicated that the Taguchi design could be applied to interrill erosion studies and could replace full-factorial designs. This would save time, labor and costs by generally reducing the number of tests to be conducted. Further work should test the applicability of the Taguchi design to a wider range of conditions.
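    The extraction of a Taguchi-style subset from a full-factorial dataset, as described above, can be sketched as follows; the Latin-square rule and the synthetic response are illustrative assumptions, not the actual erosion data or the specific array used by the authors.

```python
# Hedged sketch: pull an orthogonal 25-run subset out of a full 5x5x5
# factorial dataset with the Latin-square rule k = (i + j) mod 5, then
# compare factor-level means from the full and reduced designs.
# The response values are synthetic, not the measured erosion data.
from itertools import product
import random

random.seed(0)
levels = range(5)

full = {(i, j, k): 2 * i + 3 * j + 4 * k + random.gauss(0, 0.5)
        for i, j, k in product(levels, repeat=3)}

subset = {(i, j, (i + j) % 5): full[(i, j, (i + j) % 5)]
          for i, j in product(levels, repeat=2)}

def level_means(data, factor):
    return [sum(y for key, y in data.items() if key[factor] == lv)
            / sum(1 for key in data if key[factor] == lv)
            for lv in levels]

for f, name in enumerate(["slope length", "slope gradient", "rain intensity"]):
    print(name,
          "full:", [round(m, 2) for m in level_means(full, f)],
          "subset:", [round(m, 2) for m in level_means(subset, f)])
```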

  20. A Comparison of Central Composite Design and Taguchi Method for Optimizing Fenton Process

    PubMed Central

    Asghar, Anam; Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan

    2014-01-01

    In the present study, a comparison of central composite design (CCD) and the Taguchi method was established for Fenton oxidation. [Dye]ini, Dye : Fe+2, H2O2 : Fe+2, and pH were identified as control variables while COD and decolorization efficiency were selected as responses. An L9 orthogonal array and face-centered CCD were used for the experimental design. A maximum of 99% decolorization and 80% COD removal efficiency were obtained under optimum conditions. R squared values of 0.97 and 0.95 for CCD and the Taguchi method, respectively, indicate that both models are statistically significant and are in good agreement with each other. Furthermore, Prob > F less than 0.0500 and ANOVA results indicate the good fit of the selected model with the experimental results. Nevertheless, the possibility of ranking input variables in terms of percent contribution to the response value has made the Taguchi method a suitable approach for scrutinizing the operating parameters. For the present case, pH with percent contributions of 87.62% and 66.2% was ranked as the most contributing and significant factor. This finding of the Taguchi method was also verified by 3D contour plots of the CCD. Therefore, from this comparative study, it is concluded that the Taguchi method with 9 experimental runs and simple interaction plots is a suitable alternative to CCD for several chemical engineering applications. PMID:25258741

  1. Robust PID Parameter Design for Embedded Temperature Control System Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Suzuki, Arata; Sugimoto, Kenji

    This paper proposes a robust PID parameter design scheme using Taguchi's robust design method. This scheme is applied to an embedded PID temperature control system which is affected by outside (room) temperature. The effectiveness of this scheme is verified experimentally with a cooking household appliance.

  2. Application of Taguchi Philosophy for Optimization of Design Parameters in a Rectangular Enclosure with Triangular Fin Array

    NASA Astrophysics Data System (ADS)

    Dwivedi, Ankur; Das, Debasish

    2015-10-01

    In this study, an optimum parametric design yielding maximum heat transfer has been suggested using the Taguchi philosophy. This statistical approach has been applied to the results of an experimental parametric study conducted to investigate the influence of fin height (L), fin spacing (S) and Rayleigh number (Ra) on convection heat transfer from a triangular fin array within a vertically oriented rectangular enclosure. Taguchi's L9 (3^3) orthogonal array design has been adopted for three different levels of the influencing parameters. The goal of this study is to reach maximum heat transfer (i.e. Nusselt number). The dependence of optimum fin spacing on fin height has also been reported. The results proved the suitability of the Taguchi design approach in this kind of study, and the predictions by the method are in very good agreement with the experimental results. This paper also compares the application of the classical design approach with Taguchi's methodology used for determination of the optimum parametric design.

  3. A Taguchi study of the aeroelastic tailoring design process

    NASA Technical Reports Server (NTRS)

    Bohlmann, Jonathan D.; Scott, Robert C.

    1991-01-01

    A Taguchi study was performed to determine the important players in the aeroelastic tailoring design process and to find the best composition of the optimization's objective function. The Wing Aeroelastic Synthesis Procedure (TSO) was used to ascertain the effects that factors such as composite laminate constraints, roll effectiveness constraints, and built-in wing twist and camber have on the optimum, aeroelastically tailored wing skin design. The results show the Taguchi method to be a viable engineering tool for computational inquiries, and provide some valuable lessons about the practice of aeroelastic tailoring.

  4. A robust approach to human-computer interface design using the Taguchi method

    SciTech Connect

    Reed, B.M.

    1991-01-01

    The application of Dr. Genichi Taguchi's approach for design optimization, called Robust Design, to the design of human-computer interface software is investigated. The Taguchi method is used to select a near optimum set of interface design alternatives to improve user acceptance of the resulting interface software product with minimum sensitivity to uncontrollable noise caused by human behavioral characteristics. Design alternatives for interaction with personal micro-computers are identified. Several important and representative alternatives are chosen as design parameters for the Taguchi matrix experiment. A noise field with three human behavioral characteristics as noise factors was chosen as a representative noise array. Task accomplishment scenarios were developed for demonstration of the design parameters on an interactive human-computer interface. Experimentation was conducted using selected human subjects to study the effect of the various settings of the design parameters on user acceptance of the interface. Using the results of the matrix experiment, a near optimum set of design parameter values was selected.

  5. Using Taguchi robust design method to develop an optimized synthesis procedure of nanocrystalline cancrinite

    NASA Astrophysics Data System (ADS)

    Azizi, Seyed Naser; Asemi, Neda; Samadi-Maybodi, Abdolrouf

    2012-09-01

    In this study, perlite was used as a low-cost source of Si and Al for the synthesis of nanocrystalline cancrinite zeolite. The synthesis of cancrinite zeolite from perlite by alkaline hydrothermal treatment under saturated steam pressure was investigated. A statistical Taguchi design of experiments was employed to evaluate the effects of process variables such as type of aging, aging time and hydrothermal crystallization time on the crystallinity of the synthesized zeolite. The optimum conditions for maximum crystallinity of nanocrystalline cancrinite were obtained as microwave-assisted aging, 60 min aging time and 6 h hydrothermal crystallization time from statistical analysis of the experimental results using the Taguchi design. The synthesized samples were characterized by XRD, FT-IR and FE-SEM techniques. The results showed that microwave-assisted aging can shorten the crystallization time and reduce the crystal size to form nanocrystalline cancrinite zeolite.

  6. Fabrication and optimization of camptothecin loaded Eudragit S 100 nanoparticles by Taguchi L4 orthogonal array design

    PubMed Central

    Mahalingam, Manikandan; Krishnamoorthy, Kannan

    2015-01-01

    Introduction: The objective of this investigation was to design and optimize the experimental conditions for the fabrication of camptothecin (CPT) loaded Eudragit S 100 nanoparticles, and to understand the effect of various process parameters on the average particle size, particle size uniformity and surface area of the prepared polymeric nanoparticles using the Taguchi design. Materials and Methods: CPT loaded Eudragit S 100 nanoparticles were prepared by the nanoprecipitation method and characterized by a particle size analyzer. A Taguchi orthogonal array design was implemented to study the influence of seven independent variables on three dependent variables. Eight experimental trials involving the seven independent variables at higher and lower levels were generated by Design Expert. Results: The factorial design results showed that (a) except for β-cyclodextrin concentration, the parameters did not significantly influence the average particle size (R1); (b) except for sonication duration and aqueous phase volume, all other process parameters significantly influenced the particle size uniformity; (c) none of the process parameters significantly influenced the surface area. Conclusion: The R1, particle size uniformity and surface area of the prepared drug-loaded polymeric nanoparticles were found to be 120 nm, 0.237 and 55.7 m2/g, and the results correlated well with the data generated by the Taguchi design method. PMID:26258056
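    The eight-run layout covering seven two-level factors described in this record can be illustrated with a generic orthogonal-array construction; the Hadamard-based sketch below is one standard way to obtain such an array and is not necessarily the exact layout generated by the design software in the study.

```python
# Hedged sketch of an L8 (2^7) orthogonal array: 8 runs for 7 two-level
# factors, obtained from a Sylvester Hadamard matrix. Factor assignments
# are placeholders, not the formulation variables of the study.
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H8 = hadamard(8)
l8 = (H8[:, 1:] + 1) // 2     # drop the all-ones column, map -1/+1 to 0/1
print(l8)

# Balance check: each of the 7 columns holds each level exactly 4 times.
assert all(np.bincount(col, minlength=2).tolist() == [4, 4] for col in l8.T)
```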

  7. Experimental Validation for Hot Stamping Process by Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Fawzi Zamri, Mohd; Lim, Syh Kai; Razlan Yusoff, Ahmad

    2016-02-01

    The demand for reduced gas emissions, energy saving and safer vehicles has driven the development of Ultra High Strength Steel (UHSS) material. To strengthen UHSS material such as boron steel, it needs to undergo a hot stamping process in which it is heated at a certain temperature for a certain time. In this paper, the Taguchi method is applied to determine the appropriate parameters of thickness, heating temperature and heating time to achieve optimum strength of boron steel. The experiment is conducted by using a flat square shaped hot stamping tool with a tensile dog bone as the blank product. Then, the values of tensile strength and hardness are measured as responses. The results showed that lower thickness and higher heating temperature and heating time give higher strength and hardness for the final product. In conclusion, boron steel blanks are able to achieve up to 1200 MPa tensile strength and 650 HV of hardness.

  8. Optimized CVD production of CNT-based nanohybrids by Taguchi robust design.

    PubMed

    Santangelo, S; Lanza, M; Piperopoulos, E; Milone, C

    2012-03-01

    Taguchi's robust design method is for the first time employed to optimize many aspects of the production of nanohybrids based on C nanotubes by iron-catalyzed chemical vapor deposition in an i-C4H10 + H2 atmosphere. By analyzing the outcomes of the catalytic process in terms of selectivity, carbon yield, purity and crystalline arrangement of the hybrid-forming nanotubes, the influence of the following parameters is ranked: synthesis temperature (500-700 degrees C), support material (alumina, magnesia or sodium-exchanged montmorillonite), and calcination (450-750 degrees C) and reduction (500-700 degrees C) temperatures of the 15 wt% Fe-catalyst. In the experiments initially performed for this purpose, the growth process had, on average, scarce selectivity (2 on a scale of 1-5) and poor yield (130 wt%); carbonaceous deposits exhibited an unsatisfactory graphitization degree (Raman D/G intensity ratio > 1.5) and contained large amounts of metal impurities (14 wt%) and amorphous carbon (5 wt%). The indications emerging from the Taguchi approach to the process optimization are critically examined. The experimental conditions chosen for carrying out test experiments allow achieving excellent selectivity (5) or large yield (760 wt%), hybrids with well-graphitized nanotubes (D/G intensity ratio < 0.6), nearly free of metallic (0.3 wt%) or amorphous (0.4 wt%) inclusions, with the consequent possibility of satisfying the different requisites that the specific application to be addressed may require. PMID:22755069

  9. Formulation Development and Evaluation of Hybrid Nanocarrier for Cancer Therapy: Taguchi Orthogonal Array Based Design

    PubMed Central

    Tekade, Rakesh K.; Chougule, Mahavir B.

    2013-01-01

    Taguchi orthogonal array design is a statistical approach that helps to overcome limitations associated with time consuming full factorial experimental design. In this study, the Taguchi orthogonal array design was applied to establish the optimum conditions for bovine serum albumin (BSA) nanocarrier (ANC) preparation. Taguchi method with L9 type of robust orthogonal array design was adopted to optimize the experimental conditions. Three key dependent factors namely, BSA concentration (% w/v), volume of BSA solution to total ethanol ratio (v : v), and concentration of diluted ethanolic aqueous solution (% v/v), were studied at three levels 3%, 4%, and 5% w/v; 1 : 0.75, 1 : 0.90, and 1 : 1.05 v/v; 40%, 70%, and 100% v/v, respectively. The ethanolic aqueous solution was used to impart less harsh condition for desolvation and attain controlled nanoparticle formation. The interaction plot studies inferred the ethanolic aqueous solution concentration to be the most influential parameter that affects the particle size of nanoformulation. This method (BSA, 4% w/v; volume of BSA solution to total ethanol ratio, 1 : 0.90 v/v; concentration of diluted ethanolic solution, 70% v/v) was able to successfully develop Gemcitabine (G) loaded modified albumin nanocarrier (M-ANC-G) of size 25.07 ± 2.81 nm (ζ = −23.03 ± 1.015 mV) as against to 78.01 ± 4.99 nm (ζ = −24.88 ± 1.37 mV) using conventional method albumin nanocarrier (C-ANC-G). Hybrid nanocarriers were generated by chitosan layering (solvent gelation technique) of respective ANC to form C-HNC-G and M-HNC-G of sizes 125.29 ± 5.62 nm (ζ = 12.01 ± 0.51 mV) and 46.28 ± 2.21 nm (ζ = 15.05 ± 0.39 mV), respectively. Zeta potential, entrapment, in vitro release, and pH-based stability studies were investigated and influence of formulation parameters are discussed. Cell-line-based cytotoxicity assay (A549 and H460 cells) and cell internalization assay (H460 cell line) were

  10. Taguchi statistical design and analysis of cleaning methods for spacecraft materials

    NASA Technical Reports Server (NTRS)

    Lin, Y.; Chung, S.; Kazarians, G. A.; Blosiu, J. O.; Beaudet, R. A.; Quigley, M. S.; Kern, R. G.

    2003-01-01

    In this study, we have extensively tested various cleaning protocols. The variant parameters included the type and concentration of solvent, type of wipe, pretreatment conditions, and various rinsing systems. Taguchi statistical method was used to design and evaluate various cleaning conditions on ten common spacecraft materials.

  11. Thermochemical hydrolysis of macroalgae Ulva for biorefinery: Taguchi robust design method

    PubMed Central

    Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander

    2016-01-01

    Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using Taguchi orthogonal arrays experimental design and Partial Least Square Regression, we investigated the impact of change and the comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrates release from green macroalgae from Ulva genus, a promising biorefinery feedstock. The average density of hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharaides released from Ulva biomass. We also found that within the tested range of parameters, hydrolysis with 121 °C, 30 min 2% Acid, 15% Solids could lead to the highest yields of conversion: 54.134–57.500 gr ethanol kg−1 Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis. PMID:27291594
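    A hedged sketch of combining an orthogonal-array design with Partial Least Squares regression, as the analysis above describes, is given below; it uses scikit-learn's PLSRegression, and the coded L9 design plus the sugar-release responses are invented for illustration.

```python
# Hedged sketch: rank process factors by fitting a PLS regression model to
# orthogonal-array data. The coded L9-style design and the responses are
# invented; the factor names mirror those discussed above but the values
# are not from the study.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Coded levels (-1, 0, +1) for temperature, time, %acid, %solids over 9 runs.
X = np.array([[-1, -1, -1, -1],
              [-1,  0,  0,  1],
              [-1,  1,  1,  0],
              [ 0, -1,  0,  0],
              [ 0,  0,  1, -1],
              [ 0,  1, -1,  1],
              [ 1, -1,  1,  1],
              [ 1,  0, -1,  0],
              [ 1,  1,  0, -1]], dtype=float)
y = np.array([12.0, 15.5, 21.0, 14.2, 18.9, 13.1, 19.5, 16.8, 15.0])

model = PLSRegression(n_components=2).fit(X, y)
coefs = np.ravel(model.coef_)            # one coefficient per factor
for name, c in zip(["temperature", "time", "%acid", "%solids"], coefs):
    print(f"{name:12s} effect ~ {c:+.2f}")
```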

  12. Thermochemical hydrolysis of macroalgae Ulva for biorefinery: Taguchi robust design method.

    PubMed

    Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander

    2016-01-01

    Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using Taguchi orthogonal arrays experimental design and Partial Least Square Regression, we investigated the impact of change and the comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrates release from green macroalgae from Ulva genus, a promising biorefinery feedstock. The average density of hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharaides released from Ulva biomass. We also found that within the tested range of parameters, hydrolysis with 121 °C, 30 min 2% Acid, 15% Solids could lead to the highest yields of conversion: 54.134-57.500 gr ethanol kg(-1) Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis. PMID:27291594

  13. Taguchi Approach to Design Optimization for Quality and Cost: An Overview

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.

    1990-01-01

    Calibrations to the existing cost of doing business in space indicate that establishing a human presence on the Moon and Mars with the Space Exploration Initiative (SEI) will require resources felt by many to be more than the national budget can afford. In order for SEI to succeed, we must actually design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business. Therefore, new philosophy and technology must be employed to design and produce reliable, high quality space systems at low cost. In recognizing the need to reduce cost and improve quality and productivity, the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM). TQM is a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), are among the most important statistical tools of TQM for designing high quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost. Taguchi methods have been used successfully in Japan and the United States in designing reliable, high quality products at low cost in such areas as automobiles and consumer electronics. However, these methods are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, describe the current state of applications and their role in identifying cost sensitive design parameters.

  14. Optimization of ultrasound assisted extraction of anthocyanins from red cabbage using Taguchi design method.

    PubMed

    Ravanfar, Raheleh; Tamadon, Ali Mohammad; Niakousari, Mehrdad

    2015-12-01

    There is a growing demand for developing suitable and more efficient extraction of active compounds from plants, and ultrasound is one of these novel methodologies. Moreover, the experimental set up to reach an appropriate condition for an optimum yield is demanding and time consuming. In the present study, a Taguchi L9 orthogonal design was applied to optimize the process parameters (output power, time, temperature and pulse mode) for ultrasound assisted extraction of anthocyanins from red cabbage, and the resulting yield of anthocyanin was measured by the pH differential method. The statistical analysis revealed that the most important factors contributing to the extraction efficiency were time, temperature and power, respectively, and the optimum condition was 30 min, 15 °C and 100 W, which resulted in a maximum anthocyanin yield of about 20.9 mg/L. The theoretical result was confirmed experimentally by carrying out trials at the optimum condition and evaluating the actual yield. PMID:26604387

  15. Multidisciplinary design of a rocket-based combined cycle SSTO launch vehicle using Taguchi methods

    NASA Astrophysics Data System (ADS)

    Olds, John R.; Walberg, Gerald D.

    1993-02-01

    Results are presented from the optimization process of a winged-cone configuration SSTO launch vehicle that employs a rocket-based ejector/ramjet/scramjet/rocket operational mode variable-cycle engine. The Taguchi multidisciplinary parametric-design method was used to evaluate the effects of simultaneously changing a total of eight design variables, rather than changing them one at a time as in conventional tradeoff studies. A combination of design variables was in this way identified which yields very attractive vehicle dry and gross weights.

  16. Multidisciplinary design of a rocket-based combined cycle SSTO launch vehicle using Taguchi methods

    NASA Technical Reports Server (NTRS)

    Olds, John R.; Walberg, Gerald D.

    1993-01-01

    Results are presented from the optimization process of a winged-cone configuration SSTO launch vehicle that employs a rocket-based ejector/ramjet/scramjet/rocket operational mode variable-cycle engine. The Taguchi multidisciplinary parametric-design method was used to evaluate the effects of simultaneously changing a total of eight design variables, rather than changing them one at a time as in conventional tradeoff studies. A combination of design variables was in this way identified which yields very attractive vehicle dry and gross weights.

  17. Application of Taguchi Design and Response Surface Methodology for Improving Conversion of Isoeugenol into Vanillin by Resting Cells of Psychrobacter sp. CSW4

    PubMed Central

    Ashengroph, Morahem; Nahvi, Iraj; Amini, Jahanshir

    2013-01-01

    For all industrial processes, modelling, optimisation and control are the keys to enhance productivity and ensure product quality. In the current study, the optimization of process parameters for improving the conversion of isoeugenol to vanillin by Psychrobacter sp. CSW4 was investigated by means of the Taguchi approach and a Box-Behnken statistical design under resting cell conditions. The Taguchi design was employed for screening the significant variables in the bioconversion medium. Sequentially, Box-Behnken design experiments under Response Surface Methodology (RSM) were used for further optimization. Four factors (isoeugenol, NaCl, biomass and tween 80 initial concentrations), which have significant effects on vanillin yield, were selected from ten variables by the Taguchi experimental design. With the regression coefficient analysis in the Box-Behnken design, a relationship between vanillin production and the four significant variables was obtained, and the optimum levels of the four variables were as follows: initial isoeugenol concentration 6.5 g/L, initial tween 80 concentration 0.89 g/L, initial NaCl concentration 113.2 g/L and initial biomass concentration 6.27 g/L. Under these optimized conditions, the maximum predicted concentration of vanillin was 2.25 g/L. These optimized values of the factors were validated in a triplicate shaking flask study, and an average vanillin concentration of 2.19 g/L, corresponding to a molar yield of 36.3%, was obtained after a 24 h bioconversion. The present work is the first to report the application of Taguchi design and Response Surface Methodology for optimizing the bioconversion of isoeugenol into vanillin under resting cell conditions. PMID:24250648
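    The two-stage strategy described above (Taguchi screening followed by a Box-Behnken response surface design) can be sketched generically; the coded Box-Behnken generator below is standard, while the factor names are simply the four variables retained in this study and the run order and center-point count are assumptions.

```python
# Hedged sketch of generating a coded Box-Behnken design for the four
# factors kept after Taguchi screening: each pair of factors is set to
# +/-1 while the others stay at 0, plus a few center points.
from itertools import combinations, product

def box_behnken(k, center_points=3):
    runs = []
    for pair in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * k
            row[pair[0]], row[pair[1]] = a, b
            runs.append(row)
    runs.extend([[0] * k for _ in range(center_points)])
    return runs

factors = ["isoeugenol", "tween 80", "NaCl", "biomass"]
design = box_behnken(len(factors))
print(len(design), "runs")               # 24 edge runs + 3 center points
for row in design[:5]:
    print(dict(zip(factors, row)))
```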

  18. Mixed matrix membrane application for olive oil wastewater treatment: process optimization based on Taguchi design method.

    PubMed

    Zirehpour, Alireza; Rahimpour, Ahmad; Jahanshahi, Mohsen; Peyravi, Majid

    2014-01-01

    Olive oil mill wastewater (OMW) is a concentrated effluent with a high organic load. It has high levels of organic chemical oxygen demand (COD) and phenolic compounds. This study presents a unique process to treat OMW. The process uses ultrafiltration (UF) membranes modified by functionalized multi-wall carbon nanotubes (F-MWCNT). The modifying tubes have an inner diameter of 15-30 nm and are added to improve the performance of the membrane in the OMW treatment process. Tests were done to evaluate the following operating parameters of the UF system: pressure, pH and temperature; the evaluated responses were permeate flux, flux decline, COD removal and total phenol rejection. The Taguchi robust design method was applied for an optimization evaluation of the experiments. Analysis of variance (ANOVA) was used to determine the most significant parameters affecting permeate flux, flux decline, COD removal and total phenol rejection. Results demonstrated coagulation and pH as the most important factors affecting permeate flux of the UF. Moreover, pH and the F-MWCNT UF had significant positive effects on flux decline, COD removal and total phenol rejection. Based on the optimum conditions determined by the Taguchi method, the measured permeate flux, flux decline, COD removal and total phenol rejection were about 21.2 kg/m2 h, 12.6%, 72.6% and 89.5%, respectively. These results were in good agreement with those predicted by the Taguchi method (i.e. 22.8 kg/m2 h, 11.9%, 75.8% and 94.7%, respectively). The mechanical performance of the membrane and its applicability to high organic wastewater treatment were found to be strong. PMID:24291584

  19. Applying Taguchi Methods To Brazing Of Rocket-Nozzle Tubes

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Bellows, William J.; Deily, David C.; Brennan, Alex; Somerville, John G.

    1995-01-01

    Report describes experimental study in which Taguchi Methods applied with view toward improving brazing of coolant tubes in nozzle of main engine of space shuttle. Dr. Taguchi's parameter design technique used to define proposed modifications of brazing process reducing manufacturing time and cost by reducing number of furnace brazing cycles and number of tube-gap inspections needed to achieve desired small gaps between tubes.

  20. Workspace design for crane cabins applying a combined traditional approach and the Taguchi method for design of experiments.

    PubMed

    Spasojević Brkić, Vesna K; Veljković, Zorica A; Golubović, Tamara; Brkić, Aleksandar Dj; Kosić Šotić, Ivana

    2016-01-01

    Procedures in the development process of crane cabins are arbitrary and subjective. Since approximately 42% of incidents in the construction industry are linked to them, there is a need to collect fresh anthropometric data and provide additional recommendations for design. In this paper, dimensioning of the crane cabin interior space was carried out using a sample of 64 crane operators' anthropometric measurements, in the Republic of Serbia, by measuring workspace with 10 parameters using nine measured anthropometric data from each crane operator. This paper applies experiments run via full factorial designs using a combined traditional and Taguchi approach. The experiments indicated which design parameters are influenced by which anthropometric measurements and to what degree. The results are expected to be of use for crane cabin designers and should assist them to design a cabin that may lead to less strenuous sitting postures and fatigue for operators, thus improving safety and accident prevention. PMID:26652099

  1. Using Taguchi method to design LED lamp for zonal lumen density requirement of ENERGY STAR

    NASA Astrophysics Data System (ADS)

    Yu, Jen-Lung; Chen, Yi-Yung; Whang, Allen Jong-Woei; Ma, Chi-Tang

    2011-10-01

    In the recent trend, LEDs have begun to replace traditional light sources since they have many advantages, such as long lifespan, low power consumption, environmentally mercury-free operation, broad color gamut, and so on. According to the zonal lumen density requirement of ENERGY STAR, we design a triangular-prism structure for an LED light tube. The optical structure of current LED light tubes consists of an array of LEDs and a semi-cylindrical diffuser, in which the intensity distribution of the LEDs is Lambertian and the characteristics of the diffuser are BTDF: 63%, transmission: 27%, and absorption: 10%. We design the triangular-prism structure at both sides of the semi-circular diffuser to control the wide-angle light and use the Taguchi method to optimize the parameters of the structure so that 10.41% of the total flux lights the area between 90 degrees and 135 degrees while total internal reflection is avoided. According to the optical simulation results, 89.59% of the total flux is within 90 degrees and 10.41% of the total flux is between 90 degrees and 135 degrees, which matches the Solid-State Lighting (SSL) Criteria V. 1.1 of ENERGY STAR.

  2. Mathematical modeling and analysis of EDM process parameters based on Taguchi design of experiments

    NASA Astrophysics Data System (ADS)

    Laxman, J.; Raj, K. Guru

    2015-12-01

    Electro Discharge Machining is a process used for machining very hard metals, deep and complex shapes by metal erosion in all types of electro-conductive materials. The metal is removed through the action of an electric discharge of short duration and high current density between the tool and the work piece. The eroded metal on the surface of both the work piece and the tool is flushed away by the dielectric fluid. The objective of this work is to develop a mathematical model for an Electro Discharge Machining process which provides the necessary equations to predict the metal removal rate, electrode wear rate and surface roughness. Regression analysis is used to investigate the relationship between the various process parameters. The input parameters are peak current, pulse on time, pulse off time and tool lift time, and the metal removal rate, electrode wear rate and surface roughness are the responses. Experiments are conducted on a titanium superalloy based on the Taguchi design of experiments, i.e. an L27 orthogonal array.
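    A hedged sketch of the regression step described in this record is shown below: a first-order model of metal removal rate against the process parameters is fitted by ordinary least squares. The handful of runs and responses are an invented fragment, not the study's L27 data.

```python
# Hedged sketch of regression modelling on Taguchi-design EDM data:
#   MRR ~ b0 + b1*Ip + b2*Ton + b3*Toff + b4*Tlift,
# fitted with ordinary least squares. The runs and responses are invented.
import numpy as np

# Columns: peak current (A), pulse on (us), pulse off (us), tool lift (s)
X = np.array([[ 6, 100, 20, 0.2],
              [ 6, 200, 40, 0.4],
              [ 9, 100, 40, 0.6],
              [ 9, 200, 60, 0.2],
              [12, 300, 20, 0.4],
              [12, 300, 60, 0.6]], dtype=float)
mrr = np.array([4.1, 5.0, 6.8, 7.9, 11.2, 10.4])   # invented, mm^3/min

A = np.column_stack([np.ones(len(X)), X])          # add intercept column
coeffs, *_ = np.linalg.lstsq(A, mrr, rcond=None)
for name, c in zip(["intercept", "peak current", "pulse on",
                    "pulse off", "tool lift"], coeffs):
    print(f"{name:12s} {c:+.4f}")

new_run = np.array([1.0, 9, 150, 30, 0.3])         # intercept + settings
print("predicted MRR at (9 A, 150 us, 30 us, 0.3 s):",
      round(float(new_run @ coeffs), 2))
```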

  3. Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using Taguchi's Robust Design Methodology

    NASA Astrophysics Data System (ADS)

    Nath, Nayani Kishore

    2016-06-01

    Throat back up liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The throat back up liners are made with E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of the process parameters of the tape winding process to achieve better insulative resistance using Taguchi's robust design methodology. In this method, four control factors (machine speed, roller pressure, tape tension and tape temperature) were investigated for the tape winding process. The presented work studies the cogency and acceptability of Taguchi's methodology in the manufacturing of throat back up liners. The quality characteristic identified was back wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of the four different control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed and successfully used to achieve the minimum back wall temperature of the throat back up liners. The enhancement in performance of the throat back up liners was observed by carrying out oxy-acetylene tests. The influence of back wall temperature on the performance of the throat back up liners was verified by a ground firing test.

  4. An Exploratory Exercise in Taguchi Analysis of Design Parameters: Application to a Shuttle-to-space Station Automated Approach Control System

    NASA Technical Reports Server (NTRS)

    Deal, Don E.

    1991-01-01

    The chief goals of the summer project have been twofold: first, for my host group and myself to learn as much of the working details of Taguchi analysis as possible in the time allotted, and, secondly, to apply the methodology to a design problem with the intention of establishing a preliminary set of near-optimal (in the sense of producing a desired response) design parameter values from among a large number of candidate factor combinations. The selected problem is concerned with determining design factor settings for an automated approach program which is to have the capability of guiding the Shuttle into the docking port of the Space Station under controlled conditions so as to meet and/or optimize certain target criteria. The candidate design parameters under study were glide path (i.e., approach) angle, path intercept and approach gains, and minimum impulse bit mode (a parameter which defines how Shuttle jets shall be fired). Several performance criteria were of concern: terminal relative velocity at the instant the two spacecraft are mated; docking offset; number of Shuttle jet firings in certain specified directions (of interest due to possible plume impingement on the Station's solar arrays); and total RCS (a measure of the energy expended in performing the approach/docking maneuver). In the material discussed here, we have focused on a single performance criterion: total RCS. An analysis of the possibility of employing a multiobjective function composed of a weighted sum of the various individual criteria has been undertaken, but is, at this writing, incomplete. Results from the Taguchi statistical analysis indicate that only three of the original four posited factors are significant in affecting the RCS response. A comparison of model simulation output (via Monte Carlo) with predictions based on estimated factor effects inferred through the Taguchi experiment array data suggested acceptable or close agreement between the two except at the predicted optimum

  5. Use of Taguchi Design of Experiments to Determine ALPLS Ascent Delta-V Sensitivities and Total Mass Sensitivities to Release Conditions and Vehicle Parameters

    NASA Technical Reports Server (NTRS)

    Carrasco, Hector Ramon

    1991-01-01

    The objective of this study is to evaluate the use of Taguchi's Design of Experiment Methods to improve the effectiveness of this and future parametric studies. Taguchi Methods will be applied in addition to the typical approach to provide a mechanism for comparing the results and the cost or effort necessary to complete the studies. It is anticipated that the results of this study should include an improved systematic analysis process, an increase in the information obtained at a lower cost, and a more robust, cost effective vehicle design.

  6. Use of Taguchi design of experiments to optimize and increase robustness of preliminary designs

    NASA Technical Reports Server (NTRS)

    Carrasco, Hector R.

    1992-01-01

    The research performed this summer includes the completion of work begun last summer in support of the Air Launched Personnel Launch System parametric study, providing support on the development of the test matrices for the plume experiments in the Plume Model Investigation Team Project, and aiding in the conceptual design of a lunar habitat. After the conclusion of last year's Summer Program, the Systems Definition Branch continued the Air Launched Personnel Launch System (ALPLS) study by running three experiments defined by L27 orthogonal arrays. Although the data were evaluated during the academic year, the analysis of variance and the final project review were completed this summer. The Plume Model Investigation Team (PLUMMIT) was formed by the Engineering Directorate to develop a consensus position on plume impingement loads and to validate plume flowfield models. In order to obtain a large number of individual correlated data sets for model validation, a series of plume experiments was planned. A preliminary 'full factorial' test matrix indicated that 73,024 jet firings would be necessary to obtain all of the information requested. As this was approximately 100 times more firings than the scheduled use of Vacuum Chamber A would permit, considerable effort was needed to reduce the test matrix and optimize it with respect to the specific objectives of the program. Part of the First Lunar Outpost Project deals with the Lunar Habitat. Requirements for the habitat include radiation protection, a safe haven for occasional solar flare storms, and an airlock module, as well as consumables to support 34 extravehicular activities during a 45-day mission. The objective of the proposed work was to collaborate with the Habitat Team on the development and reusability of the Logistics Modules.

  7. Estudio numérico y experimental del proceso de soldeo MIG sobre la aleación 6063-T5 utilizando el método de Taguchi

    NASA Astrophysics Data System (ADS)

    Meseguer Valdenebro, Jose Luis

    Electric arc welding processes are among the most widely used techniques in the manufacture of mechanical components in modern industry. These processes have been adapted to current needs, becoming a flexible and versatile way to manufacture. Numerical results for the welding process are validated experimentally. The three numerical methods most commonly used today are the finite difference method, the finite element method and the finite volume method. The most widely used numerical method for the modeling of welded joints is the finite element method, because it adapts well to the geometric and boundary conditions and because a variety of commercial programs use it as a calculation basis. This thesis presents an experimental study of a welded joint produced by the MIG welding process on aluminum alloy 6063-T5. The numerical model is validated experimentally by applying the finite element method through the calculation program ANSYS. The experimental results in this work are the cooling curves, the critical cooling time t4/3, the weld bead geometry, the microhardness obtained in the welded joint and in the heat affected zone of the base metal, the process dilution, and the critical areas intersected between the cooling curves and the TTP curve. The numerical results obtained in this thesis are the thermal cycle curves, which represent both the heating to maximum temperature and the subsequent cooling. The critical cooling time t4/3 and the thermal efficiency of the process are calculated, and the bead geometry obtained experimentally is represented. The heat affected zone is obtained by differentiating the zones that are found at different temperatures, together with the critical areas intersected between the cooling curves and the TTP curve. To conclude this doctoral thesis, an optimization of the welding parameters has been conducted by means of the Taguchi method in order to obtain an

  8. Taguchi optimisation of ELISA procedures.

    PubMed

    Jeney, C; Dobay, O; Lengyel, A; Adám, E; Nász, I

    1999-03-01

    We propose a new method in the field of ELISA optimization using an experimental design called the Taguchi method. This can be used to compare the net effects of different conditions, which can be both qualitative and quantitative in nature. The method reduces the effects of the interactions of the optimized variables, making it possible to access the optimum conditions even in cases where there are large interactions between the variables of the assay. Furthermore, the proposed special assignment of factors makes it possible to calculate the biochemical parameters of the ELISA procedure carried out under optimum conditions. Thus, the calibration curve, the sensitivity of the optimum assay, and the intra-assay and inter-assay variability can be estimated. The method is fast, accessing the results in one step, compared to the traditional, time-consuming 'one-step-at-a-time' method. We exemplify the procedure with a method to optimize the detection of scFv (single-chain variable fragment) phages by ELISA. All the necessary calculations can be carried out by a spreadsheet program without any special statistical knowledge. PMID:10089092

  9. Formulation and optimization of solid lipid nanoparticle formulation for pulmonary delivery of budesonide using Taguchi and Box-Behnken design

    PubMed Central

    Emami, J.; Mohiti, H.; Hamishehkar, H.; Varshosaz, J.

    2015-01-01

    Budesonide is a potent non-halogenated corticosteroid with high anti-inflammatory effects. The lungs are an attractive route for non-invasive drug delivery, with advantages for both systemic and local applications. The aim of the present study was to develop, characterize and optimize a solid lipid nanoparticle system to deliver budesonide to the lungs. Budesonide-loaded solid lipid nanoparticles were prepared by the emulsification-solvent diffusion method. The impact of various processing variables, including surfactant type and concentration, lipid content, organic and aqueous phase volumes, and sonication time, was assessed on the particle size, zeta potential, entrapment efficiency, loading percent and mean dissolution time. A Taguchi design with 12 formulations along with a Box-Behnken design with 17 formulations was developed. The impact of each factor upon the eventual responses was evaluated, and the optimized formulation was finally selected. The size and morphology of the prepared nanoparticles were studied using scanning electron microscopy. Based on the optimization made by Design Expert 7® software, a formulation made of glycerol monostearate, 1.2% polyvinyl alcohol (PVA), a lipid/drug weight ratio of 10 and a sonication time of 90 s was selected. The particle size, zeta potential, entrapment efficiency, loading percent, and mean dissolution time of the adopted formulation were predicted and confirmed to be 218.2 ± 6.6 nm, -26.7 ± 1.9 mV, 92.5 ± 0.52%, 5.8 ± 0.3%, and 10.4 ± 0.29 h, respectively. Since the preparation and evaluation of the selected formulation within the laboratory yielded acceptable results with low error percent, the modeling and optimization were justified. The optimized formulation co-spray dried with lactose (hybrid microparticles) displayed a desirable fine particle fraction, mass median aerodynamic diameter (MMAD), and geometric standard deviation of 49.5%, 2.06 μm, and 2.98, respectively. Our results provide fundamental data for the
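
    The record above screens variables with a Taguchi array and then optimizes with a Box-Behnken design. As a reminder of what a Box-Behnken layout looks like, the sketch below constructs the classical three-factor design in coded units (twelve edge-midpoint runs plus center points); it is generic and not tied to the budesonide formulation variables.

```python
from itertools import combinations

def box_behnken(n_factors, n_center=3):
    """Three-level Box-Behnken design in coded units (-1, 0, +1)."""
    runs = []
    # For every pair of factors, run a 2x2 factorial at +/-1 with the rest held at 0.
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * n_factors
                row[i], row[j] = a, b
                runs.append(row)
    # Replicated center points estimate pure error and curvature.
    runs.extend([[0] * n_factors for _ in range(n_center)])
    return runs

for run in box_behnken(3):
    print(run)   # 12 edge runs + 3 center points = 15 runs for 3 factors
```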

  10. An application of robust parameter design using an alternative to Taguchi methods

    SciTech Connect

    Abate, M.L.; Morrow, M.C.; Kuczek, T.

    1996-11-01

    The factors of interest in designing a product or process can generally be classified into two categories, controllable and uncontrollable. Controllable (or control) factors represent those factors which can be regulated. Examples of control factors include the choice of material, flow rates, processing pressures, times and temperatures. Uncontrollable (noise) factors are those that are either difficult, impossible or too expensive to control during actual production or use. Examples of noise factors are environmental conditions such as ambient temperature or humidity, process parameters which are dictated by an outside source such as end user demand, and usage factors such as how long and at what temperature a consumer stores a product. As compared to the current Taguchi approach, a new design method is described in this paper which provides greater flexibility in the design of the experiment, utilizes a more meaningful performance statistic, and lends itself to a better understanding of the product or process.

  11. Workbook for Taguchi Methods for Product Quality Improvement.

    ERIC Educational Resources Information Center

    Zarghami, Ali; Benbow, Don

    Taguchi methods are product quality improvement methods that analyze the major contributors to variation and how they can be controlled to reduce variability and poor performance. In this approach, knowledge is used to shorten testing. Taguchi methods are concerned with process improvement rather than with process measurement. This manual is designed to be used…

  12. An Experimental Investigation into the Optimal Processing Conditions for the CO2 Laser Cladding of 20 MnCr5 Steel Using Taguchi Method and ANN

    NASA Astrophysics Data System (ADS)

    Mondal, Subrata; Bandyopadhyay, Asish; Pal, Pradip Kumar

    2010-10-01

    This paper presents the prediction and evaluation of the laser clad profile formed by a CO2 laser, applying the Taguchi method and an artificial neural network (ANN). Laser cladding is a surface modification technology in which desired surface characteristics of a component, such as good corrosion resistance, wear resistance and hardness, can be achieved. A laser is used as the heat source to melt an anti-corrosive Inconel-625 (superalloy) powder to form a coating on a 20 MnCr5 substrate. A parametric study of the technique is also attempted here. The data obtained from experiments have been used to develop a linear regression equation and then a neural network model. Moreover, the data obtained from the regression equations have been used as supporting data to train the neural network. The ANN is used to establish the relationship between the input and output parameters of the process. The established ANN model is then indirectly integrated with the optimization technique. The developed neural network model shows a good degree of approximation to the experimental data. In order to obtain the combination of process parameters, such as laser power, scan speed and powder feed rate, for which the output parameters become optimum, the experimental data have been used to develop response surfaces.

  13. Taguchi methods in electronics: A case study

    NASA Technical Reports Server (NTRS)

    Kissel, R.

    1992-01-01

    Total Quality Management (TQM) is becoming more important as a way to improve productivity. One of the technical aspects of TQM is a system called the Taguchi method. This is an optimization method that, with a few precautions, can reduce test effort by an order of magnitude over conventional techniques. The Taguchi method is specifically designed to minimize a product's sensitivity to uncontrollable system disturbances such as aging, temperature, voltage variations, etc., by simultaneously varying both design and disturbance parameters. The analysis produces an optimum set of design parameters. A 3-day class on the Taguchi method was held at the Marshall Space Flight Center (MSFC) in May 1991. A project was needed as a follow-up after the class was over, and the motor controller was selected at that time. Exactly how to proceed was the subject of discussion for some months. It was not clear exactly what to measure, and design kept getting mixed with optimization. There was even some discussion about why the Taguchi method should be used at all.

  14. The use of experimental designs for corrosive oilfield systems

    SciTech Connect

    Biagiotti, S.F. Jr.; Frost, R.H.

    1997-08-01

    A Design of Experiment approach was used to investigate the effect of hydrogen sulfide, carbon dioxide and brine composition on the corrosion rate of carbon steel. Three of the most common experimental design approaches (Full Factorial, Taguchi L4, and Alternate Fractional) were used to evaluate the results. This work concluded that: CO2 and brine both have significant main and two-factor effects on corrosion rate, H2S concentration has a moderate effect on corrosion rate, and higher total dissolved solids (TDS) brine compositions appear to force gases out of solution, thereby decreasing the corrosion rate of carbon steel. The Full Factorial Design correctly identified all independent variables and the significant interactions between CO2/H2S and CO2/Brine on corrosion rate. The two fractional factorial experimental methods resulted in incorrect conclusions. The Taguchi L4 method gave misleading results as it did not identify H2S as having a positive effect on corrosion rate, and only identified the strong interactions in the experimental matrix. The Alternative Fractional design also yielded incorrect interpretations with regard to the effect of brine on corrosion. This study has shown that reduced experimental designs (e.g., half fractional) may be inappropriate for distinguishing the synergistic interactions likely to form in chemically reactive systems. Therefore, based upon the size of the data set collected in this work, the authors recommend that full factorial designs be used for corrosion evaluations. When the number of experimental variables makes it impractical to perform a full factorial design, the aliasing relationships should be carefully evaluated.
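
    The failure of the reduced designs reported above comes down to aliasing: in a half fraction, interaction columns coincide with main-effect columns, so their effects cannot be separated. The sketch below illustrates this for a generic two-level, three-factor system cut down to four runs (the size of a Taguchi L4); the factors A, B, C are placeholders, not the actual test variables of the study.

```python
import numpy as np
from itertools import product

# Full 2^3 factorial in coded units for three generic factors A, B, C.
full = np.array(list(product((-1, 1), repeat=3)))

# Half fraction (4 runs, the size of a Taguchi L4) chosen with the generator C = A*B.
half = full[full[:, 0] * full[:, 1] == full[:, 2]]
A, B, C = half.T

# In this fraction each main-effect column equals a two-factor-interaction column,
# so the two effects cannot be separated from these four runs alone.
print(np.array_equal(A, B * C))   # True: A is aliased with BC
print(np.array_equal(B, A * C))   # True: B is aliased with AC
print(np.array_equal(C, A * B))   # True: C is aliased with AB
```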

  15. Response surface methodology and process optimization of sustained release pellets using Taguchi orthogonal array design and central composite design

    PubMed Central

    Singh, Gurinder; Pai, Roopa S.; Devi, V. Kusum

    2012-01-01

    Furosemide is a powerful diuretic and antihypertensive drug which has low bioavailability due to hepatic first pass metabolism and has a short half-life of 2 hours. To overcome these drawbacks, the present study was carried out to formulate and evaluate sustained release (SR) pellets of furosemide for oral administration, prepared by extrusion/spheronization. Drug Coat L-100 was used within the pellet core along with microcrystalline cellulose as the diluent, and the concentration of the selected binder was optimized to be 1.2%. The formulation was prepared with a drug to polymer ratio of 1:3. It was optimized using design of experiments, employing a 3^2 central composite design combined with response surface methodology to systematically optimize the process parameters. Dissolution studies were carried out with USP apparatus Type I (basket type) in both simulated gastric and intestinal pH. Statistical analysis of the in vitro data, i.e., the two-tailed paired t-test and one-way ANOVA, showed that there was a very significant (P≤0.05) difference in the dissolution profile of the furosemide SR pellets when compared with the pure drug and a commercial product. Validation of the process optimization study indicated an extremely high degree of prognostic ability. The study effectively undertook the development of optimized process parameters for the pelletization of furosemide pellets with tremendous SR characteristics. PMID:22470891
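
    Because the record above pairs a central composite design with response surface methodology, the sketch below shows the usual second-order polynomial fit for two coded factors by ordinary least squares. The design points and response values are invented for illustration and do not reproduce the pellet data.

```python
import numpy as np

# Coded settings of two hypothetical factors laid out as a small face-centred
# central composite design (4 factorial + 4 axial + 3 centre runs).
x1 = np.array([-1, 1, -1, 1, -1, 1, 0, 0, 0, 0, 0], dtype=float)
x2 = np.array([-1, -1, 1, 1, 0, 0, -1, 1, 0, 0, 0], dtype=float)
y  = np.array([62, 70, 65, 80, 64, 75, 63, 72, 74, 73, 74], dtype=float)  # invented response

# Second-order model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, value in zip(["b0", "b1", "b2", "b11", "b22", "b12"], coef):
    print(f"{name} = {value:.2f}")
```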

  16. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources. Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.

  17. Application of the Taguchi method in poultry science: estimation of the in vitro optimum intrinsic phytase activity of rye, wheat and barley.

    PubMed

    Sedghi, M; Golian, A; Esmaeilipour, O; Van Krimpen, M M

    2014-01-01

    1. In poultry investigations, the main interest is often to study the effects of many factors simultaneously. Two- or three-level factorial designs are the most commonly used for this type of investigation. However, such designs often become too costly to perform as the number of factors increases, so a fractional factorial design, which is a subset or fraction of a full factorial design, is an alternative. The Taguchi method has been proposed for simplifying and standardising fractional factorial designs. 2. An experiment was conducted to evaluate the applicability of the Taguchi method to optimise the in vitro intrinsic phytase activity (IPA) of rye, wheat and barley under different culture conditions. 3. In order to have a solid base for judging the suitability of the Taguchi method, its results were compared with those of an experiment conducted as a 3^4 full factorial arrangement with three feed ingredients (rye, wheat and barley), three temperatures (20°C, 38°C and 55°C), three pH values (3.0, 5.5 and 8.0) and three incubation times (30, 60 and 120 min), with two replicates per treatment. 4. After data collection, a Taguchi L9 (3^4) orthogonal array was used to estimate the effects of the different factors on the IPA, based on a subset of only 9 of the 81 treatments. The data were analysed with both the Taguchi and full factorial methods, and the main effects and the optimal combinations of the 4 factors were obtained for each method. 5. The results indicated that according to both the full factorial experimental design and the Taguchi method, the optimal culture conditions were obtained with the following combination: rye, pH = 3, temperature = 20 °C and time of incubation = 30 min. The comparison between the Taguchi and full factorial results showed that the Taguchi method is a sufficient and resource-saving alternative to the full factorial design in poultry science. PMID:24437370
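
    The 9-run versus 81-treatment comparison above is easiest to see by writing out the standard L9(3^4) orthogonal array. The sketch below lists that array in generic level codes and maps it onto the four factors named in the record; the column-to-factor assignment is assumed for illustration only.

```python
from itertools import product

# Standard Taguchi L9(3^4) orthogonal array, levels coded 0/1/2.
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

# Assumed column-to-factor mapping, for illustration only.
factors = {
    "ingredient":  ["rye", "wheat", "barley"],
    "temperature": ["20C", "38C", "55C"],
    "pH":          ["3.0", "5.5", "8.0"],
    "time":        ["30min", "60min", "120min"],
}

full_factorial = list(product(*factors.values()))
print(len(full_factorial), "treatments in the 3^4 full factorial")  # 81
print(len(L9), "treatments in the L9 subset")                       # 9

for row in L9:
    print([levels[code] for levels, code in zip(factors.values(), row)])
```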

  18. Experimental study of optimal self compacting concrete with spent foundry sand as partial replacement for M-sand using Taguchi approach

    NASA Astrophysics Data System (ADS)

    Nirmala, D. B.; Raviraj, S.

    2016-06-01

    This paper presents the application of the Taguchi approach to obtain the optimal mix proportion for Self Compacting Concrete (SCC) containing spent foundry sand and M-sand. Spent foundry sand is used as a partial replacement for M-sand. The SCC mix has seven control factors, namely coarse aggregate, M-sand with spent foundry sand, cement, fly ash, water, super plasticizer and viscosity modifying agent. The modified Nan Su method is used to proportion the initial SCC mix. An L18 (2^1 × 3^7) orthogonal array with the seven control factors at three levels is used in the Taguchi approach, resulting in 18 SCC mix proportions. All mixtures are extensively tested in both fresh and hardened states to verify whether they meet the practical and technical requirements of SCC. The "nominal the better" quality characteristic is applied to the test results to arrive at the optimal SCC mix proportion. The test results indicate that the optimal mix satisfies the requirements for the fresh and hardened properties of SCC. The study reveals the feasibility of using spent foundry sand as a partial replacement for M-sand in SCC, and also that the Taguchi method is a reliable tool for arriving at the optimal mix proportion of SCC.

  1. Multi-response optimization using Taguchi design and principal component analysis for removing binary mixture of alizarin red and alizarin yellow from aqueous solution by nano γ-alumina

    NASA Astrophysics Data System (ADS)

    Zolgharnein, Javad; Asanjrani, Neda; Bagtash, Maryam; Azimi, Gholamhasan

    The nanostructure of γ-alumina was used as an effective adsorbent for the simultaneous removal of a mixture of alizarin red and alizarin yellow from aqueous solutions. The Taguchi design and principal component analysis were applied to explore the effective parameters for achieving a higher adsorption capacity and removal percentage of the binary mixture containing alizarin red and alizarin yellow. Seven factors, namely temperature, contact time, initial pH, shaker rate, sorbent dose, and the initial concentrations of alizarin red and alizarin yellow, each at three levels, were considered through the Taguchi technique. An L27 orthogonal array was used to determine the signal-to-noise ratio. The removal percentage (R%) and adsorption capacity (q) of the above-mentioned dyes were then transformed into the corresponding S/N ratios. The Taguchi method indicates that the solution pH makes the largest contribution to controlling the removal percentage of alizarin red and alizarin yellow. Under optimal conditions, maximum removal percentages of 99% and 78.5%, and uptake capacities of 54.4 and 39.0 mg g-1, were obtained for alizarin red and alizarin yellow, respectively. Isotherm modeling and kinetic investigations showed that the Langmuir, modified Langmuir, and pseudo-second-order models describe both the adsorption equilibrium and the kinetic behavior well. Fourier transform infrared analysis also confirmed the involvement of the active sites of nano γ-alumina in the adsorption process.
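
    The record above combines two responses (removal percentage and uptake capacity) through principal component analysis of their S/N ratios. The sketch below shows one common way of doing that combination; the response matrix is invented, and the weighting convention shown is only one of several in use, so it should not be read as the authors' exact procedure.

```python
import numpy as np

# Invented responses for nine hypothetical trials: columns are removal (%) and uptake (mg/g).
responses = np.array([
    [72.0, 30.1], [81.5, 35.4], [88.2, 40.0],
    [69.4, 28.7], [90.3, 42.5], [95.1, 47.8],
    [78.8, 33.2], [85.0, 38.9], [98.0, 52.3],
])

# Larger-the-better S/N ratio; with a single observation per trial it reduces to 20*log10(y).
sn = 20.0 * np.log10(responses)

# Standardise the S/N columns and take the first principal component as a composite index.
z = (sn - sn.mean(axis=0)) / sn.std(axis=0, ddof=1)
_, _, vt = np.linalg.svd(z, full_matrices=False)
composite = z @ vt[0]

# Fix the sign convention so that a larger composite value means better overall performance.
if np.corrcoef(composite, sn.sum(axis=1))[0, 1] < 0:
    composite = -composite

print("trial with the best composite S/N index:", int(np.argmax(composite)) + 1)
```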

  2. Multi-Response Optimization of Carbidic Austempered Ductile Iron Production Parameters using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Dhanapal, P.; Mohamed Nazirudeen, S. S.; Chandrasekar, A.

    2012-04-01

    Carbidic Austempered Ductile Iron (CADI) is the family of ductile irons containing wear-resistant alloy carbides in an ausferrite matrix. The CADI is manufactured by selecting and characterizing the proper material composition through the melting route. In an effort to arrive at the optimal production parameters for multiple responses, the Taguchi method and grey relational analysis have been applied. To analyze the effect of the production parameters on the mechanical properties, the signal-to-noise ratio and the grey relational grade have been calculated based on the design of experiments. An analysis of variance was carried out to find the contribution of each factor to the mechanical properties and its significance. The analytical results of the Taguchi method were compared with the experimental values, and the comparison shows that both are identical.
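
    Grey relational analysis appears in this record and again later in the compilation. A minimal sketch of the usual calculation is given below, assuming larger-the-better responses and the conventional distinguishing coefficient of 0.5; the hardness and strength values are made up, not the CADI measurements.

```python
import numpy as np

def grey_relational_grade(responses, zeta=0.5):
    """Grey relational grade for larger-the-better responses (rows = trials)."""
    r = np.asarray(responses, dtype=float)
    # Step 1: normalise each response column to [0, 1] (larger-the-better convention).
    norm = (r - r.min(axis=0)) / (r.max(axis=0) - r.min(axis=0))
    # Step 2: deviation of each value from the ideal (normalised value of 1).
    delta = 1.0 - norm
    # Step 3: grey relational coefficient for every trial/response pair.
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    # Step 4: grade = average coefficient across responses (equal weights assumed).
    return coef.mean(axis=1)

# Invented hardness (HB) and tensile strength (MPa) values for six hypothetical trials.
data = [[380, 910], [405, 960], [395, 930], [420, 1010], [410, 975], [430, 1040]]
grades = grey_relational_grade(data)
print("best trial:", int(np.argmax(grades)) + 1, "grades:", np.round(grades, 3))
```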

  3. Teaching experimental design.

    PubMed

    Fry, Derek J

    2014-01-01

    Awareness of poor design and published concerns over study quality stimulated the development of courses on experimental design intended to improve matters. This article describes some of the thinking behind these courses and how the topics can be presented in a variety of formats. The premises are that education in experimental design should be undertaken with an awareness of educational principles, of how adults learn, and of the particular topics in the subject that need emphasis. For those using laboratory animals, it should include ethical considerations, particularly severity issues, and accommodate learners not confident with mathematics. Basic principles, explanation of fully randomized, randomized block, and factorial designs, and discussion of how to size an experiment form the minimum set of topics. A problem-solving approach can help develop the skills of deciding what are correct experimental units and suitable controls in different experimental scenarios, identifying when an experiment has not been properly randomized or blinded, and selecting the most efficient design for particular experimental situations. Content, pace, and presentation should suit the audience and time available, and variety both within a presentation and in ways of interacting with those being taught is likely to be effective. Details are given of a three-day course based on these ideas, which has been rated informative, educational, and enjoyable, and can form a postgraduate module. It has oral presentations reinforced by group exercises and discussions based on realistic problems, and computer exercises which include some analysis. Other case studies consider a half-day format and a module for animal technicians. PMID:25541547

  4. Optimisation of Lime-Soda process parameters for reduction of hardness in aqua-hatchery practices using Taguchi methods.

    PubMed

    Yavalkar, S P; Bhole, A G; Babu, P V Vijay; Prakash, Chandra

    2012-04-01

    This paper presents the optimisation of Lime-Soda process parameters for the reduction of hardness in aqua-hatchery practices in the context of M. rosenbergii. The fresh water used in the development of fisheries needs to be of suitable quality, and the lack of desirable quality in the available fresh water is generally the confronting restraint. On the Indian subcontinent, groundwater is the only source of raw water; it has varying degrees of hardness and is thus unsuitable for fresh water prawn hatchery practices (M. rosenbergii). In order to make use of hard water in the context of the aqua-hatchery, the Lime-Soda process has been recommended. The efficacy of the various process parameters, namely lime, soda ash and detention time, on the reduction of hardness needs to be examined. This paper proposes to determine the parameter settings for the CIFE well water, which is quite hard, by using the Taguchi experimental design method. Taguchi orthogonal arrays, the signal-to-noise ratio and the analysis of variance (ANOVA) have been applied to determine the dosages and to analyse their effect on hardness reduction. Tests carried out with the optimal levels of the Lime-Soda process parameters confirmed the efficacy of the Taguchi optimisation method. Emphasis has been placed on optimising the chemical doses required to reduce the total hardness, using the Taguchi method and ANOVA, to suit the available raw water quality for aqua-hatchery practices, especially for the fresh water prawn M. rosenbergii. PMID:24749379

  5. Experimental and Quasi-Experimental Design.

    ERIC Educational Resources Information Center

    Cottrell, Edward B.

    With an emphasis on the problems of control of extraneous variables and threats to internal and external validity, the arrangement or design of experiments is discussed. The purpose of experimentation in an educational institution, and the principles governing true experimentation (randomization, replication, and control) are presented, as are…

  6. A simple procedure for optimising the polymerase chain reaction (PCR) using modified Taguchi methods.

    PubMed Central

    Cobb, B D; Clarkson, J M

    1994-01-01

    Taguchi methods are used widely as the basis for development trials during industrial process design. Here, we describe their suitability for optimisation of the PCR. Unlike conventional strategies, these arrays revealed the effects and interactions of specific reaction components simultaneously using just a few reactions, negating the need for extensive experimental investigation. Reaction components which affected product yield were easily determined. In addition, this technique was applied to the qualitative investigation of RAPD-PCR profiles, where optimisation of the size and distribution of a number of products was determined. PMID:7937094

  7. Synthesis of graphene by cobalt-catalyzed decomposition of methane in plasma-enhanced CVD: Optimization of experimental parameters with Taguchi method

    NASA Astrophysics Data System (ADS)

    Mehedi, H.-A.; Baudrillart, B.; Alloyeau, D.; Mouhoub, O.; Ricolleau, C.; Pham, V. D.; Chacon, C.; Gicquel, A.; Lagoute, J.; Farhat, S.

    2016-08-01

    This article describes the significant roles of process parameters in the deposition of graphene films via cobalt-catalyzed decomposition of methane diluted in hydrogen using plasma-enhanced chemical vapor deposition (PECVD). The influence of growth temperature (700-850 °C), molar concentration of methane (2%-20%), growth time (30-90 s), and microwave power (300-400 W) on graphene thickness and defect density is investigated using the Taguchi method, which enables reaching the optimal parameter settings with a reduced number of experiments. Growth temperature is found to be the most influential parameter in minimizing the number of graphene layers, whereas microwave power has the second largest effect on crystalline quality and a minor role in the thickness of the graphene films. The structural properties of PECVD graphene obtained under the optimized synthesis conditions are investigated with Raman spectroscopy and corroborated by atomic-scale characterization performed by high-resolution transmission electron microscopy and scanning tunneling microscopy, which reveals the formation of a continuous film consisting of 2-7 high quality graphene layers.

  8. Designing an Experimental "Accident"

    ERIC Educational Resources Information Center

    Picker, Lester

    1974-01-01

    Describes an experimental "accident" that resulted in much student learning, seeks help in the identification of nematodes, and suggests biology teachers introduce similar accidents into their teaching to stimulate student interest. (PEB)

  9. Catalyst performance study using Taguchi methods

    SciTech Connect

    Sims, G.S.; Johri, S.

    1988-01-01

    A study was conducted to determine the effects of various factors on the performance characteristics of aged monolithic catalytic converters. The factors evaluated were catalyst volume, converter configuration (number of elements), catalyst supplier washcoat technology, rhodium loading, platinum loading, and palladium loading. The study was also designed to evaluate the interactions among the various factors. To improve the efficiency of the study, a two-level fractional factorial experiment was designed using the Taguchi method, which made it possible to study the effects of the seven main factors and six interactions by evaluating only 16 different samples. The study helped identify the factors that had significant effects and helped quantify their effect on catalyst performance. This paper details the methodology used to design the experiment and analyze the results.

  10. Taguchi Optimization of Pulsed Current GTA Welding Parameters for Improved Corrosion Resistance of 5083 Aluminum Welds

    NASA Astrophysics Data System (ADS)

    Rastkerdar, E.; Shamanian, M.; Saatchi, A.

    2013-04-01

    In this study, the Taguchi method was used as a design of experiment (DOE) technique to optimize the pulsed current gas tungsten arc welding (GTAW) parameters for improved pitting corrosion resistance of AA5083-H18 aluminum alloy welds. An L9 (3^4) orthogonal array of the Taguchi design was used, which involves nine experiments for four parameters, peak current (P), base current (B), percent pulse-on time (T), and pulse frequency (F), each at three levels. Pitting corrosion resistance in 3.5 wt.% NaCl solution was evaluated by anodic polarization tests at room temperature and by calculating the width of the passive region (ΔEpit). Analysis of variance (ANOVA) was performed on the measured data and the S/N (signal-to-noise) ratios. The "bigger is better" criterion was selected as the quality characteristic (QC). The optimum conditions were found to be 170 A, 85 A, 40%, and 6 Hz for the P, B, T, and F factors, respectively. The study showed that the percent pulse-on time has the highest influence on the pitting corrosion resistance (50.48%), followed by pulse frequency (28.62%), peak current (11.05%) and base current (9.86%). The range of optimum ΔEpit at the optimum conditions, with a confidence level of 90%, was predicted to be between 174.81 and 177.74 mV(SCE). Under the optimum conditions, a confirmation test was carried out, and the experimental ΔEpit value of 176 mV(SCE) was in agreement with the value predicted from the Taguchi model. In this regard, the model can be effectively used to predict the ΔEpit of pulsed current gas tungsten arc welded joints.
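
    The percent contributions quoted above (for example, 50.48% for percent pulse-on time) come from an ANOVA decomposition of the S/N ratios over the L9 array. The sketch below reproduces that type of calculation on an invented set of nine S/N values; the numbers are placeholders, not the published data.

```python
import numpy as np

# Standard L9(3^4) array, levels coded 0/1/2; columns stand for P, B, T, F.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Invented S/N ratios (dB) for the nine trials.
sn = np.array([43.1, 44.0, 44.8, 43.5, 45.2, 44.3, 44.1, 45.0, 45.9])

grand_mean = sn.mean()
ss_total = np.sum((sn - grand_mean) ** 2)

print("factor   SS       % contribution")
for name, column in zip(["P", "B", "T", "F"], L9.T):
    # Each level appears three times in an L9, so SS = 3 * sum of squared level-mean deviations.
    level_means = np.array([sn[column == level].mean() for level in (0, 1, 2)])
    ss = 3.0 * np.sum((level_means - grand_mean) ** 2)
    print(f"{name}        {ss:6.3f}   {100.0 * ss / ss_total:5.1f}")
```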

  11. Parametric Optimization of Wire Electrical Discharge Machining of Powder Metallurgical Cold Worked Tool Steel using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Sudhakara, Dara; Prasanthi, Guvvala

    2016-08-01

    Wire cut EDM is an unconventional machining process used to produce components of complex shape. The current work deals mainly with the optimization of surface roughness while machining P/M cold worked tool steel by wire cut EDM using the Taguchi method. The process parameters of the wire cut EDM are ON, OFF, IP, SV, WT and WP. An L27 orthogonal array is used to design the experiments. ANOVA is employed to identify the parameters affecting the surface roughness. The optimum levels for minimum surface roughness are ON = 108 µs, OFF = 63 µs, IP = 11 A, SV = 68 V and WT = 8 g.

  12. Total Quality Management: Statistics and Graphics III - Experimental Design and Taguchi Methods. AIR 1993 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Schwabe, Robert A.

    Interest in Total Quality Management (TQM) at institutions of higher education has been stressed in recent years as an important area of activity for institutional researchers. Two previous AIR Forum papers have presented some of the statistical and graphical methods used for TQM. This paper, the third in the series, first discusses some of the…

  13. Elements of Bayesian experimental design

    SciTech Connect

    Sivia, D.S.

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  14. Teaching experimental design to biologists.

    PubMed

    Zolman, J F

    1999-12-01

    The teaching of research design and data analysis to our graduate students has been a persistent problem. A course is described in which students, early in their graduate training, obtain extensive practice in designing experiments and interpreting data. Lecture-discussions on the essentials of biostatistics are given, and then these essentials are repeatedly reviewed by illustrating their applications and misapplications in numerous research design problems. Students critique these designs and prepare similar problems for peer evaluation. In most problems the treatments are confounded by extraneous variables, proper controls may be absent, or data analysis may be incorrect. For each problem, students must decide whether the researchers' conclusions are valid and, if not, must identify a fatal experimental flaw. Students learn that an experiment is a well-conceived plan for data collection, analysis, and interpretation. They enjoy the interactive evaluations of research designs and appreciate the repetitive review of common flaws in different experiments. They also benefit from their practice in scientific writing and in critically evaluating their peers' designs. PMID:10644236

  15. Optimizing Aqua Splicer Parameters for Lycra-Cotton Core Spun Yarn Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Midha, Vinay Kumar; Hiremath, ShivKumar; Gupta, Vaibhav

    2015-10-01

    In this paper, optimization of the aqua splicer parameters, viz. opening time, splicing time, feed arm code (i.e. splice length) and duration of water joining, was carried out for a 37 tex lycra-cotton core spun yarn for better retained splice strength (RSS%), splice abrasion resistance (RYAR%) and splice appearance (RYA%) using Taguchi experimental design. It is observed that as the opening time, splicing time and duration of water joining increase, the RSS% and RYAR% increase, whereas an increase in feed arm code leads to a decrease in both. The opening time and feed arm code do not have a significant effect on RYA%. The optimum RSS% of 92.02% was obtained at splicing parameters of 350 ms opening time, 180 ms splicing time, feed arm code 65 and 600 ms duration of water joining.

  16. Application of Taguchi method in Nd-YAG laser welding of super duplex stainless steel

    SciTech Connect

    Yip, W.M.; Man, H.C.; Ip, W.H.

    1996-12-31

    This investigation is aimed at achieving a near 50-50% ferrite-austenite ratio in laser welded super duplex stainless steel, UNS S32760 (Zeron 100). Bead-on-plate welding was carried out using a 2 kW Nd-YAG laser with three different wave forms: continuous, sine and square wave. The weld metals were examined with respect to phase volume contents by X-ray diffraction. Laser welding involves a large number of variables, variable levels and interactions, so the Taguchi method was selected and used to reduce the number of experimental conditions and to identify the dominant factors. The optimum combinations of controllable factors were found for each wave form. An optimum 40-60% ferrite-austenite ratio was realized for some of the parameter combinations after using the parameter design method.

  17. Animal husbandry and experimental design.

    PubMed

    Nevalainen, Timo

    2014-01-01

    If the scientist needs to contact the animal facility after any study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail, but at the same time to provide minimal information on animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in the variation. Factors causing bias or variation changes are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, the weekend-working days, the cage change/room sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles, which may be affected by husbandry procedures. Other issues to be considered are consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to animals, olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of investigators' experimental design, which if ignored can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including their capabilities for the monitoring of biological and physicochemical environment. PMID:25541541

  18. Study on interaction between palladium(II)-Linezolid chelate with eosin by resonance Rayleigh scattering, second order of scattering and frequency doubling scattering methods using Taguchi orthogonal array design

    NASA Astrophysics Data System (ADS)

    Thakkar, Disha; Gevriya, Bhavesh; Mashru, R. C.

    2014-03-01

    Linezolid reacted with palladium to form a 1:1 binary cationic chelate, which further reacted with eosin dye to form a 1:1 ternary ion-association complex at pH 4 in Walpole's acetate buffer in the presence of methyl cellulose. As a result, not only were the absorption spectra changed, but the Resonance Rayleigh Scattering (RRS), Second-order Scattering (SOS) and Frequency Doubling Scattering (FDS) intensities were greatly enhanced. The analytical wavelengths (λex/λem) of the ternary complex for RRS, SOS and FDS were located at 538 nm/538 nm, 240 nm/480 nm and 660 nm/330 nm, respectively. The linearity ranges for the RRS, SOS and FDS methods were 0.01-0.5 μg mL-1, 0.1-2 μg mL-1 and 0.2-1.8 μg mL-1, respectively. The sensitivity order of the three methods was RRS > SOS > FDS. The accuracy of all methods was determined by recovery studies, which showed recoveries between 98% and 102%. Intra-day and inter-day precision were checked for all methods, and the %RSD was found to be less than 2 in all cases. The effects of foreign substances were tested on the RRS method, showing that the method has good selectivity. For optimization of the process parameters, a Taguchi orthogonal array design L8(2^4) was used, and ANOVA was adopted to determine the statistically significant control factors that affect the scattering intensities. The reaction mechanism, the composition of the ternary ion-association complex and the reasons for the scattering intensity enhancement are discussed in this work.

  19. Optimization of a Three-Component Green Corrosion Inhibitor Mixture for Using in Cooling Water by Experimental Design

    NASA Astrophysics Data System (ADS)

    Asghari, E.; Ashassi-Sorkhabi, H.; Ahangari, M.; Bagheri, R.

    2016-04-01

    Factors such as inhibitor concentration, solution hydrodynamics, and temperature influence the performance of corrosion inhibitor mixtures. Studying the impact of several factors simultaneously is a time- and cost-consuming process. The use of experimental design methods can be useful in minimizing the number of experiments and finding locally optimized conditions for the factors under investigation. In the present work, the inhibition performance of a three-component inhibitor mixture against corrosion of a St37 steel rotating disk electrode (RDE) was studied. The mixture was composed of citric acid, lanthanum(III) nitrate, and tetrabutylammonium perchlorate. In order to decrease the number of experiments, the L16 Taguchi orthogonal array was used. The "control factors" were the concentration of each component and the rotation rate of the RDE, and the "response factor" was the inhibition efficiency. The scanning electron microscopy and energy dispersive x-ray spectroscopy techniques verified the formation of islands of adsorbed citrate complexes with lanthanum ions and insoluble lanthanum(III) hydroxide. From the Taguchi analysis results, a mixture of 0.50 mM lanthanum(III) nitrate, 0.50 mM citric acid, and 2.0 mM tetrabutylammonium perchlorate under an electrode rotation rate of 1000 rpm was found to be the optimum condition.

  20. Control design for the SERC experimental testbeds

    NASA Technical Reports Server (NTRS)

    Jacques, Robert; Blackwood, Gary; Macmartin, Douglas G.; How, Jonathan; Anderson, Eric

    1992-01-01

    Viewgraphs on control design for the Space Engineering Research Center experimental testbeds are presented. Topics covered include: SISO control design and results; sensor and actuator location; model identification; control design; experimental results; preliminary LAC experimental results; active vibration isolation problem statement; base flexibility coupling into isolation feedback loop; cantilever beam testbed; and closed loop results.

  1. Multi-response optimization in the development of oleo-hydrophobic cotton fabric using Taguchi based grey relational analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Naseer; Kamal, Shahid; Raza, Zulfiqar Ali; Hussain, Tanveer; Anwar, Faiza

    2016-03-01

    The present study undertakes multi-response optimization of the water and oil repellent finishing of bleached cotton fabric using Taguchi based grey relational analysis. We considered three input variables, viz. the concentrations of the finish (Oleophobol CP-C) and the cross-linking agent (Knittex FEL), and the curing temperature. The responses included water and oil contact angles, air permeability, crease recovery angle, stiffness, and tear and tensile strengths of the finished fabric. The experiments were conducted using an L9 orthogonal array in a Taguchi design. Grey relational analysis was included to set the quality characteristics as the reference sequence and to decide the optimal parameter combinations. Additionally, analysis of variance was employed to determine the most significant factor. The results demonstrate great improvement in the desired quality parameters of the developed fabric. The optimization approach reported in this study could be effectively used to reduce expensive trial and error experimentation for new product development and process optimization involving multiple responses. The product optimized in this study was characterized using advanced analytical techniques, and has potential applications in rainwear and other outdoor apparel.

  2. Optimizing Experimental Designs: Finding Hidden Treasure.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design…

  3. Quasi experimental designs in pharmacist intervention research.

    PubMed

    Krass, Ines

    2016-06-01

    Background In the field of pharmacist intervention research it is often difficult to conform to the rigorous requirements of "true experimental" models, especially the requirement of randomization. When randomization is not feasible, a practice based researcher can choose from a range of "quasi-experimental designs", i.e., non-randomised and at times non-controlled designs. Objective The aim of this article was to provide an overview of quasi-experimental designs, discuss their strengths and weaknesses, and investigate their application in pharmacist intervention research over the previous decade. Results In the literature, quasi-experimental studies may be classified into five broad categories: quasi-experimental designs without control groups; quasi-experimental designs that use control groups with no pre-test; quasi-experimental designs that use control groups and pre-tests; interrupted time series; and stepped wedge designs. Quasi-experimental study design has consistently featured in the evolution of pharmacist intervention research. The most commonly applied of all quasi-experimental designs in the practice based research literature are the one-group pre-post-test design and the non-equivalent control group design (i.e., untreated control group with dependent pre-tests and post-tests), and these have been used to test the impact of pharmacist interventions in general medications management as well as in specific disease states. Conclusion Quasi-experimental studies have a role to play as proof of concept, in the pilot phases of interventions when testing different intervention components, especially in complex interventions. They serve to develop an understanding of possible intervention effects: while in isolation they yield weak evidence of clinical efficacy, taken collectively, they help build a body of evidence in support of the value of pharmacist interventions across different practice settings and countries. However, when a traditional RCT is not feasible for

  4. GCFR shielding design and supporting experimental programs

    SciTech Connect

    Perkins, R.G.; Hamilton, C.J.; Bartine, D.

    1980-05-01

    The shielding for the conceptual design of the gas-cooled fast breeder reactor (GCFR) is described, and the component exposure design criteria which determine the shield design are presented. The experimental programs for validating the GCFR shielding design methods and data (which have been in existence since 1976) are also discussed.

  5. OSHA and Experimental Safety Design.

    ERIC Educational Resources Information Center

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely the Occupational Safety and Health Administration (OSHA), be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and the role/regulations of other agencies.…

  6. The photocatalytic degradation of cationic surfactant from wastewater in the presence of nano-zinc oxide using Taguchi method

    NASA Astrophysics Data System (ADS)

    Giahi, M.; Moradidoost, A.; Bagherinia, M. A.; Taghavi, H.

    2013-12-01

    The photocatalytic degradation of cetyl pyridinium chloride (CPC) has been investigated in the aqueous phase using ultraviolet (UV) light and ZnO nanopowder. Kinetic analysis showed that the extent of photocatalytic degradation of the surfactant can be fitted with a pseudo-first-order model, and the photochemical elimination of CPC could be studied by the Taguchi method. The experimental design was based on testing five factors, i.e., dosage of K2S2O8, concentration of CPC, amount of ZnO, irradiation time and initial pH. Each factor was tested at four levels. The optimum parameters were found to be pH 5.0, amount of ZnO 11 mg, K2S2O8 3 mM, CPC 10 mg/L and irradiation time 8 h.
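
    The pseudo-first-order treatment mentioned above amounts to fitting ln(C0/C) = k_app * t to the concentration decay. A small sketch with invented concentration data is shown below; the values and the resulting rate constant are illustrative only.

```python
import numpy as np

# Invented surfactant concentrations (mg/L) at successive irradiation times (h).
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
c = np.array([10.0, 8.1, 6.6, 4.4, 2.9, 1.9])

# Pseudo-first-order model: ln(C0/C) = k_app * t, i.e. a straight line through the origin.
y = np.log(c[0] / c)
k_app = np.sum(t * y) / np.sum(t * t)     # least-squares slope with zero intercept

half_life = np.log(2.0) / k_app
print(f"apparent rate constant k_app = {k_app:.3f} 1/h, half-life = {half_life:.2f} h")
```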

  7. Parameter optimization of nanosecond laser for microdrilling on PVC by Taguchi method

    NASA Astrophysics Data System (ADS)

    Canel, Timur; Kaya, A. Uğur; Çelik, Bekir

    2012-11-01

    Formation of cavities having maximum aspect ratio (depth-to-width (D/W) ratio) on PVC during laser drilling is accompanied by several undesirable outcomes with regard to cavity quality. Hence, it is essential to select optimum drilling process parameters to maximize the aspect ratio and minimize the Heat Affected Zone (HAZ) and circularity. This paper presents the application of the Taguchi optimization method to obtain cavities possessing maximum aspect ratio as influenced by drilling conditions such as wavelength, fluence and frequency. In the present work, the effects of laser processing parameters, including laser fluence, laser frequency and wavelength, were investigated in relation to the aspect ratio, HAZ and circularity. The optimal values of wavelength, fluence and frequency were then determined. According to the confirmation experiment using the optimum parameters, the experimental results agreed with the Taguchi prediction at a level of 93%. The details of the experimental analysis and the analysis of variance are presented in this paper.
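
    The Taguchi analysis referred to above rests on signal-to-noise ratios; for a response to be maximized, such as the aspect ratio, the larger-the-better form S/N = -10·log10(mean(1/y²)) is conventional. A minimal sketch with made-up replicate values (not the paper's measurements):

      # Larger-the-better signal-to-noise ratio used in Taguchi analysis:
      #   S/N = -10 * log10( mean(1 / y_i^2) )
      # The replicate aspect-ratio values are hypothetical placeholders.
      import numpy as np

      def sn_larger_is_better(y):
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(1.0 / y**2))

      replicates_per_run = {
          "run1": [2.1, 2.3, 2.0],   # depth-to-width ratios for one trial (made up)
          "run2": [2.8, 2.7, 2.9],
      }
      for run, values in replicates_per_run.items():
          print(run, round(sn_larger_is_better(values), 2), "dB")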

  8. More efficiency in fuel consumption using gearbox optimization based on Taguchi method

    NASA Astrophysics Data System (ADS)

    Goharimanesh, Masoud; Akbari, Aliakbar; Akbarzadeh Tootoonchi, Alireza

    2014-05-01

    Automotive emissions are becoming a critical threat to human health. Many researchers are studying engine designs leading to less fuel consumption, and gearbox selection plays a key role in engine design. In this study, the Taguchi quality engineering method is employed, and optimum gear ratios in a five-speed gearbox are obtained. A table of various gear ratios is suggested by design-of-experiments techniques. Fuel consumption is calculated by simulating the corresponding combustion dynamics model. Using a 95% confidence level, optimal parameter combinations are determined using the Taguchi method. The level of importance of each parameter for fuel efficiency is resolved using the analysis of signal-to-noise ratio as well as analysis of variance.

  9. Simultaneous optimization of diesel engine parameters for low emissions using Taguchi methods

    SciTech Connect

    Hunter, C.E.; Gardner, T.P.; Zakrajsek, C.E.

    1990-01-01

    This paper describes a study which was conducted to simultaneously optimize several diesel engine design and operating parameters for low exhaust emissions using the Taguchi method. A single-cylinder research diesel engine equipped with a high-pressure, cam-driven, electronic unit injector was used in this optimization experiment. The major effects of key engine design parameters on exhaust emissions were quantified and optimum parameter settings were determined. Measurement of exhaust emissions using the optimum parameter settings showed that particulate and NOx emissions were significantly lower than those obtained for the baseline engine. The Taguchi method was found to be a useful technique for the simultaneous optimization of several engine parameters and for predicting the effect of various design parameters on diesel exhaust emissions.

  10. Experimental design of a waste glass study

    SciTech Connect

    Piepel, G.F.; Redgate, P.E.; Hrma, P.

    1995-04-01

    A Composition Variation Study (CVS) is being performed to support a future high-level waste glass plant at Hanford. A total of 147 glasses, covering a broad region of compositions melting at approximately 1150 °C, were tested in five statistically designed experimental phases. This paper focuses on the goals, strategies, and techniques used in designing the five phases. The overall strategy was to investigate glass compositions on the boundary and interior of an experimental region defined by single-component, multiple-component, and property constraints. Statistical optimal experimental design techniques were used to cover various subregions of the experimental region in each phase. Empirical mixture models for glass properties (as functions of glass composition) from previous phases were used in designing subsequent CVS phases.

  11. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983

  12. Drought Adaptation Mechanisms Should Guide Experimental Design.

    PubMed

    Gilbert, Matthew E; Medina, Viviana

    2016-08-01

    The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism. PMID:27090148

  13. Taguchi Method Applied in Optimization of Shipley SJR 5740 Positive Resist Deposition

    NASA Technical Reports Server (NTRS)

    Hui, A.; Blosiu, J. O.; Wiberg, D. V.

    1998-01-01

    Taguchi Methods of Robust Design present a way to optimize output process performance through an organized set of experiments using orthogonal arrays. Analysis of variance and the signal-to-noise ratio are used to evaluate the contribution of each of the controllable process parameters to the process optimization. In the photoresist deposition process, there are numerous controllable parameters that can affect the surface quality and thickness of the final photoresist layer.

  14. Taguchi's off line method and Multivariate loss function approach for quality management and optimization of process parameters -A review

    NASA Astrophysics Data System (ADS)

    Bharti, P. K.; Khan, M. I.; Singh, Harbinder

    2010-10-01

    Off-line quality control is considered to be an effective approach to improve product quality at a relatively low cost. The Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage for one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss for multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics. Most of these papers were concerned with finding the parameter combination that maximizes the signal-to-noise (SN) ratios. The results reveal that the advantages of this approach are that the optimal parameter design coincides with that of the traditional Taguchi method for a single quality characteristic, and that the optimal design maximizes the reduction of total quality loss for multiple quality characteristics. This paper presents a literature review on solving multi-response problems in the Taguchi method and its successful implementation in various industries.

  15. Yakima Hatchery Experimental Design : Annual Progress Report.

    SciTech Connect

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by planning efforts of various research and quality control teams of the project that are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  16. Taguchi Optimization on the Initial Thickness and Pre-aging of Nano-/Ultrafine-Grained Al-0.2 wt.%Sc Alloy Produced by ARB

    NASA Astrophysics Data System (ADS)

    Yousefieh, Mohammad; Tamizifar, Morteza; Boutorabi, Seyed Mohammad Ali; Borhani, Ehsan

    2016-08-01

    In this study, the Taguchi design method with an L9 orthogonal array has been used to optimize the initial thickness and pre-aging parameters (temperature and time) for the mechanical properties of an Al-0.2 wt.% Sc alloy heavily deformed by accumulative roll bonding (ARB) up to ten cycles. Analysis of variance was performed on the measured data and signal-to-noise ratios. It was found that the pre-aging temperature is the most significant parameter affecting the mechanical properties, with a percentage contribution of 64.51%. Pre-aging time (19.29%) has the next most significant effect, while initial thickness (5.31%) has a statistically less significant effect. In order to confirm the experimental conclusions, verification experiments were carried out at the optimum working conditions. Under these conditions, the yield strength was 6.51 times higher and the toughness was 6.86% lower compared with the starting Al-Sc material. Moreover, the mean grain size was decreased to 220 nm by setting the control parameters, which was the lowest value obtained in this study. The Taguchi method was thus found to be a promising technique for obtaining the optimum conditions in such studies. Consequently, by controlling the parameter levels, high-strength and high-toughness Al-Sc samples were fabricated through pre-aging and subsequent ARB processing.
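
    The percentage contributions quoted above come from a Taguchi-style ANOVA decomposition: each factor's sum of squares, computed from its level means, is expressed as a share of the total sum of squares. The sketch below uses a standard L9 layout with placeholder responses and an assumed factor-to-column assignment, not the measured data of the study:

      # Simple (uncorrected) percentage-contribution calculation from a Taguchi L9 analysis.
      # Responses and the factor-to-column assignment are made-up placeholders.
      import numpy as np

      # First three columns of the standard L9(3^4) orthogonal array (levels coded 1-3).
      L9 = np.array([
          [1, 1, 1], [1, 2, 2], [1, 3, 3],
          [2, 1, 2], [2, 2, 3], [2, 3, 1],
          [3, 1, 3], [3, 2, 1], [3, 3, 2],
      ])
      y = np.array([310., 335., 360., 355., 370., 330., 390., 345., 365.])  # placeholder responses

      grand_mean = y.mean()
      ss_total = ((y - grand_mean) ** 2).sum()
      for j, name in enumerate(["pre-aging temperature", "pre-aging time", "initial thickness"]):
          ss_factor = sum(
              y[L9[:, j] == lvl].size * (y[L9[:, j] == lvl].mean() - grand_mean) ** 2
              for lvl in (1, 2, 3)
          )
          print(f"{name}: {100 * ss_factor / ss_total:.1f} % contribution")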

  17. Experimental design in chemistry: A tutorial.

    PubMed

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, nowadays experimental design is not as well known and applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages, in terms of reduced experimental effort and of increased quality of information, that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic part can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469]. PMID:19786177

  18. Surface Roughness Prediction Model using Zirconia Toughened Alumina (ZTA) Turning Inserts: Taguchi Method and Regression Analysis

    NASA Astrophysics Data System (ADS)

    Mandal, Nilrudra; Doloi, Biswanath; Mondal, Biswanath

    2016-01-01

    In the present study, an attempt has been made to apply the Taguchi parameter design method and regression analysis for optimizing the cutting conditions on surface finish while machining AISI 4340 steel with the help of newly developed yttria-based Zirconia Toughened Alumina (ZTA) inserts. These inserts are prepared through a wet chemical co-precipitation route followed by a powder metallurgy process. Experiments have been carried out based on an L9 orthogonal array with three parameters (cutting speed, depth of cut and feed rate) at three levels (low, medium and high). Based on the mean response and signal-to-noise ratio (SNR), the optimal cutting condition has been found to be A3B1C1, i.e., cutting speed of 420 m/min, depth of cut of 0.5 mm and feed rate of 0.12 m/min, using the smaller-the-better criterion. Analysis of Variance (ANOVA) is applied to find out the significance and percentage contribution of each parameter. The mathematical model of surface roughness has been developed using regression analysis as a function of the above-mentioned independent variables. The predicted values from the developed model and the experimental values are found to be very close to each other, justifying the significance of the model. A confirmation run has been carried out at a 95% confidence level to verify the optimized result, and the values obtained are within the prescribed limit.
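
    The regression model mentioned above can be reproduced in outline with ordinary least squares; the nine observations below mimic an L9 layout, but the roughness values are hypothetical and a simple linear form is assumed rather than the authors' fitted model:

      # Sketch of a regression model for surface roughness Ra as a function of
      # cutting speed (v), depth of cut (d) and feed rate (f).
      # The nine observations mimic an L9 layout but the numbers are hypothetical.
      import numpy as np

      v = np.array([220, 220, 220, 320, 320, 320, 420, 420, 420], dtype=float)
      d = np.array([0.5, 1.0, 1.5, 0.5, 1.0, 1.5, 0.5, 1.0, 1.5])
      f = np.array([0.12, 0.16, 0.20, 0.16, 0.20, 0.12, 0.20, 0.12, 0.16])
      ra = np.array([1.40, 1.85, 2.30, 1.25, 1.95, 1.55, 1.60, 0.95, 1.45])  # placeholder Ra values

      X = np.column_stack([np.ones_like(v), v, d, f])      # design matrix with intercept
      coef, *_ = np.linalg.lstsq(X, ra, rcond=None)        # least-squares fit
      print("Ra ~ %.3f + %.5f*v + %.3f*d + %.3f*f" % tuple(coef))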

  19. Application of Taguchi's method to optimize fiber Raman amplifier

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif

    2016-04-01

    Taguchi's method is introduced to perform multiobjective optimization of fiber Raman amplifier (FRA). The optimization requirements are to maximize gain and keep gain ripple minimum over the operating bandwidth of a wavelength division multiplexed (WDM) communication link. Mathematical formulations of FRA and corresponding numerical solution techniques are discussed. A general description of Taguchi's method and how it can be integrated with the FRA optimization problem are presented. The proposed method is used to optimize two different configurations of FRA. The performance of Taguchi's method is compared with genetic algorithm and particle swarm optimization in terms of output performance and convergence rate. Taguchi's method is found to produce good results with fast convergence rate, which makes it well suited for the nonlinear optimization problems.

  20. Simulation as an Aid to Experimental Design.

    ERIC Educational Resources Information Center

    Frazer, Jack W.; And Others

    1983-01-01

    Discusses simulation program to aid in the design of enzyme kinetic experimentation (includes sample runs). Concentration versus time profiles of any subset or all nine states of reactions can be displayed with/without simulated instrumental noise, allowing the user to estimate the practicality of any proposed experiment given known instrument…

  1. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  2. Rational Experimental Design for Electrical Resistivity Imaging

    NASA Astrophysics Data System (ADS)

    Mitchell, V.; Pidlisecky, A.; Knight, R.

    2008-12-01

    Over the past several decades advances in the acquisition and processing of electrical resistivity data, through multi-channel acquisition systems and new inversion algorithms, have greatly increased the value of these data to near-surface environmental and hydrological problems. There has, however, been relatively little advancement in the design of actual surveys. Data acquisition still typically involves using a small number of traditional arrays (e.g. Wenner, Schlumberger) despite a demonstrated improvement in data quality from the use of non-standard arrays. While optimized experimental design has been widely studied in applied mathematics and the physical and biological sciences, it is rarely implemented for non-linear problems, such as electrical resistivity imaging (ERI). We focus specifically on using ERI in the field for monitoring changes in the subsurface electrical resistivity structure. For this application we seek an experimental design method that can be used in the field to modify the data acquisition scheme (spatial and temporal sampling) based on prior knowledge of the site and/or knowledge gained during the imaging experiment. Some recent studies have investigated optimized design of electrical resistivity surveys by linearizing the problem or with computationally-intensive search algorithms. We propose a method for rational experimental design based on the concept of informed imaging, the use of prior information regarding subsurface properties and processes to develop problem-specific data acquisition and inversion schemes. Specifically, we use realistic subsurface resistivity models to aid in choosing source configurations that maximize the information content of our data. Our approach is based on first assessing the current density within a region of interest, in order to provide sufficient energy to the region of interest to overcome a noise threshold, and then evaluating the direction of current vectors, in order to maximize the

  3. Involving students in experimental design: three approaches.

    PubMed

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims. PMID:16161223

  4. Achieving optimal SERS through enhanced experimental design

    PubMed Central

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.

    2016-01-01

    One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.

  5. Simulation as an aid to experimental design

    SciTech Connect

    Frazer, J.W.; Balaban, D.J.; Wang, J.L.

    1983-05-01

    A simulator of chemical reactions can aid the scientist in the design of experiments. Such simulators are of great value when studying enzymatic kinetic reactions. One such simulator is a numerical ordinary differential equation solver which uses interactive graphics to provide the user with the capability to simulate an extremely wide range of enzyme reaction conditions for many types of single-substrate reactions. The concentration vs. time profiles of any subset or all nine states of a complex reaction can be displayed with and without simulated instrumental noise. Thus the user can estimate the practicality of any proposed experiment given known instrumental noise. The experimenter can readily determine which state provides the most information related to the proposed kinetic parameters and mechanism. A general discussion of the program, including the nondimensionalization of the set of differential equations, is included. Finally, several simulation examples are shown and the results discussed.

  6. Parameter estimation and optimal experimental design.

    PubMed

    Banga, Julio R; Balsa-Canto, Eva

    2008-01-01

    Mathematical models are central in systems biology and provide new ways to understand the function of biological systems, helping in the generation of novel and testable hypotheses, and supporting a rational framework for possible ways of intervention, e.g. in genetic engineering, drug development or the treatment of diseases. Since the amount and quality of experimental 'omics' data continue to increase rapidly, there is great need for methods for proper model building which can handle this complexity. In the present chapter we review two key steps of the model building process, namely parameter estimation (model calibration) and optimal experimental design. Parameter estimation aims to find the unknown parameters of the model which give the best fit to a set of experimental data. Optimal experimental design aims to devise the dynamic experiments which provide the maximum information content for subsequent non-linear model identification, estimation and/or discrimination. We place emphasis on the need for robust global optimization methods for proper solution of these problems, and we present a motivating example considering a cell signalling model. PMID:18793133
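
    A toy version of the optimal experimental design idea discussed above: for a one-parameter exponential decay with Gaussian noise, the most informative sampling times are those that maximise the Fisher information about the rate constant. The model, nominal parameter value and noise level are illustrative assumptions, not the cell-signalling example of the chapter:

      # Toy illustration of optimal experimental design for parameter estimation:
      # pick the sampling times that maximise the Fisher information about the decay
      # rate k in y(t) = exp(-k*t) with additive Gaussian noise.
      import numpy as np

      k_guess = 0.5                                # nominal parameter value (assumed)
      sigma = 0.05                                 # assumed measurement noise s.d.
      candidates = np.linspace(0.1, 10.0, 100)     # candidate sampling times

      # Sensitivity of the model output to k: dy/dk = -t * exp(-k*t)
      sens = -candidates * np.exp(-k_guess * candidates)
      info_per_time = sens**2 / sigma**2           # scalar Fisher information per sample

      best = candidates[np.argsort(info_per_time)[-3:]]   # three most informative times
      print("most informative sampling times:", np.round(np.sort(best), 2))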

  7. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.

  8. Application of Taguchi based Response Surface Method (TRSM) for Optimization of Multi Responses in Drilling Al/SiC/Al2O3 Hybrid Composite

    NASA Astrophysics Data System (ADS)

    Adalarasan, R.; Santhanakumar, M.

    2015-01-01

    The emerging industrial applications of second-generation hybrid composites demand an organised study of their drilling characteristics, as drilling is an essential metal removal process in the final fabrication stage. In the present work, surface finish and burr height were observed while drilling an Al6061/SiC/Al2O3 composite for various combinations of drilling parameters such as the feed rate, spindle speed and point angle of the tool. The experimental trials were designed by an L18 orthogonal array, and a Taguchi-based response surface method was presented for optimizing the drilling parameters. The significant improvements in the responses observed for the optimal parameter setting have validated the TRSM approach, permitting its application in other areas of manufacturing.

  9. Development of multi-pass weld condition for high strength steel using Taguchi method

    SciTech Connect

    Kim, S.H.

    1995-12-01

    The mechanical tests (tensile strength, impact toughness) are performed to develop weld conditions for high strength steel. The effects of heat input, weld geometry (root face, root gap, groove angle), electrode type and plate thickness are experimentally analyzed using the Taguchi method with an orthogonal L18(2^1 × 3^7) array. From the experiments and the ANOVA analysis, the effects of the main factors as well as the interactions between any two factors are quantitatively analyzed, and equations for the mechanical properties as functions of the weld conditions are derived.

  10. Experimental Design for the LATOR Mission

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses the experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10^8 in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (∝G²) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J₂, and will improve determination of a variety of relativistic effects including Lense-Thirring precession. The mission will benefit from the recent progress in optical communication technologies, the immediate and natural step above the standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in the tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.

  11. Optimization of parameters for the synthesis of Y2Cu2O5 nanoparticles by Taguchi method and comparison of their magnetic and optical properties with their bulk counterpart

    NASA Astrophysics Data System (ADS)

    Farbod, Mansoor; Rafati, Zahra; Shoushtari, Morteza Zargar

    2016-06-01

    Y2Cu2O5 nanoparticles were synthesized by a sol-gel combustion method, and the effects of different factors on the size of the nanoparticles were investigated. In order to reduce the number of experimental stages, the Taguchi robust design method was employed. The citric acid:Cu²⁺ molar ratio, pH, sintering temperature and sintering time were chosen as the parameters for optimization. Among these factors, the solution pH had the most influence, and the others had nearly the same influence on the nanoparticle size. Based on the conditions predicted by the Taguchi design, a sample with a minimum particle size of 47 nm was prepared. The magnetic behavior of the Y2Cu2O5 nanoparticles was measured, and it was found that at low fields they are soft ferromagnetic while at high fields they behave paramagnetically. The magnetic behavior of the nanoparticles was compared with that of their bulk counterpart; the Mr of the samples was slightly different, but the Hc of the nanoparticles was 76% of that of the bulk sample. The maximum absorbance peak of the UV-vis spectrum showed a blue shift for the smaller particles.

  12. Identification of Dysfunctional Cooperative Learning Teams Using Taguchi Quality Indexes

    ERIC Educational Resources Information Center

    Hsiung, Chin-Min

    2011-01-01

    In this study, dysfunctional cooperative learning teams are identified by comparing the Taguchi "larger-the-better" quality index for the academic achievement of students in a cooperative learning condition with that of students in an individualistic learning condition. In performing the experiments, 42 sophomore mechanical engineering students…

  13. Optimization of Nanostructuring Burnishing Technological Parameters by Taguchi Method

    NASA Astrophysics Data System (ADS)

    Kuznetsov, V. P.; Dmitriev, A. I.; Anisimova, G. S.; Semenova, Yu V.

    2016-04-01

    On the basis of the Taguchi optimization method, an approach for studying the influence of the technological parameters of nanostructuring burnishing, with the surface layer microhardness as the criterion, is developed. Optimal values of the burnishing force, feed and number of tool passes are defined for the hardening treatment of hardened AISI 420 steel.

  14. Two-stage microbial community experimental design.

    PubMed

    Tickle, Timothy L; Segata, Nicola; Waldron, Levi; Weingart, Uri; Huttenhower, Curtis

    2013-12-01

    Microbial community samples can be efficiently surveyed in high throughput by sequencing markers such as the 16S ribosomal RNA gene. Often, a collection of samples is then selected for subsequent metagenomic, metabolomic or other follow-up. Two-stage study design has long been used in ecology but has not yet been studied in-depth for high-throughput microbial community investigations. To avoid ad hoc sample selection, we developed and validated several purposive sample selection methods for two-stage studies (that is, biological criteria) targeting differing types of microbial communities. These methods select follow-up samples from large community surveys, with criteria including samples typical of the initially surveyed population, targeting specific microbial clades or rare species, maximizing diversity, representing extreme or deviant communities, or identifying communities distinct or discriminating among environment or host phenotypes. The accuracies of each sampling technique and their influences on the characteristics of the resulting selected microbial community were evaluated using both simulated and experimental data. Specifically, all criteria were able to identify samples whose properties were accurately retained in 318 paired 16S amplicon and whole-community metagenomic (follow-up) samples from the Human Microbiome Project. Some selection criteria resulted in follow-up samples that were strongly non-representative of the original survey population; diversity maximization particularly undersampled community configurations. Only selection of intentionally representative samples minimized differences in the selected sample set from the original microbial survey. An implementation is provided as the microPITA (Microbiomes: Picking Interesting Taxa for Analysis) software for two-stage study design of microbial communities. PMID:23949665
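
    The "maximizing diversity" criterion described above can be approximated by a greedy max-min selection over community profiles; the sketch below is a simplified stand-in, not the actual microPITA implementation, and the profiles are randomly generated placeholders:

      # Greedy max-min selection of follow-up samples that spread across community
      # space -- a simplified sketch of a "maximize diversity" criterion.
      import numpy as np

      rng = np.random.default_rng(0)
      profiles = rng.dirichlet(np.ones(20), size=50)      # 50 mock relative-abundance profiles

      def greedy_maxmin(x, n_select):
          chosen = [0]                                    # seed with an arbitrary sample
          dists = np.linalg.norm(x - x[0], axis=1)
          while len(chosen) < n_select:
              nxt = int(np.argmax(dists))                 # farthest from everything chosen so far
              chosen.append(nxt)
              dists = np.minimum(dists, np.linalg.norm(x - x[nxt], axis=1))
          return chosen

      print("selected sample indices:", greedy_maxmin(profiles, 5))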

  15. Preparation of photocatalytic ZnO nanoparticles and application in photochemical degradation of betamethasone sodium phosphate using taguchi approach

    NASA Astrophysics Data System (ADS)

    Giahi, M.; Farajpour, G.; Taghavi, H.; Shokri, S.

    2014-07-01

    In this study, ZnO nanoparticles were prepared by a sol-gel method for the first time. The Taguchi method was used to identify the several factors that may affect the degradation percentage of betamethasone sodium phosphate in wastewater in the UV/K2S2O8/nano-ZnO system. Our experimental design consisted of testing five factors, i.e., dosage of K2S2O8, concentration of betamethasone sodium phosphate, amount of ZnO, irradiation time and initial pH, with four levels of each factor tested. It was found that the optimum parameters are irradiation time, 180 min; pH 9.0; betamethasone sodium phosphate, 30 mg/L; amount of ZnO, 13 mg; K2S2O8, 1 mM. The percentage contribution of each factor was determined by the analysis of variance (ANOVA). The results showed that irradiation time, pH, amount of ZnO, drug concentration and dosage of K2S2O8 contributed 46.73, 28.56, 11.56, 6.70, and 6.44%, respectively. Finally, the kinetics of the process was studied and the photodegradation rate of betamethasone sodium phosphate was found to obey a pseudo-first-order kinetics equation represented by the Langmuir-Hinshelwood model.

  16. Enhancement of process capability for strip force of tight sets of optical fiber using Taguchi's Quality Engineering

    NASA Astrophysics Data System (ADS)

    Lin, Wen-Tsann; Wang, Shen-Tsu; Li, Meng-Hua; Huang, Chiao-Tzu

    2012-03-01

    Strip force is the key quality indicator during the manufacturing of tight sets of optical fiber. This study used Integrated computer-aided manufacturing DEFinition 0 (IDEF0) modeling to examine the detailed cladding processes of tight sets of fiber in a transnational optical connector manufacturer. The results showed that the key factor causing an unstable interface connection is the extruder adjustment process. The factors causing improper strip force were analyzed through the literature, practice, and grey relational analysis. The parameter design method of Taguchi's Quality Engineering was used to determine the optimal experimental combinations for processes of tight sets of fiber. This study employed an empirical case analysis to obtain a model for improving the strip-force process of tight sets of fiber and to determine the correlated factors that affect process quality for tight sets of fiber. The findings indicated that the process capability index (Cpk) increased significantly, which can facilitate improvement of the product process capability and quality. The empirical results can serve as a reference for improving product quality in the optical fiber industry.
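
    The process capability index cited above is conventionally computed as Cpk = min(USL - mean, mean - LSL) / (3*sigma); the sketch below uses hypothetical specification limits and strip-force measurements, not the study's data:

      # Process capability index (Cpk) for strip force.
      # Specification limits and measurements are hypothetical placeholders.
      import numpy as np

      strip_force = np.array([1.52, 1.61, 1.48, 1.55, 1.58, 1.50, 1.63, 1.57])  # N, made up
      lsl, usl = 1.30, 1.90                                                     # assumed spec limits

      mean, std = strip_force.mean(), strip_force.std(ddof=1)
      cpk = min(usl - mean, mean - lsl) / (3 * std)
      print(f"Cpk = {cpk:.2f}")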

  17. Effect of olive mill waste addition on the properties of porous fired clay bricks using Taguchi method.

    PubMed

    Sutcu, Mucahit; Ozturk, Savas; Yalamac, Emre; Gencel, Osman

    2016-10-01

    Production of porous clay bricks lightened by adding olive mill waste as a pore-making additive was investigated. Factors influencing the brick manufacturing process were analyzed by an experimental design, the Taguchi method, to find the most favorable conditions for the production of bricks. The optimum process conditions for brick preparation were investigated by studying the effects of mixture ratios (0, 5 and 10 wt%) and firing temperatures (850, 950 and 1050 °C) on the physical, thermal and mechanical properties of the bricks. Apparent density, bulk density, apparent porosity, water absorption, compressive strength, thermal conductivity, microstructure and crystalline phase formation of the fired brick samples were measured. It was found that the use of 10% waste addition reduced the bulk density of the samples to 1.45 g/cm³. As the porosity increased from 30.8 to 47.0%, the compressive strength decreased from 36.9 to 10.26 MPa at a firing temperature of 950 °C. The thermal conductivity of samples fired at the same temperature showed a decrease of 31%, from 0.638 to 0.436 W/mK, which is promising for heat insulation in buildings. Increasing the firing temperature also affected the mechanical and physical properties. This study showed that olive mill waste can be used as a pore maker in brick production. PMID:27343435

  18. Laser Doppler vibrometer: unique use of DOE/Taguchi methodologies in the arena of pyroshock (10 to 100,000 HZ) response spectrum

    NASA Astrophysics Data System (ADS)

    Litz, C. J., Jr.

    1994-09-01

    Discussed is the unique application of design of experiments (DOE) to structure and test a Taguchi L9 (3^2) factorial experimental matrix (nine tests to study two factors, each factor at three levels), utilizing an HeNe laser Doppler vibrometer and piezocrystal accelerometers to monitor the explosively induced vibrations through the frequency range of 10 to 10^5 Hz on a flat steel plate (96 × 48 × 0.25 in.). An initial discussion is presented of pyrotechnic shock, or pyroshock, which is a short-duration, high-amplitude, high-frequency transient structural response in aerospace vehicle structures following firing of an ordnance item to separate, sever missile skin, or release a structural member. The development of the shock response spectra (SRS) is detailed. The use of a laser Doppler vibrometer for generating velocity-acceleration-time histories near to and at a separation distance from the explosive, and the resulting shock response spectra plots, is detailed together with the laser Doppler vibrometer setup used. The use of DOE/Taguchi as a means of generating performance metrics, prediction equations, and response surface plots is presented as a way to statistically compare and rate the performance of the HeNe laser Doppler vibrometer with respect to two different contact-type piezoelectric crystal accelerometers mounted directly on the test plate at frequencies in the 300, 3000, and 10,000 Hz range. Specific constructive conclusions and recommendations are presented on the totally new dimension of understanding the pyroshock phenomenon with respect to the effects and interrelationships of explosive charge weight, location, and the laser Doppler recording system. The use of these valuable statistical tools in other experiments can be cost-effective and provide valuable insight to aid understanding of testing or process control by the engineering community. The superiority of the HeNe laser Doppler vibrometer performance is demonstrated.

  19. [Design and experimentation of marine optical buoy].

    PubMed

    Yang, Yue-Zhong; Sun, Zhao-Hua; Cao, Wen-Xi; Li, Cai; Zhao, Jun; Zhou, Wen; Lu, Gui-Xin; Ke, Tian-Cun; Guo, Chao-Ying

    2009-02-01

    The marine optical buoy is of important value for the calibration and validation of ocean color remote sensing, scientific observation, coastal environment monitoring, etc. A marine optical buoy system was designed which consists of a main buoy and a slave buoy. The system can synchronously measure the distribution of irradiance and radiance above the sea surface, in the layer near the sea surface and in the euphotic zone, during which some other parameters are also acquired, such as the spectral absorption and scattering coefficients of the water column and the velocity and direction of the wind. The buoy was positioned by GPS. A low-power integrated PC104 computer was used as the control core to collect data automatically. The data and commands were transmitted in real time by CDMA/GPRS wireless networks or by maritime satellite. The coastal marine experiments demonstrated that the buoy has small pitch and roll rates in high sea state conditions and thus can meet the needs of underwater radiometric measurements, that the data collection and remote transmission are reliable, and that the auto-operated anti-biofouling devices can ensure that the optical sensors work effectively for a period of several months. PMID:19445253

  20. Inulinase production by Geotrichum candidum OC-7 using migratory locusts as a new substrate and optimization process with Taguchi DOE.

    PubMed

    Canli, Ozden; Tasar, Gani Erhan; Taskin, Mesut

    2013-09-01

    Utilization of migratory locusts (Locusta migratoria) as a main substrate, owing to their high protein content, for inulinase (2,1-β-d-fructan fructanohydrolase) production by Geotrichum candidum OC-7 was investigated in this study. To optimize the fermentation conditions, four influential factors (locust powder (LP) concentration, sucrose concentration, pH and fermentation time) at three levels were investigated using a Taguchi orthogonal array (OA) design of experiments (DOE). The inulinase yield obtained from the experiments designed according to the Taguchi L9 OA was processed with Minitab 15 software using 'larger is better' as the quality characteristic. The results showed that the optimal fermentation conditions were LP 30 g/l, sucrose 20 g/l, pH 6.0 and a fermentation time of 48 h. Maximum inulinase activity was recorded as 30.12 U/ml, which was close to the predicted value (30.56 U/ml). To verify the results, an analysis of variance test was employed. LP had the greatest contribution (71.96%) among the factors; sucrose had a lower contribution (13.96%) than LP. This result demonstrated that LP has a strong effect on inulinase activity and can be used for enzyme production. The Taguchi DOE application enhanced enzyme activity by about 3.05-fold versus the unoptimized condition and 2.34-fold versus the control medium. Consequently, higher inulinase production can be achieved by the utilization of an edible insect material as an alternative substrate, and the Taguchi DOE presents a suitable optimization method for this biotechnological process. PMID:22495518

  1. SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier

    PubMed Central

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

    Recently, support vector machines (SVM) have shown excellent performance in classification and prediction and are widely used in disease diagnosis and medical assistance. However, SVM functions well only on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306
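
    A rough sketch of the SVM-RFE plus parameter-tuning pipeline described above; because the Dermatology and Zoo datasets are not bundled with scikit-learn, the iris data stand in, and an exhaustive grid over C and gamma stands in for the Taguchi parameter search used in the paper:

      # Sketch of SVM-RFE feature selection followed by C/gamma tuning on a stand-in dataset.
      from sklearn.datasets import load_iris
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import GridSearchCV
      from sklearn.svm import SVC

      X, y = load_iris(return_X_y=True)

      # Rank features with a linear SVM and keep the top two.
      selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=2).fit(X, y)
      X_sel = selector.transform(X)

      # Tune C and gamma of an RBF SVM on the selected features (grid stands in for Taguchi search).
      grid = GridSearchCV(SVC(kernel="rbf"),
                          {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                          cv=5).fit(X_sel, y)
      print("kept features:", selector.support_,
            "best params:", grid.best_params_,
            "CV accuracy: %.3f" % grid.best_score_)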

  2. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same
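
    The experiment-selection principle described above (pick the experiment whose distribution of expected results has maximum entropy) can be illustrated with a toy one-parameter model; the linear model, grids and noise level below are illustrative assumptions, not the robotic-arm setup of the thesis:

      # Toy sketch of entropy-based experiment selection: choose the measurement
      # setting whose predicted-outcome distribution has maximum entropy under the
      # current posterior.
      import numpy as np

      a_grid = np.linspace(0.0, 2.0, 81)               # hypotheses for slope a in y = a*x
      posterior = np.ones_like(a_grid) / a_grid.size   # start from a flat posterior
      sigma = 0.1                                      # assumed measurement noise s.d.
      candidate_x = np.linspace(0.1, 1.0, 10)          # candidate experiment settings
      y_grid = np.linspace(-0.5, 2.5, 301)
      dy = y_grid[1] - y_grid[0]

      def predictive_entropy(x):
          # p(y|x) = sum_a p(a) * N(y; a*x, sigma^2), discretised on y_grid
          means = a_grid * x
          like = np.exp(-0.5 * ((y_grid[:, None] - means[None, :]) / sigma) ** 2)
          like /= sigma * np.sqrt(2 * np.pi)
          p_y = like @ posterior
          p_y /= p_y.sum() * dy
          return -np.sum(p_y * np.log(p_y + 1e-300)) * dy

      best = max(candidate_x, key=predictive_entropy)
      print("most informative next measurement: x =", round(best, 2))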

  3. Web Based Learning Support for Experimental Design in Molecular Biology.

    ERIC Educational Resources Information Center

    Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob

    An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…

  4. Multiresponse Optimization of Laser Cladding Steel + VC Using Grey Relational Analysis in the Taguchi Method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhe; Kovacevic, Radovan

    2016-05-01

    Laser cladding of metal matrix composite coatings (MMCs) has become an effective and economic method to improve the wear resistance of mechanical components. The clad quality characteristics such as clad height, carbide fraction, carbide dissolution, and matrix hardness in MMCs determine the wear resistance of the coatings. These clad quality characteristics are influenced greatly by the laser cladding processing parameters. In this study, American Iron and Steel Institute (AISI) 420 + 20% vanadium carbide (VC) was deposited on mild steel with a high powder direct diode laser. The Taguchi-based Grey relational method was used to optimize the laser cladding processing parameters (laser power, scanning speed, and powder feed rate) with the consideration of multiple clad characteristics related to wear resistance (clad height, carbide volume fraction, and Fe-matrix hardness). A Taguchi L9 orthogonal array was designed to study the effects of processing parameters on each response. The contribution and significance of each processing parameter on each clad characteristic were investigated by the analysis of variance (ANOVA). The Grey relational grade acquired from Grey relational analysis was used as the performance characteristic to obtain the optimal combination of processing parameters. Based on the optimal processing parameters, the phases and microstructure of the laser-cladded coating were characterized by using x-ray diffraction (XRD) and scanning electron microscopy (SEM) with energy-dispersive spectroscopy (EDS).
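
    The Grey relational grade used above combines several normalised responses into a single index: deviations from the ideal sequence are turned into Grey relational coefficients with a distinguishing coefficient (commonly 0.5) and averaged per run. The response values in the sketch below are placeholders, not the paper's measurements:

      # Sketch of the Grey relational grade computation for multi-response Taguchi runs.
      import numpy as np

      # columns: clad height, carbide fraction, hardness (all treated as larger-the-better; made up)
      y = np.array([
          [0.8, 12.0, 480.], [1.0, 15.0, 520.], [1.2, 10.0, 450.],
          [0.9, 14.0, 505.], [1.1, 11.0, 470.], [0.7, 16.0, 530.],
          [1.3, 13.0, 495.], [0.6,  9.0, 440.], [1.0, 12.5, 485.],
      ])

      # larger-the-better normalisation to [0, 1]
      norm = (y - y.min(axis=0)) / (y.max(axis=0) - y.min(axis=0))

      delta = 1.0 - norm                      # deviation from the ideal sequence
      zeta = 0.5                              # distinguishing coefficient
      grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
      grade = grc.mean(axis=1)                # Grey relational grade per run
      print("best run:", int(np.argmax(grade)) + 1, "grades:", np.round(grade, 3))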

  6. Multiple performance characteristics optimization for Al 7075 on electric discharge drilling by Taguchi grey relational theory

    NASA Astrophysics Data System (ADS)

    Khanna, Rajesh; Kumar, Anish; Garg, Mohinder Pal; Singh, Ajit; Sharma, Neeraj

    2015-05-01

    An electric discharge drill machine (EDDM) uses a spark erosion process to produce micro-holes in conductive materials. This process is widely used in the aerospace, medical, dental and automobile industries. For the performance evaluation of an electric discharge drilling machine, it is necessary to study the process parameters of the machine tool. In this research paper, a brass rod of 2 mm diameter was selected as the tool electrode. The experiments generate output responses such as tool wear rate (TWR). Parameters such as pulse on-time, pulse off-time and water pressure were studied to obtain the best machining characteristics. This investigation presents the use of the Taguchi approach for better TWR in drilling of Al-7075. A plan of experiments based on the L27 Taguchi design method was selected for drilling of the material. Analysis of variance (ANOVA) shows the percentage contribution of each control factor in the machining of Al-7075 by EDDM. The optimal combination of levels and the significant drilling parameters for TWR were obtained. The optimization results showed that the combination of maximum pulse on-time and minimum pulse off-time gives maximum MRR.

  7. EXPERIMENTAL DESIGN: STATISTICAL CONSIDERATIONS AND ANALYSIS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this book chapter, information on how field experiments in invertebrate pathology are designed and the data collected, analyzed, and interpreted is presented. The practical and statistical issues that need to be considered and the rationale and assumptions behind different designs or procedures ...

  8. Conceptual design report, CEBAF basic experimental equipment

    SciTech Connect

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  9. Experimental Stream Facility: Design and Research

    EPA Science Inventory

    The Experimental Stream Facility (ESF) is a valuable research tool for the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development’s (ORD) laboratories in Cincinnati, Ohio. This brochure describes the ESF, which is one of only a handful of research facilit...

  10. Experimental Validation of an Integrated Controls-Structures Design Methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.

    1996-01-01

    The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.

  11. Optimization of lactic acid production in SSF by Lactobacillus amylovorus NRRL B-4542 using Taguchi methodology.

    PubMed

    Nagarjun, Pyde Acharya; Rao, Ravella Sreenivas; Rajesham, Swargam; Rao, Linga Venkateswar

    2005-02-01

    Lactic acid production parameter optimization using Lactobacillus amylovorus NRRL B-4542 was performed using the design of experiments (DOE) available in the form of an orthogonal array and a software package for automatic design and analysis of the experiments, both based on the Taguchi protocol. Optimal levels of physical parameters and key media components, namely temperature, pH, inoculum size, moisture, yeast extract, MgSO4·7H2O, Tween 80, and corn steep liquor (CSL), were determined. Among the physical parameters, temperature contributed the highest influence, and among the media components, yeast extract, MgSO4·7H2O, and Tween 80 played important roles in the conversion of starch to lactic acid. The expected yield of lactic acid under these optimal conditions was 95.80% and the actual yield at the optimum conditions was 93.50%. PMID:15765056

  12. The Implications of "Contamination" for Experimental Design in Education

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  13. Irradiation Design for an Experimental Murine Model

    SciTech Connect

    Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.

    2010-12-07

    In radiotherapy and stereotactic radiosurgery, small animal experimental models are frequently used, since many questions about the biological and biochemical effects of ionizing radiation remain unsolved. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV Linac. This rodent model is focused on research into the inflammatory effects produced by ionizing radiation in the brain. In this work, comparisons between Pencil Beam and Monte Carlo techniques were used to evaluate the accuracy of the dose calculated with a commercial planning system. Challenges in this murine model are discussed.

  14. Experimental Design for Vector Output Systems

    PubMed Central

    Banks, H.T.; Rehm, K.L.

    2013-01-01

    We formulate an optimal design problem for the selection of the best states to observe and the optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy are illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655

  15. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density, strength, and effects of under and over firing to ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.

  16. Influence of process parameters on the content of biomimetic calcium phosphate coating on titanium: a Taguchi analysis.

    PubMed

    Thammarakcharoen, Faungchat; Suvannapruk, Waraporn; Suwanprateeb, Jintamai

    2014-10-01

    In this study, a statistical design-of-experiments methodology based on a Taguchi orthogonal design has been used to study the effect of various processing parameters on the amount of calcium phosphate coating produced by this technique. Seven control factors with three levels each, including sodium hydroxide concentration, pretreatment temperature, pretreatment time, cleaning method, coating time, coating temperature and surface area to solution volume ratio, were studied. X-ray diffraction revealed that all the coatings consisted of a mixture of octacalcium phosphate (OCP) and hydroxyapatite (HA), and the presence of each phase depended on the process conditions used. Varying contents and sizes (~1-100 μm) of isolated spheroid particles with nanosized plate-like morphology deposited on the titanium surface, or a continuous layer of plate-like nanocrystals having a plate thickness in the range of ~100-300 nm and a plate width in the range of 3-8 μm, were formed depending on the process conditions employed. The optimum condition of using a sodium hydroxide concentration of 1 M, pretreatment temperature of 70°C, pretreatment time of 24 h, cleaning by ultrasonic, coating time of 6 h, coating temperature of 50°C and surface area to solution volume ratio of 32.74 for producing the greatest amount of coating on the titanium surface was predicted and validated. In addition, coating temperature was found to be the dominant factor with the greatest contribution to the coating formation, while coating time and cleaning method were significant factors. Other factors had negligible effects on the coating performance. PMID:25942836

  17. Process improvement in laser hot wire cladding for martensitic stainless steel based on the Taguchi method

    NASA Astrophysics Data System (ADS)

    Huang, Zilin; Wang, Gang; Wei, Shaopeng; Li, Changhong; Rong, Yiming

    2016-07-01

    Laser hot wire cladding, with the prominent features of low heat input, high energy efficiency, and high precision, is widely used for remanufacturing metal parts. The cladding process, however, needs to be improved by using a quantitative method. In this work, the volumetric defect ratio was proposed as the criterion to describe the integrity of forming quality for cladding layers. Laser deposition experiments with FV520B, one of the martensitic stainless steels, were designed by using the Taguchi method. Four process variables, namely, laser power (P), scanning speed (Vs), wire feed rate (Vf), and wire current (I), were optimized based on the analysis of the signal-to-noise (S/N) ratio. Metallurgic observation of the cladding layer was conducted to compare the forming quality and to validate the analysis method. A stable and continuous process with the optimum parameter combination produced uniform microstructure with minimal defects and cracks, which resulted in a good metallurgical bonding interface.

  18. Nitric acid treated multi-walled carbon nanotubes optimized by Taguchi method

    NASA Astrophysics Data System (ADS)

    Shamsuddin, Shahidah Arina; Derman, Mohd Nazree; Hashim, Uda; Kashif, Muhammad; Adam, Tijjani; Halim, Nur Hamidah Abdul; Tahir, Muhammad Faheem Mohd

    2016-07-01

    The electron transfer rate (ETR) of CNTs can be enhanced by increasing the amount of COOH groups on their walls and opened tips. With the aim of achieving the highest production of COOH, the Taguchi robust design has been used for the first time to optimize the surface modification of MWCNTs by nitric acid oxidation. Three main oxidation parameters, namely acid concentration, treatment temperature and treatment time, were selected as the control factors to be optimized. The amount of COOH produced is measured using FTIR spectroscopy through the absorbance intensity. From the analysis, we found that acid concentration and treatment time had the most important influence on the production of COOH, while the treatment temperature had only an intermediate effect. The optimum amount of COOH was achieved by treatment with 8.0 M nitric acid at 120 °C for 2 hours.

  19. Study of Titanium Alloy Sheet During H-sectioned Rolling Forming Using the Taguchi Method

    SciTech Connect

    Chen, D.-C.; Gu, W.-S.; Hwang, Y.-M.

    2007-05-17

    This study employs commercial DEFORM three-dimensional finite element code to investigate the plastic deformation behavior of Ti-6Al-4V titanium alloy sheet during the H-sectioned rolling process. The simulations are based on a rigid-plastic model and assume that the upper and lower rolls are rigid bodies and that the temperature rise induced during rolling is sufficiently small that it can be ignored. The effects of the roll profile, the friction factor between the rolls and the titanium alloy, the rolling temperature and the roll radii on the rolling force, the roll torque and the effective strain induced in the rolled product are examined. The Taguchi method is employed to optimize the H-sectioned rolling process parameters. The results confirm the effectiveness of this robust design methodology in optimizing the H-sectioned rolling process parameters for the current Ti-6Al-4V titanium alloy.

  20. Analysis of spinal lumbar interbody fusion cage subsidence using Taguchi method, finite element analysis, and artificial neural network

    NASA Astrophysics Data System (ADS)

    Nassau, Christopher John; Litofsky, N. Scott; Lin, Yuyi

    2012-09-01

    Subsidence, when implant penetration induces failure of the vertebral body, occurs commonly after spinal reconstruction. Anterior lumbar interbody fusion (ALIF) cages may subside into the vertebral body and lead to kyphotic deformity. No previous studies have utilized an artificial neural network (ANN) for the design of a spinal interbody fusion cage. In this study, the neural network was applied after initiation from a Taguchi L18 orthogonal design array. Three-dimensional finite element analysis (FEA) was performed to address the resistance to subsidence based on the design changes of the material and cage contact region, including design of the ridges and size of the graft area. The calculated subsidence is derived from the ANN objective function which is defined as the resulting maximum von Mises stress (VMS) on the surface of a simulated bone body after axial compressive loading. The ANN was found to have minimized the bone surface VMS, thereby optimizing the ALIF cage given the design space. Therefore, the Taguchi-FEA-ANN approach can serve as an effective procedure for designing a spinal fusion cage and improving the biomechanical properties.

  1. Simultaneous optimal experimental design for in vitro binding parameter estimation.

    PubMed

    Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C

    2013-10-01

    Simultaneous optimization of in vitro ligand binding studies was performed using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a general optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good as or better precision of the parameter estimates compared to the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples. PMID:23943088
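
    The D-optimality criterion used above can be illustrated with a far smaller calculation than PopED performs: build the Fisher information matrix from the model's parameter sensitivities at candidate sampling times and keep the schedule that maximizes its determinant. The sketch below is not PopED and ignores mixed effects and ED-optimality; the single-site association model, parameter values and candidate times are assumptions made only for illustration.

        # Sketch of D-optimal sampling-time selection for a simple association-kinetics model
        # y(t) = Bmax*(1 - exp(-kobs*t)); illustrative parameter values, not the paper's system.
        import itertools
        import numpy as np

        Bmax, kobs, sigma = 100.0, 0.3, 5.0          # assumed "true" parameters and residual SD

        def jacobian(times):
            t = np.asarray(times, float)
            d_Bmax = 1.0 - np.exp(-kobs * t)         # dy/dBmax at each sampling time
            d_kobs = Bmax * t * np.exp(-kobs * t)    # dy/dkobs at each sampling time
            return np.column_stack([d_Bmax, d_kobs])

        def log_det_fim(times):
            J = jacobian(times)
            return np.linalg.slogdet(J.T @ J / sigma**2)[1]   # log det of the Fisher information

        candidates = np.arange(1, 31)                # candidate times, minutes
        best = max(itertools.combinations(candidates, 4), key=log_det_fim)
        print("D-optimal 4-point schedule (min):", best)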

  2. Design Issues and Inference in Experimental L2 Research

    ERIC Educational Resources Information Center

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…

  3. Evaluation with an Experimental Design: The Emergency School Assistance Program.

    ERIC Educational Resources Information Center

    Crain, Robert L.; York, Robert L.

    The Evaluation of the Emergency School Assistance Program (ESAP) for the 1971-72 school year is the first application of full-blown experimental design with randomized experimental and control cases in a federal evaluation of a large scale program. It is also one of the very few evaluations which has shown that federal programs can raise tested…

  4. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    ERIC Educational Resources Information Center

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…

  5. A Bayesian experimental design approach to structural health monitoring

    SciTech Connect

    Farrar, Charles; Flynn, Eric; Todd, Michael

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.

  6. Parametric study of the biopotential equation for breast tumour identification using ANOVA and Taguchi method.

    PubMed

    Ng, Eddie Y K; Ng, W Kee

    2006-03-01

    The extensive literature has shown a significant trend of progressive electrical changes according to the proliferative characteristics of breast epithelial cells. Physiologists have further postulated that malignant transformation results from sustained depolarization and a failure of the cell to repolarize after cell division, making the area where cancer develops relatively depolarized compared to its non-dividing or resting counterparts. In this paper, we present a new approach, the Biofield Diagnostic System (BDS), which might have the potential to augment the process of diagnosing breast cancer. This technique is based on the efficacy of analysing skin surface electrical potentials for the differential diagnosis of breast abnormalities. We developed a female breast model, close to the actual anatomy, by considering the breast as a hemisphere in the supine condition with various layers of unequal thickness. Isotropic homogeneous conductivity was assigned to each of these compartments and the volume conductor problem was solved using the finite element method to determine the potential distribution developed due to a dipole source. Furthermore, four important parameters were identified and analysis of variance (ANOVA, Yates' method) was performed using a 2^n factorial design (n = number of parameters = 4). The effect and importance of these parameters were analysed. The Taguchi method was further used to optimise the parameters in order to ensure that the signal from the tumour is maximal compared to the noise from other factors. The Taguchi method proved that the probes' source strength, tumour size and location of tumours have a great effect on the surface potential field. For best results on the breast surface, while having the biggest possible tumour size, low amplitudes of current should be applied nearest to the breast surface. PMID:16929931
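
    Yates' method mentioned above is a simple tabular algorithm for two-level factorials: repeated pairwise sums and differences of the responses, taken in standard order, yield the grand mean and all main and interaction effects in one pass per factor. A minimal sketch follows, using invented responses rather than the paper's surface-potential results.

        # Sketch of Yates' algorithm for a 2^k factorial design (illustrative responses).
        import numpy as np

        def yates(y):
            """Effects of a 2^k factorial; y holds responses in standard (Yates) order."""
            y = np.asarray(y, float)
            k = int(np.log2(y.size))
            col = y.copy()
            for _ in range(k):
                sums = col[0::2] + col[1::2]         # pairwise sums go to the top half
                diffs = col[1::2] - col[0::2]        # pairwise differences go to the bottom half
                col = np.concatenate([sums, diffs])
            effects = col / 2 ** (k - 1)
            effects[0] = col[0] / 2 ** k             # the first entry is the grand mean
            return effects

        # 2^4 = 16 runs in standard order: (1), a, b, ab, c, ac, bc, abc, d, ad, ...
        responses = [60, 72, 54, 68, 52, 83, 45, 80, 83, 45, 60, 50, 70, 65, 58, 90]
        print(np.round(yates(responses), 2))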

  7. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  8. Fundamentals of experimental design: lessons from beyond the textbook world

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We often think of experimental designs as analogous to recipes in a cookbook. We look for something that we like and frequently return to those that have become our long-standing favorites. We can easily become complacent, favoring the tried-and-true designs (or recipes) over those that contain unkn...

  9. Experimental Design and Multiplexed Modeling Using Titrimetry and Spreadsheets

    NASA Astrophysics Data System (ADS)

    Harrington, Peter De B.; Kolbrich, Erin; Cline, Jennifer

    2002-07-01

    The topics of experimental design and modeling are important for inclusion in the undergraduate curriculum. Many general chemistry and quantitative analysis courses introduce students to spreadsheet programs, such as MS Excel. Students in the laboratory sections of these courses use titrimetry as a quantitative measurement method. Unfortunately, the only model that students may be exposed to in introductory chemistry courses is the working curve that uses the linear model. A novel experiment based on a multiplex model has been devised for titrating several vinegar samples at a time. The multiplex titration can be applied to many other routine determinations. An experimental design model is fit to titrimetric measurements using the MS Excel LINEST function to estimate concentration from each sample. This experiment provides valuable lessons in error analysis, Class A glassware tolerances, experimental simulation, statistics, modeling, and experimental design.
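
    The multiplex model described above amounts to ordinary multiple linear regression: each titration of a mixture contributes one equation in the unknown sample concentrations, and LINEST (or any least-squares solver) recovers them all at once. A brief sketch follows with hypothetical aliquot volumes and titrant amounts, using numpy in place of Excel.

        # Sketch of the multiplex-titration fit with least squares (numpy standing in for LINEST).
        # Design matrix and NaOH amounts are invented for illustration, not the article's data.
        import numpy as np

        # Each row: aliquot volumes (mL) of vinegar samples A, B, C combined in one titration.
        X = np.array([
            [5.0, 0.0, 0.0],
            [0.0, 5.0, 0.0],
            [0.0, 0.0, 5.0],
            [5.0, 5.0, 0.0],
            [5.0, 0.0, 5.0],
            [0.0, 5.0, 5.0],
            [5.0, 5.0, 5.0],
        ])
        # Moles of NaOH consumed in each titration, with a little simulated noise.
        n_NaOH = np.array([0.0042, 0.0038, 0.0045, 0.0081, 0.0086, 0.0082, 0.0124])

        # n_NaOH is approximately X (mL) @ c (mol/mL); least squares recovers each concentration.
        c, *_ = np.linalg.lstsq(X, n_NaOH, rcond=None)
        print("acetic acid concentrations (mol/L):", np.round(c * 1000, 3))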

  10. Study/Experimental/Research Design: Much More Than Statistics

    PubMed Central

    Knight, Kenneth L.

    2010-01-01

    Abstract Context: The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand. Objective: To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. Description: The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary. Advantages: Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results. PMID:20064054

  11. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    ERIC Educational Resources Information Center

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  12. Design and Experimental Study on Spinning Solid Rocket Motor

    NASA Astrophysics Data System (ADS)

    Xue, Heng; Jiang, Chunlan; Wang, Zaicheng

    The study of a spinning solid rocket motor (SRM) used as the power plant of the twice-throwing structure of an aerial submunition is introduced. This kind of SRM, with a tangential multi-nozzle structure, consists of a combustion chamber, propellant charge, 4 tangential nozzles, ignition device, etc. Grain design, structure design and prediction of interior ballistic performance are described, and the problems that mainly need to be considered in the design are analyzed comprehensively. Finally, in order to study the working performance of the SRM and to measure the pressure-time curve and the spin speed, static and dynamic tests were conducted respectively. The calculated values and experimental data were then compared and analyzed. The results indicate that the designed motor operates normally and that the stable interior ballistic performance meets demands. The experimental results provide guidance for the pre-research design of SRMs.

  13. Decision-oriented Optimal Experimental Design and Data Collection

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Classical optimal experimental design is a branch of statistics that seeks to construct ("design") a data collection effort ("experiment") that minimizes ("optimal") the uncertainty associated with some quantity of interest. In many real-world problems, we are interested in these quantities to help us make a decision. Minimizing the uncertainty associated with the quantity can help inform the decision, but a more holistic approach is possible where the experiment is designed to maximize the information that it provides to the decision-making process. The difference is subtle, but it amounts to focusing on the end goal (the decision) rather than an intermediary (the quantity). We describe one approach to decision-oriented optimal experimental design that utilizes Bayesian-Information-Gap Decision Theory, which combines probabilistic and non-probabilistic methods for uncertainty quantification. In this approach, experimental designs that have a high probability of altering the decision are deemed worthwhile. On the other hand, experimental designs that have little or no chance of altering the decision need not be performed.

  14. Phylogenetic information and experimental design in molecular systematics.

    PubMed Central

    Goldman, N

    1998-01-01

    Despite the widespread perception that evolutionary inference from molecular sequences is a statistical problem, there has been very little attention paid to questions of experimental design. Previous consideration of this topic has led to little more than an empirical folklore regarding the choice of suitable genes for analysis, and to dispute over the best choice of taxa for inclusion in data sets. I introduce what I believe are new methods that permit the quantification of phylogenetic information in a sequence alignment. The methods use likelihood calculations based on Markov-process models of nucleotide substitution allied with phylogenetic trees, and allow a general approach to optimal experimental design. Two examples are given, illustrating realistic problems in experimental design in molecular phylogenetics and suggesting more general conclusions about the choice of genomic regions, sequence lengths and taxa for evolutionary studies. PMID:9787470

  15. A comparison of controller designs for an experimental flexible structure

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Maghami, P. G.; Joshi, S. M.

    1991-01-01

    Control systems design and hardware testing are addressed for an experimental structure that displays the characteristics of a typical flexible spacecraft. The results of designing and implementing various control design methodologies are described. The design methodologies under investigation include linear quadratic Gaussian control, static and dynamic dissipative controls, and H-infinity optimal control. Among the three controllers considered, it is shown, through computer simulation and laboratory experiments on the evolutionary structure, that the dynamic dissipative controller gave the best results in terms of vibration suppression and robustness with respect to modeling errors.

  16. Optimizing experimental design for comparing models of brain function.

    PubMed

    Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas

    2011-11-01

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485

  17. Optimization of delignification of two Pennisetum grass species by NaOH pretreatment using Taguchi and ANN statistical approach.

    PubMed

    Mohaptra, Sonali; Dash, Preeti Krishna; Behera, Sudhanshu Shekar; Thatoi, Hrudayanath

    2016-01-01

    In the bioconversion of lignocelluloses to bioethanol, pretreatment appears to be the most important step, as it improves the removal of the lignin and hemicellulose content and exposes cellulose to further hydrolysis. The present study discusses the application of dynamic statistical techniques, namely the Taguchi method and an artificial neural network (ANN), to the optimization of the pretreatment of lignocellulosic biomasses such as Hybrid Napier grass (HNG) (Pennisetum purpureum) and Denanath grass (DG) (Pennisetum pedicellatum) using alkali sodium hydroxide. Using the Taguchi method, this study analysed and determined, with a low number of experiments, a parameter combination at which both substrates can be efficiently pretreated. The optimized parameters obtained from the L16 orthogonal array are soaking time (18 and 26 h), temperature (60°C and 55°C), and alkali concentration (1%) for HNG and DG, respectively. High performance liquid chromatography analysis of the optimized pretreated grass varieties confirmed the presence of glucan (47.94% and 46.50%), xylan (9.35% and 7.95%), arabinan (2.15% and 2.2%), and galactan/mannan (1.44% and 1.52%) for HNG and DG, respectively. Physicochemical characterization studies of the native and alkali-pretreated grasses were carried out by scanning electron microscopy and Fourier transform infrared spectroscopy, which revealed some morphological differences between the native and optimized pretreated samples. Model validation by ANN showed a good agreement between the experimental results and the predicted responses. PMID:26584152

  18. Experimental Design and Power Calculation for RNA-seq Experiments.

    PubMed

    Wu, Zhijin; Wu, Hao

    2016-01-01

    Power calculation is a critical component of RNA-seq experimental design. The flexibility of RNA-seq experiment and the wide dynamic range of transcription it measures make it an attractive technology for whole transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, bring complexity in experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER. PMID:27008024

  19. Application of Taguchi approach to optimize the sol-gel process of the quaternary Cu2ZnSnS4 with good optical properties

    NASA Astrophysics Data System (ADS)

    Nkuissi Tchognia, Joël Hervé; Hartiti, Bouchaib; Ridah, Abderraouf; Ndjaka, Jean-Marie; Thevenin, Philippe

    2016-07-01

    The present research deals with the optimal deposition parameter configuration for the synthesis of Cu2ZnSnS4 (CZTS) thin films using the sol-gel method combined with spin coating on ordinary glass substrates without sulfurization. The Taguchi design with an L9 (3^4) orthogonal array, a signal-to-noise (S/N) ratio and an analysis of variance (ANOVA) are used to optimize the performance characteristic (optical band gap) of the CZTS thin films. Four deposition parameters (factors), namely the annealing temperature, the annealing time, and the ratios Cu/(Zn + Sn) and Zn/Sn, were chosen. To conduct the tests using the Taguchi method, three levels were chosen for each factor. The effects of the deposition parameters on the structural and optical properties are studied, and the factors of the deposition process most significant for the optical properties of the as-prepared films are determined. The results of applying the Taguchi method showed that the significant parameters are the Zn/Sn ratio and the annealing temperature.

  20. Application of Taguchi technique coupled with grey relational analysis for multiple performance characteristics optimization of EDM parameters on ST 42 steel

    NASA Astrophysics Data System (ADS)

    Prayogo, Galang Sandy; Lusi, Nuraini

    2016-04-01

    The optimization of machining parameters considering multiple performance characteristics of the non-conventional EDM machining process, using the Taguchi method combined with grey relational analysis (GRA), is presented in this study. ST 42 steel was chosen as the workpiece material and graphite as the electrode during this experiment. Performance characteristics such as material removal rate and overcut were selected to evaluate the effect of the machining parameters. Current, pulse on time, pulse off time and discharging time/Z down were selected as machining parameters. The experiments were conducted by varying these machining parameters over three levels each. Based on the Taguchi quality design concept, an L27 orthogonal array table was chosen for the experiments. By using the combination of GRA and Taguchi, the optimization of complicated multiple performance characteristics was transformed into the optimization of a single response performance index. Optimal levels of the machining parameters were identified using the grey relational analysis method. The statistical application of analysis of variance was used to determine the relatively significant machining parameters. The result of the confirmation test indicated that the determined optimal combination of machining parameters effectively improves the performance characteristics of the EDM machining process on ST 42 steel.
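
    The grey relational step that turns the two responses into a single index can be sketched compactly: normalize each response toward its ideal direction, convert the deviations into grey relational coefficients, and average them into a grade per run. The nine runs and response values below are illustrative placeholders, not the ST 42 measurements.

        # Sketch of grey relational analysis for two EDM responses: material removal rate
        # (larger-the-better) and overcut (smaller-the-better). Values are illustrative only.
        import numpy as np

        mrr = np.array([4.2, 5.1, 6.3, 3.8, 7.0, 5.6, 4.9, 6.8, 5.4])               # mm^3/min
        overcut = np.array([0.21, 0.18, 0.25, 0.15, 0.30, 0.22, 0.19, 0.27, 0.20])  # mm

        def normalize(y, larger_is_better):
            y = np.asarray(y, float)
            if larger_is_better:
                return (y - y.min()) / (y.max() - y.min())
            return (y.max() - y) / (y.max() - y.min())

        def grey_coefficient(norm, zeta=0.5):
            delta = 1.0 - norm                       # deviation from the ideal sequence
            return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

        grade = np.mean([grey_coefficient(normalize(mrr, True)),
                         grey_coefficient(normalize(overcut, False))], axis=0)
        print("grey relational grades:", np.round(grade, 3))
        print("best run (1-indexed):", int(np.argmax(grade)) + 1)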

  1. Design and experimental characterization of a multifrequency flexural ultrasonic actuator.

    PubMed

    Iula, Antonio

    2009-08-01

    In this work, a multifrequency flexural ultrasonic actuator is proposed, designed, and experimentally characterized. The actuator is composed of a Langevin transducer and of a displacement amplifier. The displacement amplifier is able to transform the almost flat axial displacement provided by the Langevin transducer at its back end into a flexural deformation that produces the maximum axial displacement at the center of its front end. Design and analysis of the actuator have been performed by using finite element method software. In analogy to classical power actuators that use sectional concentrators, the design criterion that has been followed was to design the Langevin transducer and the flexural amplifier separately at the same working frequency. As opposed to sectional concentrators, the flexural amplifier has several design parameters that allow a wide flexibility in the design. The flexural amplifier has been designed to produce a very high displacement amplification. It has also been designed in such a way that the whole actuator has 2 close working frequencies (17.4 kHz and 19.2 kHz), with similar flexural deformations of the front surface. A first prototype of the actuator has been manufactured and experimentally characterized to validate the numerical analysis. PMID:19686988

  2. A Realistic Experimental Design and Statistical Analysis Project

    ERIC Educational Resources Information Center

    Muske, Kenneth R.; Myers, John A.

    2007-01-01

    A realistic applied chemical engineering experimental design and statistical analysis project is documented in this article. This project has been implemented as part of the professional development and applied statistics courses at Villanova University over the past five years. The novel aspects of this project are that the students are given a…

  3. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…

  4. Using Propensity Score Methods to Approximate Factorial Experimental Designs

    ERIC Educational Resources Information Center

    Dong, Nianbo

    2011-01-01

    The purpose of this study is through Monte Carlo simulation to compare several propensity score methods in approximating factorial experimental design and identify best approaches in reducing bias and mean square error of parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…

  5. Single-Subject Experimental Design for Evidence-Based Practice

    ERIC Educational Resources Information Center

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2012-01-01

    Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors…

  6. Bands to Books: Connecting Literature to Experimental Design

    ERIC Educational Resources Information Center

    Bintz, William P.; Moore, Sara Delano

    2004-01-01

    This article describes an interdisciplinary unit of study on the inquiry process and experimental design that seamlessly integrates math, science, and reading using a rubber band cannon. This unit was conducted over an eight-day period in two sixth-grade classes (one math and one science with each class consisting of approximately 27 students and…

  7. Experimental design for single point diamond turning of silicon optics

    SciTech Connect

    Krulewich, D.A.

    1996-06-16

    The goal of these experiments is to determine optimum cutting factors for the machining of silicon optics. This report describes experimental design, a systematic method of selecting optimal settings for a limited set of experiments, and its use in the silicon-optics turning experiments. 1 fig., 11 tabs.

  8. Experimental Design to Evaluate Directed Adaptive Mutation in Mammalian Cells

    PubMed Central

    Chiaro, Christopher R; May, Tobias

    2014-01-01

    Background We describe the experimental design for a methodological approach to determine whether directed adaptive mutation occurs in mammalian cells. Identification of directed adaptive mutation would have profound practical significance for a wide variety of biomedical problems, including disease development and resistance to treatment. In adaptive mutation, the genetic or epigenetic change is not random; instead, the presence and type of selection influences the frequency and character of the mutation event. Adaptive mutation can contribute to the evolution of microbial pathogenesis, cancer, and drug resistance, and may become a focus of novel therapeutic interventions. Objective Our experimental approach was designed to distinguish between 3 types of mutation: (1) random mutations that are independent of selective pressure, (2) undirected adaptive mutations that arise when selective pressure induces a general increase in the mutation rate, and (3) directed adaptive mutations that arise when selective pressure induces targeted mutations that specifically influence the adaptive response. The purpose of this report is to introduce an experimental design and describe limited pilot experiment data (not to describe a complete set of experiments); hence, it is an early report. Methods An experimental design based on immortalization of mouse embryonic fibroblast cells is presented that links clonal cell growth to reversal of an inactivating polyadenylation site mutation. Thus, cells exhibit growth only in the presence of both the countermutation and an inducing agent (doxycycline). The type and frequency of mutation in the presence or absence of doxycycline will be evaluated. Additional experimental approaches would determine whether the cells exhibit a generalized increase in mutation rate and/or whether the cells show altered expression of error-prone DNA polymerases or of mismatch repair proteins. Results We performed the initial stages of characterizing our system

  9. Optimization of FS Welding Parameters for Improving Mechanical Behavior of AA2024-T351 Joints Based on Taguchi Method

    NASA Astrophysics Data System (ADS)

    Vidal, C.; Infante, V.

    2013-08-01

    In the present study, a design-of-experiments technique, the Taguchi method, has been used to optimize the friction stir welding (FSW) parameters for improving the mechanical behavior of AA2024-T351 joints. The parameters considered were vertical downward forging force, tool travel speed, and probe length. An L9 (3^4) orthogonal array was used; ANOVA analyses were carried out to identify the significant factors affecting tensile strength (Global Efficiency to Tensile Strength—GETS), bending strength (Global Efficiency to Bending—GEB), and the hardness field. The percentage contribution of each parameter was also determined. As a result of the Taguchi analysis in this study, the probe length is the most significant parameter for GETS, and the tool travel speed is the most important parameter affecting both the GEB and the hardness field. An algebraic model for predicting the best mechanical performance, namely fatigue resistance, was developed and the optimal FSW combination was determined using this model. The results obtained were validated by conducting confirmation tests, the results of which verify the adequacy and effectiveness of this approach.
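
    The percentage contribution reported in studies like this one is each factor's between-level sum of squares divided by the total sum of squares over the orthogonal-array runs. A minimal sketch follows; the L9 layout is standard, but the response values and factor labels are invented for illustration.

        # Sketch of the percentage-contribution calculation for an L9 Taguchi experiment
        # (illustrative response values, not the measured GETS of the welded joints).
        import numpy as np

        levels = np.array([
            [1, 1, 1], [1, 2, 2], [1, 3, 3],
            [2, 1, 2], [2, 2, 3], [2, 3, 1],
            [3, 1, 3], [3, 2, 1], [3, 3, 2],
        ])                                           # columns: forging force, travel speed, probe length
        response = np.array([71, 78, 92, 74, 88, 81, 69, 83, 90.0])

        grand_mean = response.mean()
        ss_total = ((response - grand_mean) ** 2).sum()

        for name, col in zip(["forging force", "travel speed", "probe length"], levels.T):
            ss_factor = sum((response[col == lvl].mean() - grand_mean) ** 2 * (col == lvl).sum()
                            for lvl in (1, 2, 3))
            print(f"{name}: contribution = {100 * ss_factor / ss_total:.1f} %")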

  10. Development of a cell formation heuristic by considering realistic data using principal component analysis and Taguchi's method

    NASA Astrophysics Data System (ADS)

    Kumar, Shailendra; Sharma, Rajiv Kumar

    2015-12-01

    Over the last four decades of research, numerous cell formation algorithms have been developed and tested; still, this research remains of interest to this day. Appropriate manufacturing cell formation is the first step in designing a cellular manufacturing system. In cellular manufacturing, consideration of manufacturing flexibility and production-related data is vital for cell formation. Consideration of this realistic data makes the cell formation problem very complex and tedious, which has led to the invention and implementation of highly advanced and complex cell formation methods. In this paper an effort has been made to develop a simple and easy-to-understand/implement manufacturing cell formation heuristic procedure that considers a number of production and manufacturing flexibility-related parameters. The heuristic minimizes inter-cellular movement cost/time. Further, the proposed heuristic is modified for the application of principal component analysis and Taguchi's method. A numerical example is explained to illustrate the approach. A refinement in the results is observed with the adoption of principal component analysis and Taguchi's method.

  11. Efficient experimental design for uncertainty reduction in gene regulatory networks

    PubMed Central

    2015-01-01

    Background An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. Results The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Conclusions Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/. PMID:26423515

  12. Criteria for the optimal design of experimental tests

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    Some of the basic concepts developed for the problem of finding optimal approximating functions that relate a set of controlled variables to a measurable response are unified. The techniques have the potential for reducing the amount of testing required in experimental investigations. Specifically, two low-order polynomial models are considered as approximations to unknown functional relationships. For each model, optimal means of designing experimental tests are presented which, for a modest number of measurements, yield prediction equations that minimize the error of an estimated response anywhere inside a selected region of experimentation. Moreover, examples are provided for both models to illustrate their use. Finally, an analysis of a second-order prediction equation is given to illustrate ways of determining maximum or minimum responses inside the experimentation region.
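
    For the second-order model discussed here, the prediction equation is a quadratic in the controlled variables, and a maximum or minimum response inside the region corresponds to the stationary point of the fitted surface. The sketch below fits such a model to an invented two-variable central-composite data set and solves for that point; it is illustrative only.

        # Sketch of fitting a second-order prediction equation and locating its stationary point.
        # Design points (central composite) and responses are invented for illustration.
        import numpy as np

        x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1.414, 1.414, 0, 0])
        x2 = np.array([-1, 1, -1, 1, 0, 0, 0, 0, 0, -1.414, 1.414])
        y = np.array([76, 79, 80, 86, 90, 89, 91, 78, 84, 77, 83.0])

        # Model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
        X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(X, y, rcond=None)[0]

        B = np.array([[b11, b12 / 2], [b12 / 2, b22]])   # quadratic-form matrix of the surface
        x_stationary = -0.5 * np.linalg.solve(B, np.array([b1, b2]))
        print("stationary point (coded units):", np.round(x_stationary, 3))
        print("predicted response there:", round(b0 + np.array([b1, b2]) @ x_stationary
                                                 + x_stationary @ B @ x_stationary, 2))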

  13. Biosorption of malachite green from aqueous solutions by Pleurotus ostreatus using Taguchi method.

    PubMed

    Chen, Zhengsuo; Deng, Hongbo; Chen, Can; Yang, Ying; Xu, Heng

    2014-01-01

    Dyes released into the environment have been posing a serious threat to natural ecosystems and aquatic life because they remain stable under heat, light, chemical and other exposures. In this study, Pleurotus ostreatus (a macro-fungus) was used as a new biosorbent to study the biosorption of hazardous malachite green (MG) from aqueous solutions. The effective disposal of P. ostreatus is meaningful work for environmental protection and maximum utilization of agricultural residues. The operational parameters such as biosorbent dose, pH, and ionic strength were investigated in a series of batch studies at 25°C. The Freundlich isotherm model described the biosorption equilibrium data well. The biosorption process followed the pseudo-second-order kinetic model. The Taguchi method was used to reduce the number of experiments required for determining the significance of factors and the optimum levels of experimental factors for MG biosorption. Biosorbent dose and initial MG concentration had significant influences on the percent removal and biosorption capacity. The highest percent removal reached 89.58% and the largest biosorption capacity reached 32.33 mg/g. Fourier transform infrared spectroscopy (FTIR) showed that functional groups such as carboxyl, hydroxyl, amino and phosphonate groups on the biosorbent surface could be the potential adsorption sites for MG biosorption. P. ostreatus can be considered as an alternative biosorbent for the removal of dyes from aqueous solutions. PMID:24620852
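
    The pseudo-second-order fit reported here is a short nonlinear regression once the model q_t = k2*qe^2*t/(1 + k2*qe*t) is written down. The sketch below fits it to invented uptake data with scipy; the times, uptakes and starting guesses are assumptions, not the study's measurements.

        # Sketch of fitting the pseudo-second-order kinetic model to batch biosorption data.
        # Time/uptake values are illustrative, not the malachite green measurements.
        import numpy as np
        from scipy.optimize import curve_fit

        t = np.array([5, 10, 20, 40, 60, 90, 120, 180], dtype=float)    # contact time, min
        q = np.array([10.2, 16.8, 22.5, 27.1, 29.0, 30.6, 31.4, 32.0])  # uptake at time t, mg/g

        def pseudo_second_order(t, qe, k2):
            return k2 * qe**2 * t / (1.0 + k2 * qe * t)

        (qe, k2), _ = curve_fit(pseudo_second_order, t, q, p0=[30.0, 0.01])
        print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")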


  15. Constrained Response Surface Optimisation and Taguchi Methods for Precisely Atomising Spraying Process

    NASA Astrophysics Data System (ADS)

    Luangpaiboon, P.; Suwankham, Y.; Homrossukon, S.

    2010-10-01

    This research presents the development of a design-of-experiments technique for quality improvement in the automotive manufacturing industry. The quality of interest is the colour shade, one of the key exterior appearance features of the vehicles. With a low percentage of first-time quality, the manufacturer has spent heavily on rework as well as on longer production time. To permanently resolve this problem, the spraying conditions should be precisely optimized. Therefore, this work applies full factorial design, multiple regression, the constrained response surface optimization method (CRSOM), and Taguchi's method to investigate the significant factors and to determine the optimum factor levels in order to improve the quality of the paint shop. Firstly, a 2^k full factorial design was employed to study the effect of five factors: the paint flow rate at the robot setting, the paint levelling agent, the paint pigment, the additive slow solvent, and the non-volatile solid at spraying of the atomizing spraying machine. The response values of colour shade at 15 and 45 degrees were measured using a spectrophotometer. Regression models of colour shade at both angles were then developed from the significant factors affecting each response. Consequently, both regression models were placed into the form of a linear programme to maximize the colour shade subject to three main factors: the pigment, the additive solvent and the flow rate. Finally, Taguchi's method was applied to determine the proper levels of the key variable factors to achieve the mean value target of colour shade; the non-volatile solid was found to be one additional factor at this stage. Consequently, the proper levels of all factors from both experimental design methods were used to set up a confirmation experiment. It was found that the colour shades, measured at both the 15 and 45 degree angles of the spectrophotometer, were nearly close to the target and the defective at
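
    The constrained-response-surface step can be sketched as a small linear program: take the fitted regression models for the two colour-shade responses, maximize one, and keep the other inside an acceptable band while the coded factors stay within their ranges. The coefficients, band and bounds below are illustrative placeholders, not the plant's fitted models.

        # Sketch of the constrained optimization over first-order colour-shade models.
        # All coefficients, bands and bounds are invented for illustration.
        import numpy as np
        from scipy.optimize import linprog

        # shade15 = a15 + b15.x, shade45 = a45 + b45.x, x = (pigment, slow solvent, flow rate), coded
        a15, b15 = 24.0, np.array([1.8, -0.6, 1.1])
        a45, b45 = 20.0, np.array([1.2, 0.4, -0.9])
        lo45, hi45 = 19.0, 22.0                      # acceptable band for the 45-degree shade

        res = linprog(
            c=-b15,                                  # maximize shade15  ->  minimize -b15.x
            A_ub=np.vstack([b45, -b45]),             # keep shade45 inside [lo45, hi45]
            b_ub=[hi45 - a45, a45 - lo45],
            bounds=[(-1, 1)] * 3,                    # coded factor ranges
        )
        print("optimal coded settings:", np.round(res.x, 3))
        print("predicted 15-degree shade:", round(a15 - res.fun, 2))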

  16. Active flutter suppression - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1991-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind-tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in flutter dynamic pressure and flutter frequency in the mathematical model. The flutter suppression controller was also successfully operated in combination with a roll maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  17. Computational design and experimental validation of new thermal barrier systems

    SciTech Connect

    Guo, Shengmin

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on computational simulation method development. We have performed ab initio density functional theory (DFT) and molecular dynamics simulations to screen the top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  18. Optimization of low-frequency low-intensity ultrasound-mediated microvessel disruption on prostate cancer xenografts in nude mice using an orthogonal experimental design

    PubMed Central

    YANG, YU; BAI, WENKUN; CHEN, YINI; LIN, YANDUAN; HU, BING

    2015-01-01

    The present study aimed to provide a complete exploration of the effect of sound intensity, frequency, duty cycle, microbubble volume and irradiation time on low-frequency low-intensity ultrasound (US)-mediated microvessel disruption, and to identify an optimal combination of the five factors that maximize the blockage effect. An orthogonal experimental design approach was used. Enhanced US imaging and acoustic quantification were performed to assess tumor blood perfusion. In the confirmatory test, in addition to acoustic quantification, the specimens of the tumor were stained with hematoxylin and eosin and observed using light microscopy. The results revealed that sound intensity, frequency, duty cycle, microbubble volume and irradiation time had a significant effect on the average peak intensity (API). The extent of the impact of the variables on the API was in the following order: Sound intensity; frequency; duty cycle; microbubble volume; and irradiation time. The optimum conditions were found to be as follows: Sound intensity, 1.00 W/cm2; frequency, 20 Hz; duty cycle, 40%; microbubble volume, 0.20 ml; and irradiation time, 3 min. In the confirmatory test, the API was 19.97±2.66 immediately subsequent to treatment, and histological examination revealed signs of tumor blood vessel injury in the optimum parameter combination group. In conclusion, the Taguchi L18 (3^6) orthogonal array design was successfully applied for determining the optimal parameter combination of API following treatment. Under the optimum orthogonal design condition, a minimum API of 19.97±2.66 subsequent to low-frequency and low-intensity mediated blood perfusion blockage was obtained. PMID:26722279

  19. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect that is gaining importance is the discussion of the acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods like the genetic algorithm (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations have the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.

  20. Design and experimental results for the S805 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    An airfoil for horizontal-axis wind-turbine applications, the S805, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  1. Design and experimental results for the S809 airfoil

    SciTech Connect

    Somers, D M

    1997-01-01

    A 21-percent-thick, laminar-flow airfoil, the S809, for horizontal-axis wind-turbine applications, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  2. Design and Implementation of an Experimental Segway Model

    NASA Astrophysics Data System (ADS)

    Younis, Wael; Abdelati, Mohammed

    2009-03-01

    The segway is the first transportation product to stand, balance, and move in the same way we do. It is a truly 21st-century idea. The aim of this research is to study the theory behind building segway vehicles based on the stabilization of an inverted pendulum. An experimental model has been designed and implemented through this study. The model has been tested for its balance by running a Proportional Derivative (PD) algorithm on a microprocessor chip. The model has been identified in order to serve as an educational experimental platform for segways.

  3. New Design of Control and Experimental System of Windy Flap

    NASA Astrophysics Data System (ADS)

    Yu, Shanen; Wang, Jiajun; Chen, Zhangping; Sun, Weihua

    Experiments associated with control principles for automation majors are generally based on MATLAB simulation and are not well connected with physical control objects. The experimental system described here aims to meet teaching and study requirements and to provide an experimental platform for learning the principles of automatic control, MCUs, embedded systems, etc. The main research contents comprise the design of the angle-measurement module, the control and drive module, and the PC software. An MPU6050 sensor was used for angle measurement, a PID control algorithm was used to drive the flap to the target angle, and the PC software was used for display, analysis, and processing.

  4. Designing the Balloon Experimental Twin Telescope for Infrared Interferometry

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen

    2011-01-01

    While infrared astronomy has revolutionized our understanding of galaxies, stars, and planets, further progress on major questions is stymied by the inescapable fact that the spatial resolution of single-aperture telescopes degrades at long wavelengths. The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII) is an 8-meter boom interferometer to operate in the FIR (30-90 micron) on a high altitude balloon. The long baseline will provide unprecedented angular resolution (approx. 5") in this band. In order for BETTII to be successful, the gondola must be designed carefully to provide a high level of stability with optics designed to send a collimated beam into the cryogenic instrument. We present results from the first 5 months of design effort for BETTII. Over this short period of time, we have made significant progress and are on track to complete the design of BETTII during this year.

  5. Experimental Study on Abrasive Waterjet Polishing of Hydraulic Turbine Blades

    NASA Astrophysics Data System (ADS)

    Khakpour, H.; Birglenl, L.; Tahan, A.; Paquet, F.

    2014-03-01

    In this paper, an experimental investigation of the abrasive waterjet polishing technique is carried out to evaluate its capability in polishing the surfaces and edges of hydraulic turbine blades. For this, the properties of the method are studied and the main parameters affecting its performance are determined. An experimental test-rig is then designed, manufactured and tested for use in this study. This test-rig can be used to polish linear and planar areas on the surface of the desired workpieces. Considering the number of parameters and their levels, the Taguchi method is used to design the preliminary experiments, all of which are implemented according to a Taguchi L18 orthogonal array. The signal-to-noise ratios obtained from the results of these experiments are used to determine the influence of the controlled polishing parameters on the final quality of the polished surface. The evaluation of these ratios reveals that the nozzle angle and the nozzle diameter have the most important impact on the results. The outcomes of these experiments can be used as a basis for designing a more precise set of experiments in which the optimal value of each parameter can be estimated.
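
    To illustrate the kind of signal-to-noise analysis the record above describes, the sketch below computes Taguchi larger-the-better S/N ratios for a few runs and ranks two factors by the spread of their mean S/N across levels. The factor names echo the study, but the layout and all response values are invented for illustration.

    ```python
    import numpy as np
    import pandas as pd

    def sn_larger_is_better(y):
        """Taguchi larger-the-better signal-to-noise ratio, in dB."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y**2))

    # Hypothetical subset of an L18-style layout: factor levels per run
    # and replicated surface-quality responses (higher is better).
    runs = pd.DataFrame({
        "nozzle_angle":    [1, 1, 2, 2, 3, 3],
        "nozzle_diameter": [1, 2, 1, 2, 1, 2],
        "responses": [[72, 75], [80, 78], [90, 88], [85, 86], [70, 73], [74, 71]],
    })
    runs["sn_db"] = runs["responses"].apply(sn_larger_is_better)

    # Rank factors by the range (delta) of their mean S/N across levels:
    # the larger the delta, the stronger the factor's influence.
    for factor in ("nozzle_angle", "nozzle_diameter"):
        level_means = runs.groupby(factor)["sn_db"].mean()
        print(factor, "delta =", round(level_means.max() - level_means.min(), 2))
    ```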

  6. Optimization of microchannel heat sink using genetic algorithm and Taguchi method

    NASA Astrophysics Data System (ADS)

    Singh, Bhanu Pratap; Garg, Harry; Lall, Arun K.

    2016-04-01

    Active cooling using microchannels is a challenging area. The optimization and miniaturization of devices are increasing heat loads and affecting the operating performance of systems. Microchannel-based cooling systems are widely used and overcome most of the limitations of existing solutions. Microchannels help reduce dimensions and therefore find many important applications in the microfluidics domain. Microchannel performance is related to the geometry, material and flow conditions, so optimized selection of the controllable parameters is a key issue when designing a microchannel-based cooling system. The proposed work presents a simulation-based study following a Taguchi design of experiment, with Reynolds number, aspect ratio and plenum length as input parameters used to determine the signal-to-noise (SN) ratio; the objective of the study is to maximize heat transfer. Mathematical models based on these parameters were developed to enable global optimization using a genetic algorithm, which was then employed to optimize the input parameters and generate global solution points. The optimized values of the heat transfer coefficient and Nusselt number were 2620.888 W/m2K and 3.4708, compared with 2601.3687 W/m2K and 3.447 obtained through the SN-ratio-based parametric study, a difference of 0.744% and 0.68% respectively.
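
    The workflow summarized above (a Taguchi design, a fitted response model, then a global optimizer) can be sketched as follows. This is a minimal illustration only: the L9-style layout, the response values and the quadratic surrogate are assumptions, and SciPy's differential evolution is used as a stand-in for the genetic algorithm named in the record.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Hypothetical simulation results arranged in an L9-style layout:
    # columns are Reynolds number, channel aspect ratio and plenum length
    # (mm); h holds the simulated heat-transfer coefficients (W/m2K).
    X = np.array([[200, 1, 5.0], [200, 2, 7.5], [200, 3, 10.0],
                  [400, 1, 7.5], [400, 2, 10.0], [400, 3, 5.0],
                  [600, 1, 10.0], [600, 2, 5.0], [600, 3, 7.5]], dtype=float)
    h = np.array([1850., 2050., 2150., 2300., 2450., 2350., 2500., 2550., 2600.])

    # Fit a simple quadratic response surface h ~ c0 + sum(ci*xi) + sum(di*xi^2).
    A = np.hstack([np.ones((len(X), 1)), X, X**2])
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)

    def predicted_h(x):
        x = np.asarray(x, dtype=float)
        return coef[0] + coef[1:4] @ x + coef[4:7] @ x**2

    # Search the design space with an evolutionary optimizer (used here
    # as a stand-in for the paper's genetic algorithm).
    bounds = [(200, 600), (1, 3), (5, 10)]
    result = differential_evolution(lambda x: -predicted_h(x), bounds, seed=1)
    print("suggested parameters:", np.round(result.x, 2),
          "predicted h:", round(-result.fun, 1))
    ```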

  7. Optimization of catalyst formation conditions for synthesis of carbon nanotubes using Taguchi method

    NASA Astrophysics Data System (ADS)

    Pander, Adam; Hatta, Akimitsu; Furuta, Hiroshi

    2016-05-01

    The growth of carbon nanotubes (CNTs) suffers from difficulties in finding optimum growth parameters, reproducibility and mass production, related to the large number of parameters influencing the synthesis process. Choosing the proper parameters can be time consuming and still may not give the optimal growth values. One possible way to decrease the number of experiments is to apply optimization methods to the design of the experiment parameter matrix. In this work, the Taguchi method of designing experiments is applied to optimize the formation of the iron catalyst during the annealing process by analyzing the average roughness and size of the particles. The annealing parameters were annealing time (tAN), hydrogen flow rate (fH2), temperature (TAN) and argon flow rate (fAr). Plots of signal-to-noise ratios showed that temperature and annealing time have the highest impact on the final results of the experiment. For a more detailed study of the influence of the parameters, interaction plots of the tested parameters were analyzed. For the final evaluation, CNT forests were grown on silicon substrates with an AlOx/Fe catalyst by thermal chemical vapor deposition. Based on the obtained results, the average diameter of the CNTs was reduced by 67%, from 9.1 nm (multi-walled CNTs) to 3.0 nm (single-walled CNTs).

  8. Optimal active vibration absorber - Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1993-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  9. TIBER: Tokamak Ignition/Burn Experimental Research. Final design report

    SciTech Connect

    Henning, C.D.; Logan, B.G.; Barr, W.L.; Bulmer, R.H.; Doggett, J.N.; Johnson, B.M.; Lee, J.D.; Hoard, R.W.; Miller, J.R.; Slack, D.S.

    1985-11-01

    The Tokamak Ignition/Burn Experimental Research (TIBER) device is the smallest superconducting tokamak designed to date. In the design, plasma shaping is used to achieve a high plasma beta. Neutron shielding is minimized to achieve the desired small device size, but the superconducting magnets must be shielded sufficiently to reduce the neutron heat load and the gamma-ray dose to various components of the device. Specifications of the plasma-shaping coils, the shielding, the cooling requirements, and the heating modes are given. 61 refs., 92 figs., 30 tabs. (WRF)

  10. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  11. Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.

    1994-05-01

    The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), "the Parties." The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team with support from the Parties' four Home Teams. An overview of the ITER design activities is presented.

  12. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    PubMed Central

    Deane, Thomas; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non–expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes. We used BEDCI to diagnose non–expert-like student thinking in experimental design at the pre- and posttest stage in five courses (total n = 580 students) at a large research university in western Canada. Calculated difficulty and discrimination metrics indicated that BEDCI questions are able to effectively capture learning changes at the undergraduate level. A high correlation (r = 0.84) between responses by students in similar courses and at the same stage of their academic career also suggests that the test is reliable. Students showed significant positive learning changes by the posttest stage, but some non–expert-like responses were widespread and persistent. BEDCI is a reliable and valid diagnostic tool that can be used in a variety of life sciences disciplines. PMID:25185236

  13. Computational procedures for optimal experimental design in biological systems.

    PubMed

    Balsa-Canto, E; Alonso, A A; Banga, J R

    2008-07-01

    Mathematical models of complex biological systems, such as metabolic or cell-signalling pathways, usually consist of sets of nonlinear ordinary differential equations which depend on several non-measurable parameters that can be hopefully estimated by fitting the model to experimental data. However, the success of this fitting is largely conditioned by the quantity and quality of data. Optimal experimental design (OED) aims to design the scheme of actuations and measurements which will result in data sets with the maximum amount and/or quality of information for the subsequent model calibration. New methods and computational procedures for OED in the context of biological systems are presented. The OED problem is formulated as a general dynamic optimisation problem where the time-dependent stimuli profiles, the location of sampling times, the duration of the experiments and the initial conditions are regarded as design variables. Its solution is approached using the control vector parameterisation method. Since the resultant nonlinear optimisation problem is in most of the cases non-convex, the use of a robust global nonlinear programming solver is proposed. For the sake of comparing among different experimental schemes, a Monte-Carlo-based identifiability analysis is then suggested. The applicability and advantages of the proposed techniques are illustrated by considering an example related to a cell-signalling pathway. PMID:18681746

  14. Fatigue of NiTi SMA–pulley system using Taguchi and ANOVA

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Jaronie; Leary, Martin; Subic, Aleksandar

    2016-05-01

    Shape memory alloy (SMA) actuators can be integrated with a pulley system to provide mechanical advantage and to reduce packaging space; however, there appears to be no formal investigation of the effect of a pulley system on SMA structural or functional fatigue. In this work, cyclic testing was conducted on nickel–titanium (NiTi) SMA actuators on a pulley system and a control experiment (without pulley). Both structural and functional fatigue were monitored until fracture, or until a maximum of 1E5 cycles was achieved, for each experimental condition. The Taguchi method and analysis of variance (ANOVA) were used to optimise the SMA–pulley system configurations. In general, one-way ANOVA at the 95% confidence level showed no significant difference between the structural or functional fatigue of SMA–pulley actuators and SMA actuators without pulley. Within the sample of SMA–pulley actuators, the effect of activation duration had the greatest significance for both structural and functional fatigue, and the pulley configuration (angle of wrap and sheave diameter) had a greater statistical significance than load magnitude for functional fatigue. This work identified that structural and functional fatigue performance of SMA–pulley systems is optimised by maximising sheave diameter and using an intermediate wrap-angle, with minimal load and activation duration. However, these parameters may not be compatible with commercial imperatives. A test was completed for a commercially optimal SMA–pulley configuration. This novel observation will be applicable to many areas of SMA–pulley system applications development.
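
    A minimal example of the one-way ANOVA comparison reported above, using SciPy on invented cycles-to-failure data (the values are not from the paper):

    ```python
    from scipy.stats import f_oneway

    # Hypothetical cycles-to-failure for SMA actuators with and without a
    # pulley (values are illustrative only, not from the study).
    with_pulley    = [61_000, 58_500, 64_200, 59_800, 62_300]
    without_pulley = [60_200, 63_100, 57_900, 61_700, 60_900]

    f_stat, p_value = f_oneway(with_pulley, without_pulley)
    alpha = 0.05  # 95% confidence level
    print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
    print("significant difference" if p_value < alpha else "no significant difference")
    ```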

  15. Optimization of the ASPN Process to Bright Nitriding of Woodworking Tools Using the Taguchi Approach

    NASA Astrophysics Data System (ADS)

    Walkowicz, J.; Staśkiewicz, J.; Szafirowicz, K.; Jakrzewski, D.; Grzesiak, G.; Stępniak, M.

    2013-02-01

    The subject of this research is the optimization of the parameters of the Active Screen Plasma Nitriding (ASPN) process for high speed steel planing knives used in woodworking. The Taguchi approach was applied to develop the plan of experiments and to analyze the experimental results obtained. The optimized ASPN parameters were: process duration, composition and pressure of the gaseous atmosphere, the substrate BIAS voltage and the substrate temperature. The results of the optimization procedure were verified by the tools' behavior in the sharpening operation performed in normal industrial conditions. The ASPN technology proved to be extremely suitable for nitriding the woodworking planing tools, which because of their specific geometry, in particular extremely sharp wedge angles, could not be successfully nitrided using the conventional direct-current plasma nitriding method. The research carried out showed that the values of the fracture toughness coefficient K Ic correlate with the maximum spalling depths of the cutting edge measured after sharpening, and may therefore be used as a measure of the quality of the nitrided planing knives. Based on this criterion the optimum parameters of the ASPN process for nitriding high speed planing knives were determined.

  16. Plasma arc cutting optimization parameters for aluminum alloy with two thickness by using Taguchi method

    NASA Astrophysics Data System (ADS)

    Abdulnasser, B.; Bhuvenesh, R.

    2016-07-01

    Manufacturing companies judge the quality of a thermal removal process based on the dimensions and physical appearance of the cut surface. In this study, the surface roughness of the cut area and the material removal rate during manual plasma arc cutting were the main considerations. A plasma arc cutter, model PS-100, was used to cut specimens made from aluminium alloy 1100 manually, based on the selected parameter settings. Two specimen thicknesses, 3 mm and 6 mm, were used. The material removal rate (MRR) was measured by determining the difference between the weight of the specimens before and after the cutting process. The surface roughness (Ra) was measured using a MITUTOYO CS-3100 machine, and the average roughness value was determined from the analysis. The Taguchi method was utilized as the experimental layout to obtain the MRR and Ra values. The results indicate that the current and the cutting speed are the most significant parameters, followed by the arc gap, for both material removal rate and surface roughness.

  17. Quiet Clean Short-Haul Experimental Engine (QCSEE). Preliminary analyses and design report, volume 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental and flight propulsion systems are presented. The following areas are discussed: engine core and low pressure turbine design; bearings and seals design; controls and accessories design; nacelle aerodynamic design; nacelle mechanical design; weight; and aircraft systems design.

  18. Validation of erythromycin microbiological assay using an alternative experimental design.

    PubMed

    Lourenço, Felipe Rebello; Kaneko, Telma Mary; Pinto, Terezinha de Jesus Andreoli

    2007-01-01

    The agar diffusion method, widely used in antibiotic dosage, relates the diameter of the inhibition zone to the dose of the substance assayed. An experimental plan is proposed that may provide better results and an indication of the assay validity. The symmetric or balanced assays (2 x 2) as well as those with interpolation in standard curve (5 x 1) are the main designs used in the dosage of antibiotics. This study proposes an alternative experimental design for erythromycin microbiological assay with the evaluation of the validation parameters of the method referring to linearity, precision, and accuracy. The design proposed (3 x 1) uses 3 doses of standard and 1 dose of sample applied in a unique plate, aggregating the characteristics of the 2 x 2 and 5 x 1 assays. The method was validated for erythromycin microbiological assay through agar diffusion, revealing its adequacy to linearity, precision, and accuracy standards. Likewise, the statistical methods used demonstrated their accordance with the method concerning the parameters evaluated. The 3 x 1 design proved to be adequate for the dosage of erythromycin and thus a good alternative for erythromycin assay. PMID:17760348

  19. A Hierarchical Adaptive Approach to Optimal Experimental Design

    PubMed Central

    Kim, Woojae; Pitt, Mark A.; Lu, Zhong-Lin; Steyvers, Mark; Myung, Jay I.

    2014-01-01

    Experimentation is at the core of research in the behavioral and neural sciences, yet observations can be expensive and time-consuming to acquire (e.g., MRI scans, responses from infant participants). A major interest of researchers is designing experiments that lead to maximal accumulation of information about the phenomenon under study with the fewest possible number of observations. In addressing this challenge, statisticians have developed adaptive design optimization methods. This letter introduces a hierarchical Bayes extension of adaptive design optimization that provides a judicious way to exploit two complementary schemes of inference (with past and future data) to achieve even greater accuracy and efficiency in information gain. We demonstrate the method in a simulation experiment in the field of visual perception. PMID:25149697

  20. Design and experimental validation of looped-tube thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Abduljalil, Abdulrahman S.; Yu, Zhibin; Jaworski, Artur J.

    2011-10-01

    The aim of this paper is to present the design and experimental validation process for a thermoacoustic looped-tube engine. The design procedure consists of numerical modelling of the system using DELTA EC tool, Design Environment for Low-amplitude ThermoAcoustic Energy Conversion, in particular the effects of mean pressure and regenerator configuration on the pressure amplitude and acoustic power generated. This is followed by the construction of a practical engine system equipped with a ceramic regenerator — a substrate used in automotive catalytic converters with fine square channels. The preliminary testing results are obtained and compared with the simulations in detail. The measurement results agree very well on the qualitative level and are reasonably close in the quantitative sense.

  1. The Concept of Fashion Design on the Basis of Color Coordination Using White LED Lighting

    NASA Astrophysics Data System (ADS)

    Mizutani, Yumiko; Taguchi, Tsunemasa

    This thesis focuses on the development of fashion design, especially a dress coordinated with white LED lighting (LED). The design concept aimed at a fusion of advanced science and local culture, which makes this development a very experimental one. In particular, it concerns an Imperial Court dinner dress for the Japanese First Lady, Mrs. Akie Abe, who wore it at the Imperial Court dinner for the Indonesian First Couple held in November 2006. This dress, made by Prof. T. Taguchi and the author, opens up a new field in dress design.

  2. Preliminary structural design of a lunar transfer vehicle aerobrake. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bush, Lance B.

    1992-01-01

    An aerobrake concept for a Lunar transfer vehicle was weight optimized through the use of the Taguchi design method, structural finite element analyses and structural sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration. The design parameters included honeycomb core thickness, diameter to depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The minimum weight aerobrake configuration resulting from the study was approx. half the weight of the average of all twenty seven experimental configurations. The parameters having the most significant impact on the aerobrake structural weight were identified.

  3. New charging strategy for lithium-ion batteries based on the integration of Taguchi method and state of charge estimation

    NASA Astrophysics Data System (ADS)

    Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay

    2015-01-01

    In this paper, a new charging strategy for lithium-polymer batteries (LiPBs) is proposed based on the integration of the Taguchi method (TM) and state of charge (SOC) estimation. The TM is applied to search for an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC, which controls and terminates the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same type of LiPBs with different capacities and cycle lives. The proposed charging strategy also provides a much shorter charging time, narrower temperature variation and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.

  4. Computational design and experimental verification of a symmetric protein homodimer.

    PubMed

    Mou, Yun; Huang, Po-Ssu; Hsu, Fang-Ciao; Huang, Shing-Jong; Mayo, Stephen L

    2015-08-25

    Homodimers are the most common type of protein assembly in nature and have distinct features compared with heterodimers and higher order oligomers. Understanding homodimer interactions at the atomic level is critical both for elucidating their biological mechanisms of action and for accurate modeling of complexes of unknown structure. Computation-based design of novel protein-protein interfaces can serve as a bottom-up method to further our understanding of protein interactions. Previous studies have demonstrated that the de novo design of homodimers can be achieved to atomic-level accuracy by β-strand assembly or through metal-mediated interactions. Here, we report the design and experimental characterization of an α-helix-mediated homodimer with C2 symmetry based on a monomeric Drosophila engrailed homeodomain scaffold. A solution NMR structure shows that the homodimer exhibits parallel helical packing similar to the design model. Because the mutations leading to dimer formation resulted in poor thermostability of the system, design success was facilitated by the introduction of independent thermostabilizing mutations into the scaffold. This two-step design approach, function and stabilization, is likely to be generally applicable, especially if the desired scaffold is of low thermostability. PMID:26269568

  5. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
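
    The selection rule described above, namely picking the candidate experiment whose predicted outcome distribution has the largest Shannon entropy, can be illustrated with a brute-force sketch (the nested entropy sampling algorithm itself is not reproduced here); all distributions below are hypothetical.

    ```python
    import numpy as np

    def shannon_entropy(p):
        """Entropy (nats) of a discrete distribution, ignoring zero bins."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    # Hypothetical setting: 4 candidate experiments, and for each one the
    # distribution over 3 discrete outcomes predicted by an ensemble of
    # probable models (rows sum to 1).
    predicted_outcomes = np.array([
        [0.90, 0.05, 0.05],   # nearly certain outcome -> uninformative
        [0.50, 0.30, 0.20],
        [0.34, 0.33, 0.33],   # maximally uncertain -> most informative
        [0.70, 0.20, 0.10],
    ])
    entropies = [shannon_entropy(row) for row in predicted_outcomes]
    best = int(np.argmax(entropies))
    print("entropies:", np.round(entropies, 3), "-> run experiment", best)
    ```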

  6. Amplified energy harvester from footsteps: design, modeling, and experimental analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ya; Chen, Wusi; Guzman, Plinio; Zuo, Lei

    2014-04-01

    This paper presents the design, modeling and experimental analysis of an amplified footstep energy harvester. With the unique design of amplified piezoelectric stack harvester the kinetic energy generated by footsteps can be effectively captured and converted into usable DC power that could potentially be used to power many electric devices, such as smart phones, sensors, monitoring cameras, etc. This doormat-like energy harvester can be used in crowded places such as train stations, malls, concerts, airport escalator/elevator/stairs entrances, or anywhere large group of people walk. The harvested energy provides an alternative renewable green power to replace power requirement from grids, which run on highly polluting and global-warming-inducing fossil fuels. In this paper, two modeling approaches are compared to calculate power output. The first method is derived from the single degree of freedom (SDOF) constitutive equations, and then a correction factor is applied onto the resulting electromechanically coupled equations of motion. The second approach is to derive the coupled equations of motion with Hamilton's principle and the constitutive equations, and then formulate it with the finite element method (FEM). Experimental testing results are presented to validate modeling approaches. Simulation results from both approaches agree very well with experimental results where percentage errors are 2.09% for FEM and 4.31% for SDOF.

  7. Optimizing conditions for production of high levels of soluble recombinant human growth hormone using Taguchi method.

    PubMed

    Savari, Marzieh; Zarkesh Esfahani, Sayyed Hamid; Edalati, Masoud; Biria, Davoud

    2015-10-01

    Human growth hormone (hGH) is synthesized and stored by somatotroph cells of the anterior pituitary gland and affects body metabolism. This protein can be used to treat hGH deficiency, Prader-Willi syndrome and Turner syndrome. The limitations of current technology for soluble recombinant protein production, such as inclusion body formation, restrict its use for therapeutic purposes. To achieve high levels of the soluble form of recombinant human growth hormone (rhGH), we examined the host strain, induction temperature, induction time and culture media composition. For this purpose, 32 experiments were designed using the Taguchi method; the levels of produced protein in all 32 experiments were evaluated primarily by ELISA and dot blotting, and the purified rhGH products were finally assessed by SDS-PAGE and Western blotting. Our results indicate that media, bacterial strain, temperature and induction time have significant effects on the production of rhGH. A low cultivation temperature of 25°C, TB media (with 3% ethanol and 0.6 M glycerol), the Origami strain and a 10-h induction time increased the solubility of the human growth hormone. PMID:26151869

  8. Mahalanobis-Taguchi System to Identify Preindicators of Delirium in the ICU.

    PubMed

    Buenviaje, Bernardo; Bischoff, John E; Roncace, Robert A; Willy, Christopher J

    2016-07-01

    This paper was designed to determine if the Mahalanobis-Taguchi System (MTS) applied to the delirium-evidence-based bundle could detect medical patterns in retrospective datasets. The methodology defined the evidence-based bundle as a multidimensional system that conformed to a parameter diagram. The Mahalanobis distance (MD) was calculated for the retrospective healthy observations and the retrospective unhealthy observations. Signal-to-noise ratios were calculated to determine the relative strength of detection of 23 delirium preindicators. The study found that the CAM-ICU assessment, the standard for delirium assessment, shows sufficient variation that it would benefit from knowledge of how different the MD is from the healthy average. The sensitivity of the detection system was 0.89 with a 95% confidence interval between 0.84 and 0.92. The specificity of the detection system was 0.93 with a 95% confidence interval between 0.90 and 0.95. The MTS applied to the delirium-evidence-based bundle could detect medical patterns in retrospective datasets. The implication of this paper for biomedical research is an automated decision support tool for the delirium-evidence-based bundle, providing an early detection capability that is needed today. PMID:26011872
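
    The core quantity in the Mahalanobis-Taguchi System is the Mahalanobis distance of a new observation from a healthy reference group. The sketch below shows that computation on synthetic data; the number of pre-indicators, the values and the flagging threshold are all assumptions, not figures from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical healthy reference group: rows are patients, columns are
    # scaled delirium pre-indicators (e.g. sedation level, sleep score, ...).
    healthy = rng.normal(size=(200, 5))
    mean = healthy.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

    def mahalanobis_distance(x):
        d = x - mean
        return float(np.sqrt(d @ cov_inv @ d))

    # Observations far from the healthy average get a large MD and would be
    # flagged for closer review; the threshold here is illustrative.
    new_patient = np.array([2.5, -1.8, 3.0, 0.4, -2.2])
    md = mahalanobis_distance(new_patient)
    print(f"MD = {md:.2f}", "-> flag" if md > 3.0 else "-> within healthy range")
    ```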

  9. Laccase production by Coriolopsis caperata RCK2011: optimization under solid state fermentation by Taguchi DOE methodology.

    PubMed

    Nandal, Preeti; Ravella, Sreenivas Rao; Kuhad, Ramesh Chander

    2013-01-01

    Laccase production by Coriolopsis caperata RCK2011 under solid state fermentation was optimized following a Taguchi design of experiment. An orthogonal array layout of L18 (2^1 × 3^7) was constructed using Qualitek-4 software with the eight factors most influential on laccase production. At the individual level, pH contributed the highest influence, whereas corn steep liquor (CSL) accounted for more than 50% of the severity index with biotin and KH2PO4 at the interactive level. The optimum conditions derived were: temperature 30°C, pH 5.0, wheat bran 5.0 g, inoculum size 0.5 ml (fungal cell mass = 0.015 g dry wt.), biotin 0.5% w/v, KH2PO4 0.013% w/v, CSL 0.1% v/v and 0.5 mM xylidine as an inducer. The validation experiments using the optimized conditions confirmed an improvement in enzyme production of 58.01%. The laccase production level of 1623.55 U gds^-1 indicates that the fungus C. caperata RCK2011 has commercial potential for laccase. PMID:23463372

  10. Laccase production by Coriolopsis caperata RCK2011: Optimization under solid state fermentation by Taguchi DOE methodology

    PubMed Central

    Nandal, Preeti; Ravella, Sreenivas Rao; Kuhad, Ramesh Chander

    2013-01-01

    Laccase production by Coriolopsis caperata RCK2011 under solid state fermentation was optimized following a Taguchi design of experiment. An orthogonal array layout of L18 (2^1 × 3^7) was constructed using Qualitek-4 software with the eight factors most influential on laccase production. At the individual level, pH contributed the highest influence, whereas corn steep liquor (CSL) accounted for more than 50% of the severity index with biotin and KH2PO4 at the interactive level. The optimum conditions derived were: temperature 30°C, pH 5.0, wheat bran 5.0 g, inoculum size 0.5 ml (fungal cell mass = 0.015 g dry wt.), biotin 0.5% w/v, KH2PO4 0.013% w/v, CSL 0.1% v/v and 0.5 mM xylidine as an inducer. The validation experiments using the optimized conditions confirmed an improvement in enzyme production of 58.01%. The laccase production level of 1623.55 U gds^-1 indicates that the fungus C. caperata RCK2011 has commercial potential for laccase. PMID:23463372
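
    The percent contributions reported in the two records above are conventionally obtained from an ANOVA-style decomposition of the orthogonal-array results. The sketch below shows that calculation on invented data for two factors; it ignores interaction terms, which the severity index reported by Qualitek-4 additionally accounts for.

    ```python
    import numpy as np
    import pandas as pd

    # Hypothetical responses from a small orthogonal layout: laccase
    # activity for runs at three levels of pH and three levels of CSL.
    runs = pd.DataFrame({
        "pH":       [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "CSL":      [1, 2, 3, 1, 2, 3, 1, 2, 3],
        "activity": [820., 900., 870., 1250., 1400., 1350., 980., 1050., 1010.],
    })
    grand_mean = runs["activity"].mean()
    ss_total = ((runs["activity"] - grand_mean) ** 2).sum()

    # Percent contribution of each factor: between-level sum of squares
    # divided by the total sum of squares (interactions are ignored here).
    for factor in ("pH", "CSL"):
        level_stats = runs.groupby(factor)["activity"].agg(["mean", "count"])
        ss_factor = (level_stats["count"] * (level_stats["mean"] - grand_mean) ** 2).sum()
        print(f"{factor}: {100 * ss_factor / ss_total:.1f}% contribution")
    ```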

  11. Parameters optimization of laser brazing in crimping butt using Taguchi and BPNN-GA

    NASA Astrophysics Data System (ADS)

    Rong, Youmin; Zhang, Zhen; Zhang, Guojun; Yue, Chen; Gu, Yafei; Huang, Yu; Wang, Chunming; Shao, Xinyu

    2015-04-01

    Laser brazing (LB) is widely used in the automotive industry due to the advantages of high speed, a small heat-affected zone, high weld seam quality, and low heat input. Welding parameters play a significant role in determining the bead geometry and hence the quality of the weld joint. This paper addresses the optimization of the seam shape in the LB process for a crimping butt joint of 0.8 mm thickness using a back propagation neural network (BPNN) and a genetic algorithm (GA). A 3-factor, 5-level welding experiment is conducted with a Taguchi L25 orthogonal array following the statistical design method. The input parameters considered are welding speed, wire feed rate, and gap, each at 5 levels. The output results are the efficient connection lengths of the left and right sides and the top width (WT) and bottom width (WB) of the weld bead. The experimental results are fed into the BPNN to establish the relationship between the input and output variables. The predictions of the BPNN are then passed to the GA, which optimizes the process parameters subject to the objectives. The effects of welding speed (WS), wire feed rate (WF), and gap (GAP) on the summed bead-geometry values are discussed. Finally, confirmation experiments are carried out to demonstrate that the optimal values are effective and reliable. On the whole, the proposed hybrid method, BPNN-GA, can be used to guide actual work and improve the efficiency and stability of the LB process.
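
    The BPNN-GA pipeline summarized above can be sketched as a small neural surrogate trained on designed-experiment data, followed by a simple genetic search over the surrogate. Everything below is illustrative: the parameter ranges, the synthetic bead-geometry score and the GA settings are assumptions, not the paper's values.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Hypothetical L25-style training data: (welding speed, wire feed rate,
    # gap) and a single combined bead-geometry score to be maximized.
    X = rng.uniform([20., 2., 0.0], [60., 6., 0.5], size=(25, 3))
    y = 10 - 0.01*(X[:, 0] - 45)**2 - 0.5*(X[:, 1] - 4)**2 - 8*(X[:, 2] - 0.2)**2

    # The "BPNN": a small feed-forward network used as the surrogate model.
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

    # A minimal genetic algorithm over the process window.
    lo, hi = np.array([20., 2., 0.0]), np.array([60., 6., 0.5])
    pop = rng.uniform(lo, hi, size=(40, 3))
    for _ in range(60):
        fitness = net.predict(pop)
        parents = pop[np.argsort(fitness)[-20:]]             # selection
        children = (parents[rng.integers(0, 20, 40)]
                    + parents[rng.integers(0, 20, 40)]) / 2  # crossover
        children += rng.normal(scale=0.05 * (hi - lo), size=children.shape)  # mutation
        pop = np.clip(children, lo, hi)
    best = pop[np.argmax(net.predict(pop))]
    print("suggested parameters (speed, feed, gap):", np.round(best, 3))
    ```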

  12. Taguchi methods applied to oxygen-enriched diesel engine experiments

    SciTech Connect

    Marr, W.W.; Sekar, R.R.; Cole, R.L.; Marciniak, T.J. ); Longman, D.E. )

    1992-01-01

    This paper describes a test series conducted on a six-cylinder diesel engine to study the impacts of controlled factors (i.e., oxygen content of the combustion air, water content of the fuel, fuel rate, and fuel-injection timing) on engine emissions using Taguchi methods. Three levels of each factor were used in the tests. Only the main effects of the factors were examined; no attempt was made to analyze the interactions among the factors. It was found that, as in the case of the single-cylinder engine tests, oxygen in the combustion air was very effective in reducing particulate and smoke emissions. Increases in NOx due to the oxygen enrichment observed in the single-cylinder tests also occurred in the present six-cylinder tests. Water in the emulsified fuel was found to be much less effective in decreasing NOx emissions for the six-cylinder engine than it was for the single-cylinder engine.

  13. Taguchi methods applied to oxygen-enriched diesel engine experiments

    SciTech Connect

    Marr, W.W.; Sekar, R.R.; Cole, R.L.; Marciniak, T.J.; Longman, D.E.

    1992-12-01

    This paper describes a test series conducted on a six-cylinder diesel engine to study the impacts of controlled factors (i.e., oxygen content of the combustion air, water content of the fuel, fuel rate, and fuel-injection timing) on engine emissions using Taguchi methods. Three levels of each factor were used in the tests. Only the main effects of the factors were examined; no attempt was made to analyze the interactions among the factors. It was found that, as in the case of the single-cylinder engine tests, oxygen in the combustion air was very effective in reducing particulate and smoke emissions. Increases in NOx due to the oxygen enrichment observed in the single-cylinder tests also occurred in the present six-cylinder tests. Water in the emulsified fuel was found to be much less effective in decreasing NOx emissions for the six-cylinder engine than it was for the single-cylinder engine.

  14. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2014-04-01

    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  15. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2012-10-01

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  16. Design and experimental results for the S814 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    A 24-percent-thick airfoil, the S814, for the root region of a horizontal-axis wind-turbine blade has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of high maximum lift, insensitive to roughness, and low profile drag have been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results show good agreement with the exception of maximum lift which is overpredicted. Comparisons with other airfoils illustrate the higher maximum lift and the lower profile drag of the S814 airfoil, thus confirming the achievement of the objectives.

  17. Acting like a physicist: Student approach study to experimental design

    NASA Astrophysics Data System (ADS)

    Karelina, Anna; Etkina, Eugenia

    2007-12-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In these labs [called Investigative Science Learning Environment (ISLE) labs], students design their own experiments. Our previous studies have shown that students in these labs acquire scientific abilities such as the ability to design an experiment to solve a problem, the ability to collect and analyze data, the ability to evaluate assumptions and uncertainties, and the ability to communicate. These studies mostly concentrated on analyzing students’ writing, evaluated by specially designed scientific ability rubrics. Recently, we started to study whether the ISLE labs make students not only write like scientists but also engage in discussions and act like scientists while doing the labs. For example, do students plan an experiment, validate assumptions, evaluate results, and revise the experiment if necessary? A brief report of some of our findings that came from monitoring students’ activity during ISLE and nondesign labs was presented in the Physics Education Research Conference Proceedings. We found differences in student behavior and discussions that indicated that ISLE labs do in fact encourage a scientistlike approach to experimental design and promote high-quality discussions. This paper presents a full description of the study.

  18. Design considerations for ITER (International Thermonuclear Experimental Reactor) magnet systems

    SciTech Connect

    Henning, C.D.; Miller, J.R.

    1988-10-09

    The International Thermonuclear Experimental Reactor (ITER) is now completing a definition phase at the beginning of a three-year design effort. Preliminary parameters for the superconducting magnet system have been established to guide further and more detailed design work. Radiation tolerance of the superconductors and insulators has been of prime importance, since it sets requirements for the neutron-shield dimension and sensitively influences reactor size. The major levels of mechanical stress in the structure appear in the cases of the inboard legs of the toroidal-field (TF) coils. The cases of the poloidal-field (PF) coils must be made thin or segmented to minimize eddy current heating during inductive plasma operation. As a result, the winding packs of both the TF and PF coils include significant fractions of steel. The TF winding pack provides support against in-plane separating loads but offers little support against out-of-plane loads, unless shear-bonding of the conductors can be maintained. The removal of heat due to nuclear and ac loads has not been a fundamental limit to design, but certainly has non-negligible economic consequences. We present here preliminary ITER magnetic systems design parameters taken from trade studies, designs, and analyses performed by the Home Teams of the four ITER participants, by the ITER Magnet Design Unit in Garching, and by other participants at workshops organized by the Magnet Design Unit. The work presented here reflects the efforts of many, but the responsibility for the opinions expressed is the authors'. 4 refs., 3 figs., 4 tabs.

  19. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2015-11-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles trying to clarify the mode of action of medicinal waters have been published up to now. Almost all studies apply the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm would be highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A successful design for experimental mechanistic studies is proposed.

  20. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles trying to clarify the mode of action of medicinal waters have been published up to now. Almost all studies apply the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm would be highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A successful design for experimental mechanistic studies is proposed.

  1. Determining the extent of coarticulation: effects of experimental design.

    PubMed

    Gelfer, C E; Bell-Berti, F; Harris, K S

    1989-12-01

    The purpose of this letter is to explore some reasons for what appear to be conflicting reports regarding the nature and extent of anticipatory coarticulation, in general, and anticipatory lip rounding, in particular. Analyses of labial electromyographic and kinematic data using a minimal-pair paradigm allowed for the differentiation of consonantal and vocalic effects, supporting a frame versus a feature-spreading model of coarticulation. It is believed that the apparent conflicts of previous studies of anticipatory coarticulation might be resolved if experimental design made more use of contrastive minimal pairs and relied less on assumptions about feature specifications of phones. PMID:2600314

  2. On the proper study design applicable to experimental balneology.

    PubMed

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles trying to clarify the mode of action of medicinal waters have been published up to now. Almost all studies apply the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm would be highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A successful design for experimental mechanistic studies is proposed. PMID:26597677

  3. An application of the Taguchi method to the development of a supplementary power source for the hybrid bicycle

    SciTech Connect

    Yamamoto, Hiroshi; Katsuoka, Tatsuzo; Igarashi, Nihaku; Koyama, Hiroyuki

    1995-12-31

    Yamaha Motor has developed and marketed a hybrid bicycle with an electric supplemental power source which generates assist power in proportion to the rider's pedal torque. The key requirement for this assist power control system is that the variation of the assist ratio should be as small as possible over a wide range of riding conditions. The assist power control system consists of mechanical and electrical components and requires very tight quality control of each component if the design is to be robust to disturbances such as pedal torque or vehicle speed. The authors applied the Taguchi method to this development and succeeded in selecting the optimum combination of component levels in the system.

  4. Improved production of tannase by Klebsiella pneumoniae using Indian gooseberry leaves under submerged fermentation using Taguchi approach.

    PubMed

    Kumar, Mukesh; Singh, Amrinder; Beniwal, Vikas; Salar, Raj Kumar

    2016-12-01

    Tannase (tannin acyl hydrolase E.C 3.1.1.20) is an inducible, largely extracellular enzyme that causes the hydrolysis of ester and depside bonds present in various substrates. Large scale industrial application of this enzyme is very limited owing to its high production costs. In the present study, cost-effective production of tannase by Klebsiella pneumoniae KP715242 was studied under submerged fermentation using different tannin-rich agro-residues like Indian gooseberry leaves (Phyllanthus emblica), Black plum leaves (Syzygium cumini), Eucalyptus leaves (Eucalyptus glogus) and Babul leaves (Acacia nilotica). Among all agro-residues, Indian gooseberry leaves were found to be the best substrate for tannase production under submerged fermentation. A sequential optimization approach using Taguchi orthogonal array screening and response surface methodology was adopted to optimize the fermentation variables in order to enhance the enzyme production. Eleven medium components were screened primarily by a Taguchi orthogonal array design to identify the factors contributing most to enzyme production. The four most significant contributing variables affecting tannase production were found to be pH (23.62 %), tannin extract (20.70 %), temperature (20.33 %) and incubation time (14.99 %). These factors were further optimized with a central composite design using response surface methodology. Maximum tannase production was observed at 5.52 pH, 39.72 °C temperature, 91.82 h of incubation time and 2.17 % tannin content. The enzyme activity was enhanced 1.26-fold under these optimized conditions. The present study emphasizes the use of agro-residues as a potential substrate with the aim of lowering the input costs of tannase production so that the enzyme can be used efficiently for commercial purposes. PMID:27411334

  5. Design preferences and cognitive styles: experimentation by automated website synthesis

    PubMed Central

    2012-01-01

    Background: This article aims to demonstrate computational synthesis of Web-based experiments in undertaking experimentation on relationships among the participants' design preference, rationale, and cognitive test performance. The exemplified experiments were computationally synthesised, including the websites as materials, experiment protocols as methods, and cognitive tests as protocol modules. This work also exemplifies the use of a website synthesiser as an essential instrument enabling the participants to explore different possible designs, which were generated on the fly, before selection of preferred designs. Methods: The participants were given interactive tree and table generators so that they could explore some different ways of presenting causality information in tables and trees as the visualisation formats. The participants gave their preference ratings for the available designs, as well as their rationale (criteria) for their design decisions. The participants were also asked to take four cognitive tests, which focus on the aspects of visualisation and analogy-making. The relationships among preference ratings, rationale, and the results of cognitive tests were analysed by conservative non-parametric statistics including the Wilcoxon test, Kruskal-Wallis test, and Kendall correlation. Results: In the test, 41 of the total 64 participants preferred graphical (tree-form) to tabular presentation. Despite the popular preference for graphical presentation, the given tabular presentation was generally rated to be easier than graphical presentation to interpret, especially by those who scored lower in the visualisation and analogy-making tests. Conclusions: This piece of evidence helps generate a hypothesis that design preferences are related to specific cognitive abilities. Without the use of computational synthesis, the experiment setup and scientific results would be impractical to obtain. PMID:22748000
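
    The non-parametric tests named above are available directly in SciPy; the sketch below applies them to invented preference ratings and test scores purely to show the calls involved.

    ```python
    from scipy import stats

    # Hypothetical data: preference ratings for tree vs. table designs from
    # the same participants, plus their visualisation-test scores.
    tree_ratings  = [5, 4, 4, 3, 5, 4, 2, 5, 4, 3]
    table_ratings = [3, 4, 2, 4, 3, 3, 4, 2, 3, 4]
    vis_scores    = [12, 15, 9, 14, 11, 10, 16, 8, 13, 12]

    # Paired comparison of the two designs (Wilcoxon signed-rank test).
    print(stats.wilcoxon(tree_ratings, table_ratings))

    # Difference across more than two independent groups (Kruskal-Wallis).
    print(stats.kruskal(tree_ratings[:5], tree_ratings[5:], table_ratings[:5]))

    # Monotonic association between preference and test score (Kendall tau).
    print(stats.kendalltau(tree_ratings, vis_scores))
    ```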

  6. Skin Microbiome Surveys Are Strongly Influenced by Experimental Design.

    PubMed

    Meisel, Jacquelyn S; Hannigan, Geoffrey D; Tyldsley, Amanda S; SanMiguel, Adam J; Hodkinson, Brendan P; Zheng, Qi; Grice, Elizabeth A

    2016-05-01

    Culture-independent studies to characterize skin microbiota are increasingly common, due in part to affordable and accessible sequencing and analysis platforms. Compared to culture-based techniques, DNA sequencing of the bacterial 16S ribosomal RNA (rRNA) gene or whole metagenome shotgun (WMS) sequencing provides more precise microbial community characterizations. Most widely used protocols were developed to characterize microbiota of other habitats (i.e., gastrointestinal) and have not been systematically compared for their utility in skin microbiome surveys. Here we establish a resource for the cutaneous research community to guide experimental design in characterizing skin microbiota. We compare two widely sequenced regions of the 16S rRNA gene to WMS sequencing for recapitulating skin microbiome community composition, diversity, and genetic functional enrichment. We show that WMS sequencing most accurately recapitulates microbial communities, but sequencing of hypervariable regions 1-3 of the 16S rRNA gene provides highly similar results. Sequencing of hypervariable region 4 poorly captures skin commensal microbiota, especially Propionibacterium. WMS sequencing, which is resource and cost intensive, provides evidence of a community's functional potential; however, metagenome predictions based on 16S rRNA sequence tags closely approximate WMS genetic functional profiles. This study highlights the importance of experimental design for downstream results in skin microbiome surveys. PMID:26829039

  7. Prediction uncertainty and optimal experimental design for learning dynamical systems

    NASA Astrophysics Data System (ADS)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
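
    The prediction-deviation idea described above can be illustrated with a small optimization sketch: two parameter sets are sought that both fit the observed data to within a tolerance while disagreeing as much as possible at an extrapolation point. The model, data, tolerance, and penalty formulation below are illustrative assumptions, not the authors' implementation.

      # Illustrative sketch (not the authors' algorithm) of "prediction deviation":
      # find two parameter sets that both fit the data acceptably yet disagree
      # maximally at a prediction point. Model, data, and tolerance are hypothetical.
      import numpy as np
      from scipy.optimize import minimize

      t_obs = np.linspace(0, 5, 20)
      y_obs = 2.0 * np.exp(-0.8 * t_obs) + np.random.default_rng(1).normal(0, 0.05, 20)
      t_pred = 10.0                      # extrapolation point of interest

      def model(theta, t):
          a, k = theta
          return a * np.exp(-k * t)

      def sse(theta):
          return np.sum((model(theta, t_obs) - y_obs) ** 2)

      best = minimize(sse, x0=[1.0, 1.0])
      tol = 1.2 * best.fun               # "good fit" = within 20% of the best SSE

      def neg_deviation(packed):
          th1, th2 = packed[:2], packed[2:]
          # penalize parameter sets whose fit exceeds the tolerance
          penalty = 1e3 * (max(0.0, sse(th1) - tol) + max(0.0, sse(th2) - tol))
          return -abs(model(th1, t_pred) - model(th2, t_pred)) + penalty

      res = minimize(neg_deviation, x0=np.tile(best.x, 2), method="Nelder-Mead")
      print("approximate prediction deviation at t=10:", round(float(-res.fun), 3))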

  8. Experimental Vertical Stability Studies for ITER Performance and Design Guidance

    SciTech Connect

    Humphreys, D A; Casper, T A; Eidietis, N; Ferrera, M; Gates, D A; Hutchinson, I H; Jackson, G L; Kolemen, E; Leuer, J A; Lister, J; LoDestro, L L; Meyer, W H; Pearlstein, L D; Sartori, F; Walker, M L; Welander, A S; Wolfe, S M

    2008-10-13

    Operating experimental devices have provided key inputs to the design process for ITER axisymmetric control. In particular, experiments have quantified controllability and robustness requirements in the presence of realistic noise and disturbance environments, which are difficult or impossible to characterize with modeling and simulation alone. This kind of information is particularly critical for ITER vertical control, which poses some of the highest demands on poloidal field system performance, since the consequences of loss of vertical control can be very severe. The present work describes results of multi-machine studies performed under a joint ITPA experiment on fundamental vertical control performance and controllability limits. We present experimental results from Alcator C-Mod, DIII-D, NSTX, TCV, and JET, along with analysis of these data to provide vertical control performance guidance to ITER. Useful metrics to quantify this control performance include the stability margin and maximum controllable vertical displacement. Theoretical analysis of the maximum controllable vertical displacement suggests effective approaches to improving performance in terms of this metric, with implications for ITER design modifications. Typical levels of noise in the vertical position measurement which can challenge the vertical control loop are assessed and analyzed.

  9. Cutting the Wires: Modularization of Cellular Networks for Experimental Design

    PubMed Central

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-01

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264

  10. Reduction of animal use: experimental design and quality of experiments.

    PubMed

    Festing, M F

    1994-07-01

    Poorly designed and analysed experiments can lead to a waste of scientific resources, and may even reach the wrong conclusions. Surveys of published papers by a number of authors have shown that many experiments are poorly analysed statistically, and one survey suggested that about a third of experiments may be unnecessarily large. Few toxicologists attempted to control variability using blocking or covariance analysis. In this study experimental design and statistical methods in 3 papers published in toxicological journals were used as case studies and were examined in detail. The first used dogs to study the effects of ethanol on blood and hepatic parameters following chronic alcohol consumption in a 2 x 4 factorial experimental design. However, the authors used mongrel dogs of both sexes and different ages with a wide range of body weights without any attempt to control the variation. They had also attempted to analyse a factorial design using Student's t-test rather than the analysis of variance. Means of 2 blood parameters presented with one decimal place had apparently been rounded to the nearest 5 units. It is suggested that this experiment could equally well have been done in 3 blocks using 24 instead of 46 dogs. The second case study was an investigation of the response of 2 strains of mice to a toxic agent causing bladder injury. The first experiment involved 40 treatment combinations (2 strains x 4 doses x 5 days) with 3-6 mice per combination. There was no explanation of how the experiment involving approximately 180 mice had actually been done, but unequal subclass numbers suggest that the experiment may have been done on an ad hoc basis rather than being properly designed. It is suggested that the experiment could have been done as 2 blocks involving 80 instead of about 180 mice. The third study again involved a factorial design with 4 dose levels of a compound and 2 sexes, with a total of 80 mice. Open field behaviour was examined. The author

  11. Experimental Design for the INL Sample Collection Operational Test

    SciTech Connect

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated.” The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide the desired confidence.
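
    A common rule of thumb for sizing probabilistic samples behind clearance statements of the form quoted above is that n clean samples support X% confidence that at least Y% of the area is uncontaminated when Y^n <= 1 - X. The short sketch below illustrates that calculation; it is offered as general background, not as the exact method used in the INL test design.

      # Rule-of-thumb sample size for a clearance statement of the form
      # "X% confidence that at least Y% of the area is not contaminated",
      # assuming all n probabilistic samples come back clean. Illustrative only.
      import math

      def clearance_sample_size(confidence: float, clean_fraction: float) -> int:
          """Smallest n with clean_fraction**n <= 1 - confidence."""
          return math.ceil(math.log(1.0 - confidence) / math.log(clean_fraction))

      print(clearance_sample_size(0.95, 0.99))   # about 299 samples for 95%/99%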

  12. Experimental study of elementary collection efficiency of aerosols by spray: Design of the experimental device

    SciTech Connect

    Ducret, D.; Vendel, J.; Garrec, S.L.

    1995-02-01

    The safety of a nuclear power plant containment building, in which pressure and temperature could increase because of an overheating reactor accident, can be achieved by spraying water drops. The spray reduces the pressure and the temperature levels by condensation of steam on cold water drops. The most stringent thermodynamic conditions are a pressure of 5 × 10^5 Pa (due to steam emission) and a temperature of 413 K. Moreover, in addition to its energy dissipation function, the spray leads to the washout of fission product particles emitted into the reactor building atmosphere. The present study is part of a large program devoted to the evaluation of realistic washout rates. The aim of this work is to develop experiments in order to determine the collection efficiency of aerosols by a single drop. To do this, the experimental device has to be designed against fundamental criteria: thermodynamic conditions have to be representative of the post-accident atmosphere; thermodynamic equilibrium has to be attained between the water drops and the gaseous phase; thermophoretic, diffusiophoretic and mechanical effects have to be studied independently; and operating conditions have to be homogeneous and constant during each experiment. This paper presents the design of the experimental device. In practice, the consequences of each of these criteria for the design, together with the necessity of being representative of real conditions, will be described.

  13. Heat treatment optimization of alumina/aluminum metal matrix composites using the Taguchi approach

    SciTech Connect

    Saigal, A.; Leisk, G. )

    1992-03-01

    The paper describes the use of the Taguchi approach for optimizing the heat treatment process of alumina-reinforced Al-6061 metal-matrix composites (MMCs). It is shown that the use of the Taguchi method makes it possible to test a great number of factors simultaneously and to provide a statistical data base that can be used for sensitivity and optimization studies. The results of plotting S/N values versus vol pct, solutionizing time, aging time, and aging temperature showed that the solutionizing time and the aging temperature significantly affect both the yield and the ultimate tensile strength of alumina/Al MMCs. 11 refs.
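
    For readers unfamiliar with the S/N values referred to above, the sketch below computes the "larger-the-better" Taguchi signal-to-noise ratio from replicate strength measurements and averages it by factor level to obtain a main effect; the strength values and the factor-level assignment are hypothetical, not the paper's data.

      # Minimal sketch of the "larger-the-better" Taguchi signal-to-noise ratio
      # used when maximizing a response such as tensile strength.
      import numpy as np

      def sn_larger_the_better(y):
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(1.0 / y**2))

      # one row of replicate strengths per experimental trial (hypothetical)
      trials = np.array([[310., 305., 318.],
                         [295., 300., 292.],
                         [330., 327., 335.],
                         [315., 312., 320.]])
      sn = np.array([sn_larger_the_better(row) for row in trials])

      # main effect of a two-level factor: average S/N at each level
      aging_temp_level = np.array([1, 1, 2, 2])     # level assigned to each trial
      for lvl in (1, 2):
          print(f"aging temperature level {lvl}: mean S/N = {sn[aging_temp_level == lvl].mean():.2f}")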

  14. Bearing diagnosis based on Mahalanobis-Taguchi-Gram-Schmidt method

    NASA Astrophysics Data System (ADS)

    Shakya, Piyush; Kulkarni, Makarand S.; Darpe, Ashish K.

    2015-02-01

    A methodology is developed for defect type identification in rolling element bearings using the integrated Mahalanobis-Taguchi-Gram-Schmidt (MTGS) method. Vibration data recorded from bearings with seeded defects on outer race, inner race and balls are processed in time, frequency, and time-frequency domains. Eleven damage identification parameters (RMS, Peak, Crest Factor, and Kurtosis in time domain, amplitude of outer race, inner race, and ball defect frequencies in FFT spectrum and HFRT spectrum in frequency domain and peak of HHT spectrum in time-frequency domain) are computed. Using MTGS, these damage identification parameters (DIPs) are fused into a single DIP, Mahalanobis distance (MD), and gain values for the presence of all DIPs are calculated. The gain value is used to identify the usefulness of DIP and the DIPs with positive gain are again fused into MD by using Gram-Schmidt Orthogonalization process (GSP) in order to calculate Gram-Schmidt Vectors (GSVs). Among the remaining DIPs, sign of GSVs of frequency domain DIPs is checked to classify the probable defect. The approach uses MTGS method for combining the damage parameters and in conjunction with the GSV classifies the defect. A Defect Occurrence Index (DOI) is proposed to rank the probability of existence of a type of bearing damage (ball defect/inner race defect/outer race defect/other anomalies). The methodology is successfully validated on vibration data from a different machine, bearing type and shape/configuration of the defect. The proposed methodology is also applied on the vibration data acquired from the accelerated life test on the bearings, which established the applicability of the method on naturally induced and naturally progressed defect. It is observed that the methodology successfully identifies the correct type of bearing defect. The proposed methodology is also useful in identifying the time of initiation of a defect and has potential for implementation in a real time environment.
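
    The core of the MTGS approach is referencing damage identification parameters against a healthy baseline through the Mahalanobis distance. The sketch below shows only that step, with hypothetical feature matrices; the Gram-Schmidt fusion, gain calculation, and defect classification of the full method are not reproduced.

      # Sketch of the Mahalanobis-distance step underlying the MTGS approach:
      # damage identification parameters (DIPs) from test signals are referenced
      # against a "healthy" baseline group. Feature matrices are hypothetical.
      import numpy as np

      rng = np.random.default_rng(42)
      healthy = rng.normal(0.0, 1.0, size=(50, 4))   # 50 baseline signals x 4 DIPs
      test    = rng.normal(1.5, 1.0, size=(5, 4))    # 5 signals from a suspect bearing

      mu = healthy.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

      def mahalanobis(x):
          d = x - mu
          return float(np.sqrt(d @ cov_inv @ d))

      md = np.array([mahalanobis(x) for x in test])
      print("Mahalanobis distances of test signals:", np.round(md, 2))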

  15. A More Rigorous Quasi-Experimental Alternative to the One-Group Pretest-Posttest Design.

    ERIC Educational Resources Information Center

    Johnson, Craig W.

    1986-01-01

    A simple quasi-experimental design is described which may have utility in a variety of applied and laboratory research settings where ordinarily the one-group pretest-posttest pre-experimental design might otherwise be the procedure of choice. The design approaches the internal validity of true experimental designs while optimizing external…

  16. Comparing simulated emission from molecular clouds using experimental design

    SciTech Connect

    Yeremi, Miayan; Flynn, Mallory; Loeppky, Jason; Rosolowsky, Erik; Offner, Stella

    2014-03-10

    We propose a new approach to comparing simulated observations that enables us to determine the significance of the underlying physical effects. We utilize the methodology of experimental design, a subfield of statistical analysis, to establish a framework for comparing simulated position-position-velocity data cubes to each other. We propose three similarity metrics based on methods described in the literature: principal component analysis, the spectral correlation function, and the Cramer multi-variate two-sample similarity statistic. Using these metrics, we intercompare a suite of mock observational data of molecular clouds generated from magnetohydrodynamic simulations with varying physical conditions. Using this framework, we show that all three metrics are sensitive to changing Mach number and temperature in the simulation sets, but cannot detect changes in magnetic field strength and initial velocity spectrum. We highlight the shortcomings of one-factor-at-a-time designs commonly used in astrophysics and propose fractional factorial designs as a means to rigorously examine the effects of changing physical properties while minimizing the investment of computational resources.
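
    As a concrete illustration of the fractional factorial designs advocated above, the sketch below builds a 2^(4-1) half fraction in which a fourth factor is aliased with the three-way interaction, so four simulation parameters can be screened in eight runs instead of sixteen. The factor labels follow the abstract, but the design is generic rather than the authors' actual run matrix.

      # A 2^(4-1) half-fraction design: the fourth factor is set by the defining
      # relation I = ABCD (D = ABC), giving 8 runs for 4 two-level factors.
      from itertools import product

      factors = ["Mach", "Temperature", "B-field", "VelocitySpectrum"]
      runs = []
      for a, b, c in product((-1, 1), repeat=3):
          d = a * b * c                 # alias D with the ABC interaction
          runs.append((a, b, c, d))

      print("factors:", factors)
      for i, r in enumerate(runs, 1):
          print(f"run {i}: " + "  ".join(f"{x:+d}" for x in r))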

  17. Design and experimental results of coaxial circuits for gyroklystron amplifiers

    SciTech Connect

    Flaherty, M.K.E.; Lawson, W.; Cheng, J.; Calame, J.P.; Hogan, B.; Latham, P.E.; Granatstein, V.L.

    1994-12-31

    At the University of Maryland high power microwave source development for use in linear accelerator applications continues with the design and testing of coaxial circuits for gyroklystron amplifiers. This presentation will include experimental results from a coaxial gyroklystron that was tested on the current microwave test bed, and designs for second harmonic coaxial circuits for use in the next generation of the gyroklystron program. The authors present test results for a second harmonic coaxial circuit. Similar to previous second harmonic experiments the input cavity resonated at 9.886 GHz and the output frequency was 19.772 GHz. The coaxial insert was positioned in the input cavity and drift region. The inner conductor consisted of a tungsten rod with copper and ceramic cylinders covering its length. Two tungsten rods that bridged the space between the inner and outer conductors supported the whole assembly. The tube produced over 20 MW of output power with 17% efficiency. Beam interception by the tungsten rods resulted in minor damage. Comparisons with previous non-coaxial circuits showed that the coaxial configuration increased the parameter space over which stable operation was possible. Future experiments will feature an upgraded modulator and beam formation system capable of producing 300 MW of beam power. The fundamental frequency of operation is 8.568 GHz. A second harmonic coaxial gyroklystron circuit was designed for use in the new system. A scattering matrix code predicts a resonant frequency of 17.136 GHz and Q of 260 for the cavity with 95% of the outgoing microwaves in the desired TE032 mode. Efficiency studies of this second harmonic output cavity show 20% expected efficiency. Shorter second harmonic output cavity designs are also being investigated with expected efficiencies near 34%.

  18. Focusing Kinoform Lenses: Optical Design and Experimental Validation

    SciTech Connect

    Alianelli, Lucia; Sawhney, Kawal J. S.; Snigireva, Irina; Snigirev, Anatoly

    2010-06-23

    X-ray focusing lenses with a kinoform profile are high brilliance optics that can produce nano-sized beams on 3rd generation synchrotron beamlines. The lenses are fabricated with sidewalls of micrometer lateral size. They are virtually non-absorbing and therefore can deliver a high flux over a good aperture. We are developing silicon and germanium lenses that will focus hard x-ray beams to less than 0.5 µm size using a single refractive element. In this contribution, we present the preliminary optical design and experimental tests carried out on ID06 at the ESRF: the lenses were used to image directly the undulator source, providing a beam with FWHM of about 0.7 µm.

  19. A rationally designed CD4 analogue inhibits experimental allergic encephalomyelitis

    NASA Astrophysics Data System (ADS)

    Jameson, Bradford A.; McDonnell, James M.; Marini, Joseph C.; Korngold, Robert

    1994-04-01

    EXPERIMENTAL allergic encephalomyelitis (EAE) is an acute inflammatory autoimmune disease of the central nervous system that can be elicited in rodents and is the major animal model for the study of multiple sclerosis (MS)1,2. The pathogenesis of both EAE and MS directly involves the CD4+ helper T-cell subset3-5. Anti-CD4 monoclonal antibodies inhibit the development of EAE in rodents6-9, and are currently being used in human clinical trials for MS. We report here that similar therapeutic effects can be achieved in mice using a small (rationally designed) synthetic analogue of the CD4 protein surface. It greatly inhibits both clinical incidence and severity of EAE with a single injection, but does so without depletion of the CD4+ subset and without the inherent immunogenicity of an antibody. Furthermore, this analogue is capable of exerting its effects on disease even after the onset of symptoms.

  20. Simulation-based optimal Bayesian experimental design for nonlinear systems

    SciTech Connect

    Huan, Xun; Marzouk, Youssef M.

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics.
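
    The expected information gain that drives this kind of Bayesian design can be estimated with a nested (two-stage) Monte Carlo scheme, as sketched below for a toy nonlinear model. The model, prior, noise level, and sample sizes are illustrative assumptions; the polynomial chaos and stochastic approximation machinery of the paper is not reproduced.

      # Nested Monte Carlo estimate of expected information gain (EIG) for a toy
      # nonlinear model y = tanh(theta * d) + noise, with a standard normal prior.
      import numpy as np
      from scipy.special import logsumexp

      rng = np.random.default_rng(0)
      sigma = 0.2                                   # assumed observation noise std

      def simulate(theta, d):
          return np.tanh(theta * d)                 # toy forward model

      def log_lik(y, theta, d):
          r = y - simulate(theta, d)
          return -0.5 * (r / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

      def expected_information_gain(d, n_outer=400, n_inner=400):
          theta_out = rng.normal(0.0, 1.0, n_outer)               # prior draws
          y = simulate(theta_out, d) + rng.normal(0.0, sigma, n_outer)
          theta_in = rng.normal(0.0, 1.0, n_inner)
          # inner Monte Carlo estimate of the log marginal likelihood of each y_i
          log_marg = np.array([logsumexp(log_lik(yi, theta_in, d)) - np.log(n_inner)
                               for yi in y])
          return float(np.mean(log_lik(y, theta_out, d) - log_marg))

      for d in (0.5, 1.0, 2.0):
          print(f"design d={d}: estimated EIG = {expected_information_gain(d):.3f}")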

  1. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new Thermal barrier coating (TBC) systems experimentally under Integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  2. Experimental design considerations in microbiota/inflammation studies.

    PubMed

    Moore, Robert J; Stanley, Dragana

    2016-07-01

    There is now convincing evidence that many inflammatory diseases are precipitated, or at least exacerbated, by unfavourable interactions of the host with the resident microbiota. The role of gut microbiota in the genesis and progression of diseases such as inflammatory bowel disease, obesity, metabolic syndrome and diabetes have been studied both in human and in animal, mainly rodent, models of disease. The intrinsic variation in microbiota composition, both within one host over time and within a group of similarly treated hosts, presents particular challenges in experimental design. This review highlights factors that need to be taken into consideration when designing animal trials to investigate the gastrointestinal tract microbiota in the context of inflammation studies. These include the origin and history of the animals, the husbandry of the animals before and during experiments, details of sampling, sample processing, sequence data acquisition and bioinformatic analysis. Because of the intrinsic variability in microbiota composition, it is likely that the number of animals required to allow meaningful statistical comparisons across groups will be higher than researchers have generally used for purely immune-based analyses. PMID:27525065

  3. Experimental investigation of design parameters on dry powder inhaler performance.

    PubMed

    Ngoc, Nguyen Thi Quynh; Chang, Lusi; Jia, Xinli; Lau, Raymond

    2013-11-30

    The study aims to investigate the impact of various design parameters of a dry powder inhaler on the turbulence intensities generated and the performance of the dry powder inhaler. The flow fields and turbulence intensities in the dry powder inhaler are measured using particle image velocimetry (PIV) techniques. In vitro aerosolization and deposition of a blend of budesonide and lactose are measured using an Andersen Cascade Impactor. Design parameters such as inhaler grid hole diameter, grid voidage and chamber length are considered. The experimental results reveal that the hole diameter on the grid has negligible impact on the turbulence intensity generated in the chamber. On the other hand, hole diameters smaller than a critical size can lead to performance degradation due to excessive particle-grid collisions. An increase in grid voidage can improve the inhaler performance but the effect diminishes at high grid voidage. An increase in the chamber length can enhance the turbulence intensity generated but also increases the powder adhesion on the inhaler wall. PMID:24055597

  4. Tabletop Games: Platforms, Experimental Games and Design Recommendations

    NASA Astrophysics Data System (ADS)

    Haller, Michael; Forlines, Clifton; Koeffel, Christina; Leitner, Jakob; Shen, Chia

    While the last decade has seen massive improvements in not only the rendering quality, but also the overall performance of console and desktop video games, these improvements have not necessarily led to a greater population of video game players. In addition to continuing these improvements, the video game industry is also constantly searching for new ways to convert non-players into dedicated gamers. Despite the growing popularity of computer-based video games, people still love to play traditional board games, such as Risk, Monopoly, and Trivial Pursuit. Both video and board games have their strengths and weaknesses, and an intriguing conclusion is to merge both worlds. We believe that a tabletop form-factor provides an ideal interface for digital board games. The design and implementation of tabletop games will be influenced by the hardware platforms, form factors, sensing technologies, as well as input techniques and devices that are available and chosen. This chapter is divided into three major sections. In the first section, we describe the most recent tabletop hardware technologies that have been used by tabletop researchers and practitioners. In the second section, we discuss a set of experimental tabletop games. The third section presents ten evaluation heuristics for tabletop game design.

  5. Experimental Charging Behavior of Orion UltraFlex Array Designs

    NASA Technical Reports Server (NTRS)

    Golofaro, Joel T.; Vayner, Boris V.; Hillard, Grover B.

    2010-01-01

    The present ground-based investigations give the first definitive look at the charging behavior of Orion UltraFlex arrays in both the Low Earth Orbital (LEO) and geosynchronous (GEO) environments. Note the LEO charging environment also applies to the International Space Station (ISS). The GEO charging environment includes the bounding case for all lunar mission environments. The UltraFlex photovoltaic array technology is targeted to become the sole power system for life support and on-orbit power for the manned Orion Crew Exploration Vehicle (CEV). The purpose of the experimental tests is to gain an understanding of the complex charging behavior and to answer some of the basic performance and survivability questions, so as to ascertain whether a single UltraFlex array design will be able to cope with the projected worst-case LEO and GEO charging environments. Stage 1 LEO plasma testing revealed that all four arrays successfully passed arc threshold bias tests down to -240 V. Stage 2 GEO electron gun charging tests revealed that only the front side area of indium tin oxide coated array designs successfully passed the arc frequency tests.

  6. Experimental design considerations in microbiota/inflammation studies

    PubMed Central

    Moore, Robert J; Stanley, Dragana

    2016-01-01

    There is now convincing evidence that many inflammatory diseases are precipitated, or at least exacerbated, by unfavourable interactions of the host with the resident microbiota. The role of gut microbiota in the genesis and progression of diseases such as inflammatory bowel disease, obesity, metabolic syndrome and diabetes have been studied both in human and in animal, mainly rodent, models of disease. The intrinsic variation in microbiota composition, both within one host over time and within a group of similarly treated hosts, presents particular challenges in experimental design. This review highlights factors that need to be taken into consideration when designing animal trials to investigate the gastrointestinal tract microbiota in the context of inflammation studies. These include the origin and history of the animals, the husbandry of the animals before and during experiments, details of sampling, sample processing, sequence data acquisition and bioinformatic analysis. Because of the intrinsic variability in microbiota composition, it is likely that the number of animals required to allow meaningful statistical comparisons across groups will be higher than researchers have generally used for purely immune-based analyses. PMID:27525065

  7. Estimating Intervention Effects across Different Types of Single-Subject Experimental Designs: Empirical Illustration

    ERIC Educational Resources Information Center

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim

    2015-01-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…

  8. Quiet Clean Short-Haul Experimental Engine (QSCEE). Preliminary analyses and design report, volume 1

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental propulsion systems to be built and tested in the 'quiet, clean, short-haul experimental engine' program are presented. The flight propulsion systems are also presented. The following areas are discussed: acoustic design; emissions control; engine cycle and performance; fan aerodynamic design; variable-pitch actuation systems; fan rotor mechanical design; fan frame mechanical design; and reduction gear design.

  9. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

    An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers and focused inside the thruster. The adopted physical model considers two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several thrusters with different throat sizes operated at 150 and 300 kPa chamber pressure were evaluated in the numerical experiment. It was found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse of around 1500 sec. The heat loading on the wall of the calculated thrusters was also estimated, and it is comparable to the heat loading on a conventional chemical rocket. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 sec due to the finite chemical reaction rate.

  10. Investigation and Parameter Optimization of a Hydraulic Ram Pump Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Sarma, Dhrupad; Das, Monotosh; Brahma, Bipul; Pandwar, Deepak; Rongphar, Sermirlong; Rahman, Mafidur

    2016-06-01

    The main objective of this research work is to investigate the effect of the Waste Valve height and the Pressure Chamber height on the output flow rate of a hydraulic ram pump. The second objective is to optimize these parameters for a hydraulic ram pump delivering water up to a height of 3.81 m (12.5 feet) from the ground with a drive head (inlet head) of 1.86 m (6.11 feet). Two one-factor-at-a-time experiments were conducted to decide the levels of the selected input parameters. After deciding the input parameters, an experiment was designed using Taguchi's L9 Orthogonal Array with three repetitions. Analysis of variance (ANOVA) was carried out to verify the significance of the effect of the factors on the output flow rate of the pump. Results show that the height of the Waste Valve and the height of the Pressure Chamber have a significant effect on the outlet flow of the pump. For a pump with a drive pipe (inlet pipe) diameter of 31.75 mm (1.25 in.) and a delivery pipe diameter of 12.7 mm (0.5 in.), the optimum setting was found to be a Waste Valve height of 114.3 mm (4.5 in.) and a Pressure Chamber height of 406.4 mm (16 in.), providing a delivery flow rate of 93.14 l/h. For the same pump, the estimated range of output flow rate is 90.65-94.97 l/h.
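
    For reference, the sketch below lists the standard L9(3^4) orthogonal array and shows how level means of the response give the main effects examined in this kind of study; only the first two columns correspond to the factors named above, the remaining column labels are placeholders, and the flow-rate values are hypothetical.

      # Standard Taguchi L9(3^4) orthogonal array and main-effect (level-mean)
      # analysis of a response; flow-rate values are hypothetical.
      import numpy as np

      L9 = np.array([[1,1,1,1],[1,2,2,2],[1,3,3,3],
                     [2,1,2,3],[2,2,3,1],[2,3,1,2],
                     [3,1,3,2],[3,2,1,3],[3,3,2,1]])
      factors = ["WasteValveHeight", "PressureChamberHeight", "FactorC", "FactorD"]

      # hypothetical mean delivery flow rates (l/h) for the nine trials
      flow = np.array([78., 84., 80., 88., 93., 86., 82., 90., 85.])

      for j, name in enumerate(factors):
          means = [flow[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)]
          print(f"{name}: level means = {np.round(means, 1)}")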

  11. Design review of the Brazilian Experimental Solar Telescope

    NASA Astrophysics Data System (ADS)

    Dal Lago, A.; Vieira, L. E. A.; Albuquerque, B.; Castilho, B.; Guarnieri, F. L.; Cardoso, F. R.; Guerrero, G.; Rodríguez, J. M.; Santos, J.; Costa, J. E. R.; Palacios, J.; da Silva, L.; Alves, L. R.; Costa, L. L.; Sampaio, M.; Dias Silveira, M. V.; Domingues, M. O.; Rockenbach, M.; Aquino, M. C. O.; Soares, M. C. R.; Barbosa, M. J.; Mendes, O., Jr.; Jauer, P. R.; Branco, R.; Dallaqua, R.; Stekel, T. R. C.; Pinto, T. S. N.; Menconi, V. E.; Souza, V. M. C. E. S.; Gonzalez, W.; Rigozo, N.

    2015-12-01

    Brazil's National Institute for Space Research (INPE), in collaboration with the Engineering School of Lorena/University of São Paulo (EEL/USP), the Federal University of Minas Gerais (UFMG), and the Brazilian National Laboratory for Astrophysics (LNA), is developing a solar vector magnetograph and visible-light imager to study solar processes through observations of the solar surface magnetic field. The Brazilian Experimental Solar Telescope is designed to obtain full-disk magnetic field and line-of-sight velocity observations in the photosphere. Here we discuss the system requirements and the first design review of the instrument. The instrument is composed of a Ritchey-Chrétien telescope with a 500 mm aperture and 4000 mm focal length. LCD polarization modulators will be employed for the polarization analysis and a tunable Fabry-Perot filter for the wavelength scanning near the Fe II 630.25 nm line. Two large field-of-view, high-resolution 5.5 megapixel sCMOS cameras will be employed as sensors. Additionally, we describe the project management and system engineering approaches employed in this project. As the magnetic field anchored at the solar surface produces most of the structures and energetic events in the upper solar atmosphere and significantly influences the heliosphere, the development of this instrument plays an important role in advancing scientific knowledge in this field. In particular, the Brazilian Space Weather program will benefit most from the development of this technology. We expect that this project will be the starting point for establishing a strong research program on solar physics in Brazil. Our main aim is to progressively acquire the know-how to build state-of-the-art solar vector magnetographs and visible-light imagers for space-based platforms.

  12. Sparsely sampling the sky: a Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Jaffe, A. H.

    2013-08-01

    The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian experimental design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.

  13. Plackett-Burman experimental design to facilitate syntactic foam development

    DOE PAGESBeta

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.
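
    For run sizes that are powers of two, a Plackett–Burman screening design can be obtained from a Sylvester–Hadamard matrix, as in the sketch below for the eight-run case; the factor names, including the dummy column mentioned above, are placeholders rather than the study's actual assignments.

      # Eight-run, two-level screening design of the Plackett-Burman type built
      # from a Sylvester-Hadamard matrix; factor names are placeholders.
      import numpy as np

      H2 = np.array([[1, 1], [1, -1]])
      H8 = np.kron(np.kron(H2, H2), H2)      # 8x8 Hadamard matrix
      design = H8[:, 1:]                     # drop the all-ones column -> 7 factor columns

      factors = ["mix_speed", "mix_time", "resin_ratio", "sphere_density",
                 "sphere_loading", "sphere_blend", "dummy"]
      print("factors:", factors)
      for i, row in enumerate(design, 1):
          print(f"run {i}: " + "  ".join(f"{int(x):+d}" for x in row))

      # main effect of a factor = mean(response at +1) - mean(response at -1)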

  14. Plackett-Burman experimental design to facilitate syntactic foam development

    SciTech Connect

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.

  15. Experimental Design on Laminated Veneer Lumber Fiber Composite: Surface Enhancement

    NASA Astrophysics Data System (ADS)

    Meekum, U.; Mingmongkol, Y.

    2010-06-01

    Thick laminated veneer lumber (LVL) fibre-reinforced composites were constructed from alternated, perpendicularly arrayed layers of peeled rubber wood. Woven glass fabric was laid in between the layers. Native golden teak veneers were used as faces. An in-house epoxy formulation was employed as the wood adhesive. The hand lay-up laminate was cured at 150 °C for 45 min. The cut specimens were post-cured at 80 °C for at least 5 hours. A 2^k factorial design of experiments (DOE) was used to examine the parameters. Three parameters, namely the silane content in the epoxy formulation (A), smoke treatment of the rubber wood surface (B) and anti-termite application on the wood surface (C), were analysed. Both the low and high levels were further subcategorised into 2 sub-levels. Flexural properties were the main response obtained. ANOVA analysis with a Pareto chart was carried out, and the main effect plots were also examined. The results showed that the interaction between silane quantity and termite treatment has a negative effect at the high level (AC+). Conversely, the interaction between silane and smoke treatment has a significant positive effect at the high level (AB+). According to this work, the optimal settings to improve surface adhesion, and hence enhance the flexural properties, were a high level of silane (15% by weight), a high level of smoked wood layers (8 out of 14 layers), and low anti-termite treatment of the wood. Further tests also revealed that the LVL composite had properties superior to those of the solid woods, though it was slightly inferior in flexibility. The screw withdrawal strength of the LVL was higher than that of solid wood. The LVL was also more resistant to moisture and termite attack than the rubber wood.
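
    The effect estimates behind a Pareto chart and main effect plots of the kind described above can be computed directly from the design matrix, as the following sketch shows for a plain 2^3 factorial in factors A, B and C; the flexural-strength responses are hypothetical and the sub-levels are not modelled.

      # Main and two-factor interaction effects in a 2^3 factorial design with
      # factors A (silane), B (smoke treatment), C (anti-termite treatment).
      import numpy as np
      from itertools import product

      runs = np.array(list(product((-1, 1), repeat=3)))      # 8 runs, columns A, B, C
      y = np.array([61., 64., 59., 70., 58., 57., 63., 66.]) # hypothetical flexural strength

      labels = ["A", "B", "C", "AB", "AC", "BC"]
      columns = [runs[:, 0], runs[:, 1], runs[:, 2],
                 runs[:, 0] * runs[:, 1], runs[:, 0] * runs[:, 2], runs[:, 1] * runs[:, 2]]

      for name, col in zip(labels, columns):
          effect = y[col == 1].mean() - y[col == -1].mean()
          print(f"effect {name}: {effect:+.2f}")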

  16. Optimization of model parameters and experimental designs with the Optimal Experimental Design Toolbox (v1.0) exemplified by sedimentation in salt marshes

    NASA Astrophysics Data System (ADS)

    Reimer, J.; Schuerch, M.; Slawig, T.

    2015-03-01

    The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs especially because many data are collected. In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimize experimental designs called local optimal experimental designs is described together with a lesser known approach which takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox whose structure and application is described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two existing models for sediment concentration in seawater and sediment accretion on salt marshes of different complexity served as an application example. The advantages and disadvantages of these approaches were compared based on these models. Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements compared to unoptimized experimental designs. The chosen optimization approach played a minor role for the accuracy; therefore, the approach with the least computational effort is recommended.
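
    The weighted least squares estimator mentioned above has a closed form for models that are linear in their parameters; the short Python sketch below (the toolbox itself is MATLAB) illustrates it with hypothetical heteroscedastic data and is only a generic illustration, not the toolbox's implementation.

      # Generic weighted least squares for a model linear in its parameters:
      # theta_hat = (X^T W X)^{-1} X^T W y, with weights = inverse variances.
      import numpy as np

      rng = np.random.default_rng(3)
      t = np.linspace(0, 10, 25)
      sigma = 0.2 + 0.05 * t                      # heteroscedastic measurement errors
      y = 1.5 + 0.4 * t + rng.normal(0, sigma)

      X = np.column_stack([np.ones_like(t), t])   # design matrix for y = a + b*t
      W = np.diag(1.0 / sigma**2)

      theta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
      cov_theta = np.linalg.inv(X.T @ W @ X)      # asymptotic covariance of the estimate
      print("estimate:", np.round(theta_hat, 3))
      print("std errors:", np.round(np.sqrt(np.diag(cov_theta)), 3))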

  17. City Connects: Building an Argument for Effects on Student Achievement with a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Walsh, Mary; Raczek, Anastasia; Sibley, Erin; Lee-St. John, Terrence; An, Chen; Akbayin, Bercem; Dearing, Eric; Foley, Claire

    2015-01-01

    While randomized experimental designs are the gold standard in education research concerned with causal inference, non-experimental designs are ubiquitous. For researchers who work with non-experimental data and are no less concerned for causal inference, the major problem is potential omitted variable bias. In this presentation, the authors…

  18. Sequential experimental design approaches to helicopter rotor tuning

    NASA Astrophysics Data System (ADS)

    Wang, Shengda

    2005-07-01

    Two different approaches based on sequential experimental design concepts have been studied for helicopter rotor tuning, which is the process of adjusting the rotor blades so as to reduce the aircraft vibration and the spread of rotors. One uses an interval model adapted sequentially to improve the search for the blade adjustments. The other uses a probability model to search for the blade adjustments with the maximal probability of success. In the first approach, an interval model is used to represent the range of effect of blade adjustments on helicopter vibration, so as to cope with the nonlinear and stochastic nature of aircraft vibration. The coefficients of the model are initially defined according to sensitivity coefficients between the blade adjustments and helicopter vibration, to include the expert knowledge of the process. The model coefficients are subsequently transformed into intervals and updated after each tuning iteration to improve the model's estimation accuracy. The search for the blade adjustments is performed according to this model by considering the vibration estimates of all of the flight regimes so as to provide a comprehensive solution for rotor tuning. The second approach studied uses a probability model to maximize the likelihood of success of the selected blade adjustments. The underlying model in this approach consists of two segments: a deterministic segment to include a linear regression model representing the relationships between the blade adjustments and helicopter vibration, and a stochastic segment to comprise probability densities of the vibration components. The blade adjustments with the maximal probability of generating acceptable vibration are selected as recommended adjustments. The effectiveness of the proposed approaches is evaluated in simulation based on a series of neural networks trained with actual vibration data. To incorporate the stochastic behavior of the helicopter vibration and better simulate the tuning

  19. Artificial Warming of Arctic Meadow under Pollution Stress: Experimental design

    NASA Astrophysics Data System (ADS)

    Moni, Christophe; Silvennoinen, Hanna; Fjelldal, Erling; Brenden, Marius; Kimball, Bruce; Rasse, Daniel

    2014-05-01

    Boreal and arctic terrestrial ecosystems are central to the climate change debate, notably because future warming is expected to be disproportionate as compared to world averages. Likewise, greenhouse gas (GHG) release from terrestrial ecosystems exposed to climate warming is expected to be the largest in the arctic. Arctic agriculture, in the form of cultivated grasslands, is a unique and economically relevant feature of Northern Norway (e.g. Finnmark Province). In Eastern Finnmark, these agro-ecosystems are under the additional stressor of heavy metal and sulfur pollution generated by metal smelters of NW Russia. Warming and its interaction with heavy metal dynamics will influence meadow productivity, species composition and GHG emissions, as mediated by responses of soil microbial communities. Adaptation and mitigation measures will be needed. Biochar application, which immobilizes heavy metals, is a promising adaptation method to promote a positive growth response in arctic meadows exposed to a warming climate. In the MeadoWarm project we conduct an ecosystem warming experiment combined with biochar adaptation treatments in the heavy-metal polluted meadows of Eastern Finnmark. In summary, the general objective of this study is twofold: 1) to determine the response of arctic agricultural ecosystems under environmental stress to increased temperatures, both in terms of plant growth, soil organisms and GHG emissions, and 2) to determine if biochar application can serve as a positive adaptation (plant growth) and mitigation (GHG emission) strategy for these ecosystems under warming conditions. Here, we present the experimental site and the designed open-field warming facility. The selected site is an arctic meadow located at the Svanhovd Research Station less than 10 km west of the Russian mining city of Nikel. A split-plot design with 5 replicates for each treatment is used to test the effect of biochar amendment and a 3 °C warming on the Arctic meadow. Ten circular

  20. Improving the Glucose Meter Error Grid With the Taguchi Loss Function.

    PubMed

    Krouwer, Jan S

    2016-07-01

    Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics such as mean absolute relative deviation (MARD) are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function gives each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A zone limit). Values are averaged over all data which provides an indication of risk of an incorrect medical decision. This allows one to differentiate glucose meter performance for the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. PMID:26719136
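
    A minimal sketch of the scoring described above, assuming a simple symmetric A-zone limit of ±15% of the reference value: each meter/reference difference receives a quadratic loss that is 0 for no error and 1 at the zone boundary, and the losses are averaged over all data. The real error-grid boundaries are more involved, so this is only an illustration of the idea, not the paper's exact calculation.

      # Quadratic (Taguchi-style) loss scaled so that loss = 1 at an assumed
      # A-zone limit of +/-15% of the reference glucose value; data are simulated.
      import numpy as np

      def taguchi_loss(meter, reference, a_zone_limit=0.15):
          rel_err = (meter - reference) / reference
          return np.minimum((rel_err / a_zone_limit) ** 2, 1.0)

      rng = np.random.default_rng(7)
      reference = rng.uniform(70, 250, 200)                 # hypothetical reference values (mg/dL)
      meter = reference * (1 + rng.normal(0, 0.05, 200))    # hypothetical meter readings

      print("mean Taguchi loss:", round(float(np.mean(taguchi_loss(meter, reference))), 3))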

  1. Applying the Taguchi Method to River Water Pollution Remediation Strategy Optimization

    PubMed Central

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-01-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km. PMID:24739765

  2. Applying the Taguchi method to river water pollution remediation strategy optimization.

    PubMed

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km. PMID:24739765

  3. Design considerations and experimental analysis for silicon carbide power rectifiers

    NASA Astrophysics Data System (ADS)

    Khemka, V.; Patel, R.; Chow, T. P.; Gutmann, R. J.

    1999-10-01

    In this paper we present an investigation of the properties of silicon carbide power rectifiers, in particular Schottky, PiN and advanced hybrid power rectifiers such as the trench MOS barrier Schottky rectifier. Analyses of the forward, reverse and switching experimental characteristics are presented and these silicon carbide rectifiers are compared to silicon devices. Silicon carbide Schottky rectifiers are attractive for applications requiring blocking voltages in excess of 100 V, as the use of Si is precluded by its large specific on-resistance. Analysis of power dissipation indicates that silicon carbide Schottky rectifiers offer significant improvement over their silicon counterparts. Silicon carbide junction rectifiers, on the other hand, are superior to their silicon counterparts only for blocking voltages greater than 2000 V. The performance of acceptor (boron) and donor (phosphorus) implanted experimental silicon carbide junction rectifiers is presented and compared. Some of the recent developments in silicon carbide rectifiers are described and compared with theory and with our experimental results. The well-established silicon rectifier theories are often inadequate to describe the characteristics of the experimental silicon carbide junction rectifiers, and appropriate generalizations of these theories are presented. Experimental trench MOS barrier Schottky rectifiers (TMBS) have demonstrated significant improvement in leakage current compared to planar Schottky devices. The performance of current state-of-the-art silicon carbide rectifiers is far from theoretical predictions. Availability of high-quality silicon carbide crystals is crucial to the successful realization of these performance projections.

  4. Critical Zone Experimental Design to Assess Soil Processes and Function

    NASA Astrophysics Data System (ADS)

    Banwart, Steve

    2010-05-01

    experimental design studies soil processes across the temporal evolution of the soil profile, from its formation on bare bedrock, through managed use as productive land to its degradation under longstanding pressures from intensive land use. To understand this conceptual life cycle of soil, we have selected 4 European field sites as Critical Zone Observatories. These are to provide data sets of soil parameters, processes and functions which will be incorporated into the mathematical models. The field sites are 1) the BigLink field station which is located in the chronosequence of the Damma Glacier forefield in alpine Switzerland and is established to study the initial stages of soil development on bedrock; 2) the Lysina Catchment in the Czech Republic which is representative of productive soils managed for intensive forestry, 3) the Fuchsenbigl Field Station in Austria which is an agricultural research site that is representative of productive soils managed as arable land and 4) the Koiliaris Catchment in Crete, Greece which represents degraded Mediterranean region soils, heavily impacted by centuries of intensive grazing and farming, under severe risk of desertification.

  5. Introduction to Experimental Design: Can You Smell Fear?

    ERIC Educational Resources Information Center

    Willmott, Chris J. R.

    2011-01-01

    The ability to design appropriate experiments in order to interrogate a research question is an important skill for any scientist. The present article describes an interactive lecture-based activity centred around a comparison of two contrasting approaches to investigation of the question "Can you smell fear?" A poorly designed experiment (a video…

  6. Return to Our Roots: Raising Radishes To Teach Experimental Design.

    ERIC Educational Resources Information Center

    Stallings, William M.

    To provide practice in making design decisions, collecting and analyzing data, and writing and documenting results, a professor of statistics has his graduate students in statistics and research methodology classes design and perform an experiment on the effects of fertilizers on the growth of radishes. This project has been required of students…

  7. Design, experimentation, and modeling of a novel continuous biodrying process

    NASA Astrophysics Data System (ADS)

    Navaee-Ardeh, Shahram

    Massive production of sludge in the pulp and paper industry has made the effective sludge management increasingly a critical issue for the industry due to high landfill and transportation costs, and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills due to improved in-plant fiber recovery coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter which is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to economic dry solids level so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of mixed sludge. This makes the biodrying process more attractive compared to the conventional drying techniques because the reactor is a self-heating process. The reactor is divided into four nominal compartments and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, which are 2-3 times shorter than the residence times achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely: type of biomass feed, pH of biomass, nutrition level (C/N ratio), residence times, recycle ratio of biodried sludge, and outlet relative humidity profile along the reactor height. The key variables that were identified in the continuous

  9. Optimization of Physical Working Environment Setting to Improve Productivity and Minimize Error by Taguchi and VIKOR Methods

    NASA Astrophysics Data System (ADS)

    Ilma Rahmillah, Fety

    2016-01-01

    The working environment is one factor that contributes to a worker's performance, especially for continuous and monotonous work. An L9 Taguchi design is used for the inner array of the experiment, which was carried out in a laboratory, whereas an L4 design is used for the outer array. Four control variables with three levels each are used to find the optimal combination of working-environment settings. Four responses are also measured to assess the effect of the four control factors. ANOVA results show that the effect of illumination, temperature, and instrumental music on the number of outputs, the number of errors, and the rating of perceived discomfort is significant, with total explained variance of 54.67%, 60.67%, and 75.22%, respectively. The VIKOR method yields the optimal combination of experiment 66 with the setting condition A3-B2-C1-D3: illumination of 325-350 lux, temperature of 24-26 °C, the fast category of instrumental music, and 70-80 dB for the intensity of the music being played.
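
    The VIKOR step can be sketched as a compromise ranking of the inner-array trials across the measured responses. The code below is a minimal illustration with invented response values and equal weights; it is not the authors' analysis, only the standard VIKOR computation of group utility S, individual regret R, and the compromise index Q.

      import numpy as np

      # Hypothetical response matrix: rows = 9 Taguchi trials, columns = (outputs, errors, discomfort).
      F = np.array([[52, 4, 6.1], [55, 3, 5.8], [60, 2, 5.0],
                    [58, 3, 5.5], [63, 2, 4.7], [61, 2, 4.9],
                    [57, 4, 5.9], [64, 1, 4.5], [59, 3, 5.2]], float)
      benefit = np.array([True, False, False])   # outputs: higher is better; errors/discomfort: lower
      w = np.array([1 / 3, 1 / 3, 1 / 3])        # equal weights for the three responses

      f_best = np.where(benefit, F.max(0), F.min(0))
      f_worst = np.where(benefit, F.min(0), F.max(0))
      d = w * (f_best - F) / (f_best - f_worst)  # normalized weighted distance to the ideal

      S, R = d.sum(1), d.max(1)                  # group utility and individual regret
      v = 0.5                                    # weight of the "majority rule" strategy
      Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
      print("best trial (smallest Q):", int(np.argmin(Q)) + 1)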

  10. Music and video iconicity: theory and experimental design.

    PubMed

    Kendall, Roger A

    2005-01-01

    Experimental studies on the relationship between quasi-musical patterns and visual movement have largely focused on either referential, associative aspects or syntactical, accent-oriented alignments. Both of these are very important; however, between the referential and the areferential lies a domain where visual pattern perceptually connects to musical pattern; this is iconicity. The temporal syntax of accent structures in iconicity is hypothesized to be important. Beyond that, a multidimensional visual space connects to musical patterning through mapping of visual time/space to musical time/magnitudes. Experimental visual and musical correlates are presented and comparisons to previous research are provided. PMID:15684561

  11. EXPERIMENTAL STUDIES ON PARTICLE IMPACTION AND BOUNCE: EFFECTS OF SUBSTRATE DESIGN AND MATERIAL. (R825270)

    EPA Science Inventory

    This paper presents an experimental investigation of the effects of impaction substrate designs and material in reducing particle bounce and reentrainment. Particle collection without coating by using combinations of different impaction substrate designs and surface materials was...

  12. Evaluation Design: New York State Experimental Prekindergarten Program.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Bureau of Child Development and Parent Education.

    In order to expose disadvantaged preschool children to a variety of educational experiences and to health and social services, the New York State Legislature funded the State Experimental Prekindergarten Program (PreK). In 1975, a five-year longitudinal evaluation study was begun. The study has two major parts: (1) a general study of 5,800…

  13. Association mapping: critical considerations shift from genotyping to experimental design

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The goal of many plant scientists’ research is to explain natural phenotypic variation in terms of simple changes in DNA sequence. Traditionally, linkage mapping has been the most commonly employed method to reach this goal: experimental crosses are made to generate a family with known relatedness ...

  14. Leveraging the Experimental Method to Inform Solar Cell Design

    ERIC Educational Resources Information Center

    Rose, Mary Annette; Ribblett, Jason W.; Hershberger, Heather Nicole

    2010-01-01

    In this article, the underlying logic of experimentation is exemplified within the context of a photoelectrical experiment for students taking a high school engineering, technology, or chemistry class. Students assume the role of photochemists as they plan, fabricate, and experiment with a solar cell made of copper and an aqueous solution of…

  15. Experimental design for research on shock-turbulence interaction

    NASA Technical Reports Server (NTRS)

    Radcliffe, S. W.

    1969-01-01

    Report investigates the production of acoustic waves in the interaction of a supersonic shock and a turbulence environment. The five stages of the investigation are apparatus design, development of instrumentation, preliminary experiment, turbulence generator selection, and main experiments.

  16. International Thermonuclear Experimental Reactor (ITER) neutral beam design

    SciTech Connect

    Myers, T.J.; Brook, J.W.; Spampinato, P.T.; Mueller, J.P.; Luzzi, T.E.; Sedgley, D.W. . Space Systems Div.)

    1990-10-01

    This report discusses the following topics on ITER neutral beam design: ion dump; neutralizer and module gas flow analysis; vacuum system; cryogenic system; maintainability; power distribution; and system cost.

  17. Experimental launcher facility - ELF-I: Design and operation

    NASA Astrophysics Data System (ADS)

    Deis, D. W.; Ross, D. P.

    1982-01-01

    In order to investigate the general area of ultra-high-current density, high-velocity sliding contacts as applied to electromagnetic launcher armatures, a small experimental launcher, ELF-I, has been developed, and preliminary experiments have been performed. The system uses a 36 kJ, 5 kV capacitor bank as a primary pulse power source. When used in conjunction with a 5-microhenry pulse conditioning coil, a 100-kA peak current and 10-ms-wide pulse is obtained. A three-station 150 kV flash X-ray system is operational for obtaining in-bore photographs of the projectiles. Experimental results obtained for both metal and plasma armatures at sliding velocities of up to 1 km/s are discussed with emphasis on armature-rail interactions.

  18. Teaching Simple Experimental Design to Undergraduates: Do Your Students Understand the Basics?

    ERIC Educational Resources Information Center

    Hiebert, Sara M.

    2007-01-01

    This article provides instructors with guidelines for teaching simple experimental design for the comparison of two treatment groups. Two designs with specific examples are discussed along with common misconceptions that undergraduate students typically bring to the experiment design process. Features of experiment design that maximize power and…

  19. Optimization of experimental designs by incorporating NIF facility impacts

    NASA Astrophysics Data System (ADS)

    Eder, D. C.; Whitman, P. K.; Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T.; Parham, T. G.; Koerner, J. G.; Dixit, S. N.; Suratwala, T. I.; Blue, B. E.; Hansen, J. F.; Tobin, M. T.; Robey, H. F.; Spaeth, M. L.; MacGowan, B. J.

    2006-06-01

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm thick borosilicate disposable debris shields (DDS) blocks the majority of debris and shrapnel fragments from reaching the relatively expensive MDS's. However, DDS's cannot stop large, fast moving fragments. We have experimentally demonstrated one shrapnel mitigation technique showing that it is possible to direct fast moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDS's. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to assure that all NIF experimental campaigns meet the requirements on allowed level of debris and shrapnel generation.

  20. Designing free energy surfaces that match experimental data with metadynamics.

    PubMed

    White, Andrew D; Dama, James F; Voth, Gregory A

    2015-06-01

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. We previously introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. In this work, we introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. The example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model. PMID:26575545

  1. Experimental Evaluation of Three Designs of Electrodynamic Flexural Transducers.

    PubMed

    Eriksson, Tobias J R; Laws, Michael; Kang, Lei; Fan, Yichao; Ramadas, Sivaram N; Dixon, Steve

    2016-01-01

    Three designs for electrodynamic flexural transducers (EDFT) for air-coupled ultrasonics are presented and compared. An all-metal housing was used for robustness, which makes the designs more suitable for industrial applications. The housing is designed such that there is a thin metal plate at the front, with a fundamental flexural vibration mode at ∼50 kHz. By using a flexural resonance mode, good coupling to the load medium was achieved without the use of matching layers. The front radiating plate is actuated electrodynamically by a spiral coil inside the transducer, which produces an induced magnetic field when an AC current is applied to it. The transducers operate without the use of piezoelectric materials, which can simplify manufacturing and prolong the lifetime of the transducers, as well as open up possibilities for high-temperature applications. The results show that different designs perform best for the generation and reception of ultrasound. All three designs produced large acoustic pressure outputs, with a recorded sound pressure level (SPL) above 120 dB at a 40 cm distance from the highest output transducer. The sensitivity of the transducers was low, however, with single shot signal-to-noise ratio (SNR) ≃ 15 dB in transmit-receive mode, with transmitter and receiver 40 cm apart. PMID:27571075

  2. Using a hybrid approach to optimize experimental network design for aquifer parameter identification.

    PubMed

    Chang, Liang-Cheng; Chu, Hone-Jay; Lin, Yu-Pin; Chen, Yu-Wen

    2010-10-01

    This research develops an optimum design model of a groundwater network using a genetic algorithm (GA) and a modified Newton approach, based on the experimental design concept. The goal of experimental design is to minimize parameter uncertainty, represented by the determinant of the covariance matrix of the estimated parameters. The design problem is constrained by a specified cost and solved by the GA and a parameter identification model. The latter estimates the optimum parameter values and their associated sensitivity matrices. The general problem is simplified into two classes of network design problems: an observation network design problem and a pumping network design problem. Results explore the relationship between the experimental design and the physical processes. The proposed model provides an alternative to solve optimization problems for groundwater experimental design. PMID:19757116
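
    The design criterion described here (minimizing the determinant of the parameter covariance matrix) can be sketched with a toy D-criterion search. In the sketch below the sensitivity matrix, the candidate wells, and the exhaustive search are all placeholders for the paper's groundwater model and genetic algorithm.

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical sensitivity matrix: rows = candidate observation wells,
      # columns = d(head)/d(parameter) for 3 aquifer parameters.
      J_all = rng.normal(size=(12, 3))

      def d_criterion(rows):
          """Smaller det of the approximate parameter covariance inv(J^T J) is better."""
          J = J_all[list(rows)]
          return np.linalg.det(np.linalg.inv(J.T @ J))

      budget = 5  # number of wells we can afford to monitor
      best = min(itertools.combinations(range(12), budget), key=d_criterion)
      print("selected wells:", best, "cov det:", d_criterion(best))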

  3. Engineering design of a throat valve experimental facility

    NASA Astrophysics Data System (ADS)

    Osofsky, Irving B.; Hove, Duane T.; Derbes, William C.

    1995-06-01

    This report covers the design of a gas dynamic test facility. The facility studied is a medium-scale blast simulator. The primary use of the facility would be to test fast-acting, computer-controlled valves. The valve would be used to control nuclear blast simulation by controlling the release of high pressure gas from drivers into an expansion tunnel to form a shock wave. The development of the valves themselves is reported elsewhere. The facility is composed of a heated gas supply, driver tube, expansion tunnel, reaction pier, piping, sensors, and controls. The driver tube and heated gas supply are existing components. The expansion tunnel, piping, sensors, and controls are all new components. Much of the report is devoted to the design of the reaction pier and the development of heat transfer relations used in designing the piping and controls.

  4. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The paper summarizes the results obtained in an exploratory evaluation of ceramics for automobile thermal reactors. Candidate ceramic materials were evaluated in several reactor designs using both engine dynamometer and vehicle road tests. Silicon carbide contained in a corrugated metal support structure exhibited the best performance, lasting 1100 hours in engine dynamometer tests and for more than 38,600 kilometers (24,000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  5. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The results obtained in an exploratory evaluation of ceramics for automobile thermal reactors are summarized. Candidate ceramic materials were evaluated in several reactor designs by using both engine-dynamometer and vehicle road tests. Silicon carbide contained in a corrugated-metal support structure exhibited the best performance, lasting 1100 hr in engine-dynamometer tests and more than 38,600 km (24000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as those containing silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  6. Design and experimental validation of a compact collimated Knudsen source.

    PubMed

    Wouters, Steinar H W; Ten Haaf, Gijs; Mutsaers, Peter H A; Vredenbregt, Edgar J D

    2016-08-01

    In this paper, the design and performance of a collimated Knudsen source, which has the benefit of a simple design over recirculating sources, is discussed. Measurements of the flux, transverse velocity distribution, and brightness of the resulting rubidium beam at different source temperatures were conducted to evaluate the performance. The scaling of the flux and brightness with the source temperature follows the theoretical predictions. The transverse velocity distribution in the transparent operation regime also agrees with the simulated data. The source was tested up to a temperature of 433 K and was able to produce a flux in excess of 10¹³ s⁻¹. PMID:27587111

  7. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
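
    For readers who want a quick feel for such power computations, the sketch below uses the common design-effect shortcut for a two-level (cluster-randomized) comparison with a normal approximation; it is not the power-table procedure of the article, and all inputs are illustrative.

      from math import sqrt
      from scipy.stats import norm

      def power_cluster(delta, m_clusters, n_per_cluster, icc, alpha=0.05):
          """Normal-approximation power for comparing two arms of m clusters of size n each,
          given a standardized effect size delta and intraclass correlation icc."""
          deff = 1 + (n_per_cluster - 1) * icc            # design effect
          se = sqrt(2 * deff / (m_clusters * n_per_cluster))
          return norm.cdf(delta / se - norm.ppf(1 - alpha / 2))

      # Illustrative inputs: 20 clusters of 25 per arm, ICC = 0.05, effect size 0.30 SD.
      print(round(power_cluster(delta=0.30, m_clusters=20, n_per_cluster=25, icc=0.05), 2))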

  8. Using Propensity Scores in Quasi-Experimental Designs to Equate Groups

    ERIC Educational Resources Information Center

    Lane, Forrest C.; Henson, Robin K.

    2010-01-01

    Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…

  9. EXPERIMENTAL DESIGN AND INSTRUMENTATION FOR A FIELD EXPERIMENT

    EPA Science Inventory

    This report concerns the design of a field experiment for a military setting in which the effects of carbon monoxide on neurobehavioral variables are to be studied. A field experiment is distinguished from a survey by the fact that independent variables are manipulated, just as in t...

  10. The Inquiry Flame: Scaffolding for Scientific Inquiry through Experimental Design

    ERIC Educational Resources Information Center

    Pardo, Richard; Parker, Jennifer

    2010-01-01

    In the lesson presented in this article, students learn to organize their thinking and design their own inquiry experiments through careful observation of an object, situation, or event. They then conduct these experiments and report their findings in a lab report, poster, trifold board, slide, or video that follows the typical format of the…

  11. Creativity in Advertising Design Education: An Experimental Study

    ERIC Educational Resources Information Center

    Cheung, Ming

    2011-01-01

    Have you ever thought about why qualities whose definitions are elusive, such as those of a sunset or a half-opened rose, affect us so powerfully? According to de Saussure (Course in general linguistics, 1983), the making of meanings is closely related to the production and interpretation of signs. All types of design, including advertising…

  12. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…

  13. 2-[(Hydroxymethyl)amino]ethanol in water as a preservative: Study of formaldehyde released by Taguchi's method

    NASA Astrophysics Data System (ADS)

    Wisessirikul, W.; Loykulnant, S.; Montha, S.; Fhulua, T.; Prapainainar, P.

    2016-06-01

    This research studied the quantity of free formaldehyde released from 2-[(hydroxymethyl)amino]ethanol (HAE) in a mixture of DI water and natural rubber latex using a high-performance liquid chromatography (HPLC) technique. The quantity of formaldehyde retained in the solution was cross-checked by a titration technique. The investigated factors were the concentration of the preservative (HAE), pH, and temperature. Taguchi's method was used to design the experiments: an orthogonal array (3 factors with 4 levels each) reduced the number of experiments to 16 from the full set of possible combinations. The Minitab program was used as a tool for the statistical calculations and for finding a suitable condition for the preservative system. HPLC studies showed that higher temperature and higher concentration of the preservative influence the amount of formaldehyde released. The conditions at which formaldehyde was released in the lowest amount were 1.6% w/v HAE, 4 to 40 °C, and the original pH. Nevertheless, the pH value of NR latex should be more than 10 (the suitable pH value was found to be 13). This preservative can be used to replace current preservative systems and can maintain the quality of latex for long-term storage. Use of the proposed preservative system was also shown to have a reduced toxicity impact on the environment.
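
    Screening the lowest-formaldehyde setting from such a design amounts to comparing level means (main effects) for each factor. The sketch below is purely illustrative: the 16-run array is generated with a simple modular construction, and the response values are invented stand-ins for the measured formaldehyde data.

      import numpy as np

      # OA(16, 3, 4, 2): three 4-level factors in 16 runs, built as A = i, B = j, C = (i + j) mod 4,
      # so every level pair of any two factors occurs exactly once.
      runs = np.array([[i, j, (i + j) % 4] for i in range(4) for j in range(4)])

      factor_names = ["HAE conc.", "pH", "temperature"]

      # Hypothetical measured free-formaldehyde responses for the 16 runs (lower is better).
      y = np.array([3.1, 3.4, 3.9, 4.4, 2.8, 3.0, 3.6, 4.1,
                    2.6, 2.9, 3.3, 3.8, 2.5, 2.7, 3.2, 3.7])

      for f, name in enumerate(factor_names):
          level_means = [y[runs[:, f] == lv].mean() for lv in range(4)]
          best = int(np.argmin(level_means))
          print(f"{name}: level means {np.round(level_means, 2)} -> choose level {best}")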

  14. Design and modeling considerations for experimental railgun armatures

    NASA Astrophysics Data System (ADS)

    Sink, D. A.; Krzastek, L. J.

    1991-01-01

    A calculational model for obtaining detailed armature parameters associated with railgun launches has been developed. Calculated parameters are obtained for device features and operating conditions supplied as input parameters. The model was validated by reproducing several sets of experimental data from a variety of devices. Model parameters associated with armature mass loss and plasma axial profiles were obtained as part of anchoring the calculations. The data included complete sets of dynamics, armature lengths, and muzzle voltages for each case studied. From the calculations, several differences between the various types of armatures (i.e., solid, hybrid, and plasma) and bore sizes were identified and found to account for the resulting performance features.

  15. Tocorime Apicu: design and validation of an experimental search engine

    NASA Astrophysics Data System (ADS)

    Walker, Reginald L.

    2001-07-01

    In the development of an integrated, experimental search engine, Tocorime Apicu, the incorporation and emulation of the evolutionary aspects of the chosen biological model (honeybees) and the field of high-performance knowledge discovery in databases result in the coupling of diverse fields of research: evolutionary computations, biological modeling, machine learning, statistical methods, information retrieval systems, active networks, and data visualization. The use of computer systems provides inherent sources of self-similarity traffic that result from the interaction of file transmission, caching mechanisms, and user-related processes. These user-related processes are initiated by the user, application programs, or the operating system (OS) for the user's benefit. The effect of Web transmission patterns, coupled with these inherent sources of self-similarity associated with the above file system characteristics, provides an environment for studying network traffic. The study was client-based but involved no user interaction. New methodologies and approaches were needed as network packet traffic increased in the LAN, LAN+WAN, and WAN. Statistical tools and methods for analyzing datasets were used to organize data captured at the packet level for network traffic between individual source/destination pairs. Emulation of the evolutionary aspects of the biological model equips the experimental search engine with an adaptive system model which will eventually have the capability to evolve with an ever-changing World Wide Web environment. The results were generated using a LINUX OS.

  16. Creating A Data Base For Design Of An Impeller

    NASA Technical Reports Server (NTRS)

    Prueger, George H.; Chen, Wei-Chung

    1993-01-01

    Report describes use of Taguchi method of parametric design to create data base facilitating optimization of design of impeller in centrifugal pump. Data base enables systematic design analysis covering all significant design parameters. Reduces time and cost of parametric optimization of design: for particular impeller considered, one can cover 4,374 designs by computational simulations of performance for only 18 cases.
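
    The quoted reduction is consistent with a standard Taguchi L18 array: a full factorial over one 2-level factor and seven 3-level factors has 2 x 3^7 = 4,374 combinations, whereas the L18(2^1 x 3^7) array screens the main effects in 18 runs. The factor structure is an assumption inferred from the run counts; the arithmetic is sketched below.

      # Assumed factor structure: one 2-level and seven 3-level design parameters.
      full_factorial = 2 * 3 ** 7     # 4374 possible impeller designs
      l18_runs = 18                   # Taguchi L18(2^1 x 3^7) orthogonal array
      print(full_factorial, full_factorial // l18_runs)   # 4374 designs, 243x fewer simulations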

  17. High-power CMUTs: design and experimental verification.

    PubMed

    Yamaner, F Yalçin; Olçum, Selim; Oğuz, H Kağan; Bozkurt, Ayhan; Köymen, Hayrettin; Atalar, Abdullah

    2012-06-01

    Capacitive micromachined ultrasonic transducers (CMUTs) have great potential to compete with piezoelectric transducers in high-power applications. As the output pressures increase, nonlinearity of the CMUT must be reconsidered and optimization is required to reduce harmonic distortions. In this paper, we describe a design approach in which uncollapsed CMUT array elements are sized so as to operate at the maximum radiation impedance and have gap heights such that the generated electrostatic force can sustain a plate displacement with full swing at the given drive amplitude. The proposed design enables high output pressures and low harmonic distortions at the output. An equivalent circuit model of the array is used that accurately simulates the uncollapsed mode of operation. The model facilitates the design of CMUT parameters for high-pressure output, without the need for computationally intensive FEM tools. The optimized design requires a relatively thick plate compared with a conventional CMUT plate. Thus, we used a silicon wafer as the CMUT plate. The fabrication process involves an anodic bonding process for bonding the silicon plate with the glass substrate. To eliminate the bias voltage, which may cause charging problems, the CMUT array is driven with large continuous wave signals at half of the resonant frequency. The fabricated arrays are tested in an oil tank by applying a 125-V peak 5-cycle burst sinusoidal signal at 1.44 MHz. The applied voltage is increased until the plate is about to touch the bottom electrode to get the maximum peak displacement. The observed pressure is about 1.8 MPa with -28 dBc second harmonic at the surface of the array. PMID:22718878

  18. Development and Validation of a Rubric for Diagnosing Students’ Experimental Design Knowledge and Difficulties

    PubMed Central

    Dasgupta, Annwesa P.; Anderson, Trevor R.

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students’ responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students’ experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  19. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students' responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students' experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  20. Process designed for experimentation for increased-caliper Fresnel lenses

    SciTech Connect

    Zderad, A.J.

    1992-04-01

    The feasibility of producing increased caliper linear and point focus Fresnel lenses in a continuous sheet is described. Both an 8.16-inch-square radial 2 × 7 parquet and a 22-inch-wide linear lens were produced at 0.11-inch caliper. The primary purpose of this experimentation is to determine the replication effectiveness and production rate of the polymeric web process at increased thickness. The results demonstrated that both radial and linear lenses, at increased caliper, can be replicated with performance comparable to that of the current state-of-the-art 3M laminated lenses; however, the radial parquets were bowed on the edges. Additional process development is necessary to solve this problem. Current estimates are that the 0.11-inch caliper parquets cost significantly more than customer laminated parquets using 0.022-inch thick lensfilm.

  1. Experimental design and quality assurance: in situ fluorescence instrumentation

    USGS Publications Warehouse

    Conmy, Robyn N.; Del Castillo, Carlos E.; Downing, Bryan D.; Chen, Robert F.

    2014-01-01

    Both instrument design and capabilities of fluorescence spectroscopy have greatly advanced over the last several decades. Advancements include solid-state excitation sources, integration of fiber optic technology, highly sensitive multichannel detectors, rapid-scan monochromators, sensitive spectral correction techniques, and improved data manipulation software (Christian et al., 1981; Lochmuller and Saavedra, 1986; Cabniss and Shuman, 1987; Lakowicz, 2006; Hudson et al., 2007). The cumulative effect of these improvements has pushed the limits and expanded the application of fluorescence techniques to numerous scientific research fields. One of the more powerful advancements is the ability to obtain in situ fluorescence measurements of natural waters (Moore, 1994). The development of submersible fluorescence instruments has been made possible by component miniaturization and power reduction, including advances in light source technologies (light-emitting diodes, xenon lamps, ultraviolet [UV] lasers) and the compatible integration of new optical instruments with various sampling platforms (Twardowski et al., 2005 and references therein). The development of robust field sensors skirts the need for cumbersome and/or time-consuming filtration techniques, avoids the potential artifacts associated with sample storage, and improves on coarse sampling designs by increasing spatiotemporal resolution (Chen, 1999; Robinson and Glenn, 1999). The ability to obtain rapid, high-quality, highly sensitive measurements over steep gradients has revolutionized investigations of dissolved organic matter (DOM) optical properties, thereby enabling researchers to address novel biogeochemical questions regarding colored or chromophoric DOM (CDOM). This chapter is dedicated to the origin, design, calibration, and use of in situ field fluorometers. It will serve as a review of considerations to be accounted for during the operation of fluorescence field sensors and call attention to areas of concern when making

  2. Apollo-Soyuz pamphlet no. 4: Gravitational field. [experimental design

    NASA Technical Reports Server (NTRS)

    Page, L. W.; From, T. P.

    1977-01-01

    Two Apollo Soyuz experiments designed to detect gravity anomalies from spacecraft motion are described. The geodynamics experiment (MA-128) measured large-scale gravity anomalies by detecting small accelerations of Apollo in the 222 km orbit, using Doppler tracking from the ATS-6 satellite. Experiment MA-089 measured 300 km anomalies on the earth's surface by detecting minute changes in the separation between Apollo and the docking module. Topics discussed in relation to these experiments include the Doppler effect, gravimeters, and the discovery of mascons on the moon.

  3. Analysis, design and experimental characterization of electrostatically actuated gas micropumps

    NASA Astrophysics Data System (ADS)

    Astle, Aaron A.

    The goal of this work is to realize a high-performance, multi-stage micropump integrated within a wireless micro gas chromatograph (μGC) for measuring airborne environmental pollutants. The work described herein focuses on the development of high-fidelity mathematical and physical design models, and the testing and validation of the most promising models with large-scale and micro-scale (MEMS) pump prototypes. It is shown that an electrostatically-actuated, multistage, diaphragm micropump with active valve control provides the best expected performance for this application. A hierarchy of models is developed to characterize the various factors governing micropump performance. This includes a thermodynamic model, an idealized reduced-order model and a reduced-order model that incorporates realistic valve flow effects and accounts for fluidic load. The reduced-order models are based on fundamental fluid dynamic principles and allow predictions of flow rate and pressure rise as a function of geometric design variables and drive signal. The reduced-order models are validated in several tests. Two-stage, 20x scale pump results reveal the need to incorporate realistic valve flow effects and the output load for accurate modeling. The more realistic reduced-order model is then validated using micropumps with two and four pumping stages. The reduced-order model captures the micropump performance accurately, provided that separate measurements of valve pressure losses and pump geometry are used. The four-stage micropump fabricated using theoretical model guidelines from this research provides a maximum flow rate and pressure rise of 3 cm³/min and 1.75 kPa/stage, respectively, with a power consumption of only 4 mW per stage. The four-stage micropump occupies an area of 54 mm². Each pumping cavity has a volume of 6×10⁻⁶ m³. This performance indicates that this pump design will be sufficient to meet the requirements for extended field operation of a wireless integrated μGC. During

  4. The ISR Asymmetrical Capacitor Thruster: Experimental Results and Improved Designs

    NASA Technical Reports Server (NTRS)

    Canning, Francis X.; Cole, John; Campbell, Jonathan; Winet, Edwin

    2004-01-01

    A variety of Asymmetrical Capacitor Thrusters has been built and tested at the Institute for Scientific Research (ISR). The thrust produced for various voltages has been measured, along with the current flowing, both between the plates and to ground through the air (or other gas). VHF radiation due to Trichel pulses has been measured and correlated over short time scales to the current flowing through the capacitor. A series of designs were tested, which were increasingly efficient. Sharp features on the leading capacitor surface (e.g., a disk) were found to increase the thrust. Surprisingly, combining that with sharp wires on the trailing edge of the device produced the largest thrust. Tests were performed for both polarizations of the applied voltage, and for grounding one or the other capacitor plate. In general (but not always) it was found that the direction of the thrust depended on the asymmetry of the capacitor rather than on the polarization of the voltage. While no force was measured in a vacuum, some suggested design changes are given for operation in reduced pressures.

  5. Synthetic tracked aperture ultrasound imaging: design, simulation, and experimental evaluation.

    PubMed

    Zhang, Haichong K; Cheng, Alexis; Bottenus, Nick; Guo, Xiaoyu; Trahey, Gregg E; Boctor, Emad M

    2016-04-01

    Ultrasonography is a widely used imaging modality to visualize anatomical structures due to its low cost and ease of use; however, it is challenging to acquire acceptable image quality in deep tissue. Synthetic aperture (SA) is a technique used to increase image resolution by synthesizing information from multiple subapertures, but the resolution improvement is limited by the physical size of the array transducer. With a large F-number, it is difficult to achieve high resolution in deep regions without extending the effective aperture size. We propose a method to extend the available aperture size for SA, called synthetic tracked aperture ultrasound (STRATUS) imaging, by sweeping an ultrasound transducer while tracking its orientation and location. Tracking information of the ultrasound probe is used to synthesize the signals received at different positions. Considering the practical implementation, we estimated the effect of tracking and ultrasound calibration error on the quality of the final beamformed image through simulation. In addition, to experimentally validate this approach, a 6 degree-of-freedom robot arm was used as a mechanical tracker to hold an ultrasound transducer and to apply in-plane lateral translational motion. Results indicate that STRATUS imaging with robotic tracking has the potential to improve ultrasound image quality. PMID:27088108
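
    The synthesis step, coherently summing echoes recorded at tracked probe positions, is in essence delay-and-sum beamforming with element positions supplied by the tracker. The sketch below evaluates one focal point under that idea; the geometry, sampling rate, and RF data are placeholders rather than the authors' implementation.

      import numpy as np

      c = 1540.0          # assumed speed of sound in tissue, m/s
      fs = 40e6           # assumed sampling rate of the RF data, Hz

      # Placeholder tracked receive positions (x, z in meters): one per sweep step.
      positions = np.stack([np.linspace(-0.02, 0.02, 64), np.zeros(64)], axis=1)

      # Placeholder RF data: rows = tracked positions, columns = time samples.
      rf = np.random.default_rng(2).normal(size=(64, 4096))

      def das_pixel(focus, rf, positions, tx=np.array([0.0, 0.0])):
          """Delay-and-sum value at one focal point for a synthetic (tracked) aperture."""
          t = (np.linalg.norm(focus - tx) + np.linalg.norm(focus - positions, axis=1)) / c
          idx = np.clip(np.round(t * fs).astype(int), 0, rf.shape[1] - 1)
          return rf[np.arange(len(positions)), idx].sum()

      print(das_pixel(np.array([0.0, 0.05]), rf, positions))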

  6. Improved field experimental designs and quantitative evaluation of aquatic ecosystems

    SciTech Connect

    McKenzie, D.H.; Thomas, J.M.

    1984-05-01

    The paired-station concept and a log transformed analysis of variance were used as methods to evaluate zooplankton density data collected during five years at an electrical generation station on Lake Michigan. To discuss the example and the field design necessary for a valid statistical analysis, considerable background is provided on the questions of selecting (1) sampling station pairs, (2) experimentwise error rates for multi-species analyses, (3) levels of Type I and II error rates, (4) procedures for conducting the field monitoring program, and (5) a discussion of the consequences of violating statistical assumptions. Details for estimating sample sizes necessary to detect changes of a specified magnitude are included. Both statistical and biological problems with monitoring programs (as now conducted) are addressed; serial correlation of successive observations in the time series obtained was identified as one principal statistical difficulty. The procedure reduces this problem to a level where statistical methods can be used confidently. 27 references, 4 figures, 2 tables.
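
    The sample-size estimates mentioned above typically reduce to the standard two-sample normal approximation. The sketch below is illustrative only: the effect size, the residual standard deviation of the log-transformed densities, and the error rates are assumed values, not those of the report.

      from math import ceil
      from scipy.stats import norm

      def n_per_group(delta, sigma, alpha=0.05, power=0.80):
          """Two-sample normal approximation for the samples needed per station/period."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return ceil(2 * (z * sigma / delta) ** 2)

      # Detect a 25% change in log-transformed density (delta = ln(1.25)) with an assumed sigma = 0.6.
      print(n_per_group(delta=0.2231, sigma=0.6))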

  7. Experimental-design techniques in reliability-growth assessment

    NASA Astrophysics Data System (ADS)

    Benski, H. C.; Cabau, Emmanuel

    Several recent statistical methods, including a Bayesian technique, have been proposed to detect the presence of significant effects in unreplicated factorials. It is recognized that these techniques were developed for s-normally distributed responses, and this may or may not be the case for times between failures. In fact, for homogeneous Poisson processes (HPPs), these times are exponentially distributed. Still, response data transformations can be applied to these times so that, at least approximately, these procedures can be used. It was therefore considered important to determine how well these different techniques performed in terms of power. The results of an extensive Monte Carlo simulation are presented in which the power of these techniques is analyzed. The actual details of a fractional factorial design applied in the context of reliability growth are described. Finally, power comparison results are presented.
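
    One widely cited procedure for flagging active effects in an unreplicated factorial is Lenth's pseudo-standard-error method; it is sketched below as a concrete example of the class of techniques being compared, using invented contrasts from log-transformed times between failures (the paper's own candidate methods and data are not reproduced here).

      import numpy as np
      from scipy.stats import t

      def lenth_active(effects, alpha=0.05):
          """Lenth's pseudo-standard-error test for an unreplicated two-level factorial."""
          c = np.abs(np.asarray(effects, float))
          s0 = 1.5 * np.median(c)
          pse = 1.5 * np.median(c[c < 2.5 * s0])      # trimmed median -> pseudo standard error
          d = len(c) / 3.0                            # Lenth's recommended degrees of freedom
          me = t.ppf(1 - alpha / 2, d) * pse          # margin of error for individual effects
          return c > me

      # Hypothetical contrasts from a 2^(4-1) design on log times-between-failures.
      effects = [4.2, -0.3, 0.5, 3.1, -0.2, 0.4, -0.6]
      print(lenth_active(effects))   # flags the two large contrasts as active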

  8. Case study 1. Practical considerations with experimental design and interpretation.

    PubMed

    Barr, John T; Flora, Darcy R; Iwuchukwu, Otito F

    2014-01-01

    At some point, anyone with knowledge of drug metabolism and enzyme kinetics started out knowing little about these topics. This chapter was specifically written with the novice in mind. Regardless of the enzyme one is working with or the goal of the experiment itself, there are fundamental components and concepts of every experiment using drug metabolism enzymes. The following case studies provide practical tips, techniques, and answers to questions that may arise in the course of conducting such experiments. Issues ranging from assay design and development to data interpretation are addressed. The goal of this section is to act as a starting point to provide the reader with key questions and guidance while attempting his/her own work. PMID:24523122

  9. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    ERIC Educational Resources Information Center

    Smith, Justin D.

    2012-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…

  10. A rational design change methodology based on experimental and analytical modal analysis

    SciTech Connect

    Weinacht, D.J.; Bennett, J.G.

    1993-08-01

    A design methodology that integrates analytical modeling and experimental characterization is presented. This methodology represents a powerful tool for making rational design decisions and changes. An example of its implementation in the design, analysis, and testing of a precision machine tool support structure is given.

  11. Recent developments in optimal experimental designs for functional magnetic resonance imaging

    PubMed Central

    Kao, Ming-Hung; Temkit, M'hamed; Wong, Weng Kee

    2014-01-01

    Functional magnetic resonance imaging (fMRI) is one of the leading brain mapping technologies for studying brain activity in response to mental stimuli. For neuroimaging studies utilizing this pioneering technology, there is a great demand for high-quality experimental designs that help to collect informative data to make precise and valid inferences about brain functions. This paper provides a survey on recent developments in experimental designs for fMRI studies. We briefly introduce some analytical and computational tools for obtaining good designs based on a specified design selection criterion. Research results about some commonly considered designs such as blocked designs and m-sequences are also discussed. Moreover, we present a recently proposed new type of fMRI design that can be constructed using a certain type of Hadamard matrices. Under certain assumptions, these designs can be shown to be statistically optimal. Some future research directions in design of fMRI experiments are also discussed. PMID:25071884
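
    One of the design families mentioned, m-sequences, can be generated with a maximal-length linear feedback shift register. The sketch below uses a 5-bit Fibonacci LFSR with taps (5, 2), one standard maximal-length configuration, to produce a 31-element binary sequence; the register length and taps are just an example, not a recommendation from the paper.

      def m_sequence(taps=(5, 2), length=31, seed=None):
          """Binary m-sequence from a Fibonacci LFSR; taps (5, 2) give a maximal-length
          register of degree 5, so one period is 2**5 - 1 = 31 bits."""
          n = max(taps)
          state = list(seed) if seed is not None else [1] * n   # any nonzero seed works
          out = []
          for _ in range(length):
              out.append(state[-1])                  # output the last register bit
              fb = 0
              for tp in taps:                        # feedback = XOR of tapped bits
                  fb ^= state[tp - 1]
              state = [fb] + state[:-1]              # shift right, insert feedback
          return out

      seq = m_sequence()
      print(seq, sum(seq))   # one full period: 31 bits with 16 ones and 15 zeros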

  12. Patient reactions to personalized medicine vignettes: An experimental design

    PubMed Central

    Butrick, Morgan; Roter, Debra; Kaphingst, Kimberly; Erby, Lori H.; Haywood, Carlton; Beach, Mary Catherine; Levy, Howard P.

    2011-01-01

    Purpose: Translational investigation on personalized medicine is in its infancy. Exploratory studies reveal attitudinal barriers to “race-based medicine” and cautious optimism regarding genetically personalized medicine. This study describes patient responses to hypothetical conventional, race-based, or genetically personalized medicine prescriptions. Methods: Three hundred eighty-seven participants (mean age = 47 years; 46% white) recruited from a Baltimore outpatient center were randomized to this vignette-based experimental study. They were asked to imagine a doctor diagnosing a condition and prescribing them one of three medications. The outcomes are emotional response to vignette, belief in vignette medication efficacy, experience of respect, trust in the vignette physician, and adherence intention. Results: Race-based medicine vignettes were appraised more negatively than conventional vignettes across the board (Cohen’s d = −0.57 [−0.64, −0.51], P < 0.001). Participants rated genetically personalized medicine comparably with conventional medicine (Cohen’s d = −0.15 [−0.17, −0.14], P = 0.47), with the exception of reduced adherence intention to genetically personalized medicine (Cohen’s d = −0.41 [−0.44, −0.38], P = 0.009). This relative reluctance to take genetically personalized medicine was pronounced for racial minorities (Cohen’s d = −0.31 [−0.38, −0.25], P = 0.02) and was related to trust in the vignette physician (change in R² = 0.23, P < 0.001). Conclusions: This study demonstrates a relative reluctance to embrace personalized medicine technology, especially among racial minorities, and highlights enhancement of adherence through improved doctor-patient relationships. PMID:21270639

  13. A Modified Experimental Hut Design for Studying Responses of Disease-Transmitting Mosquitoes to Indoor Interventions: The Ifakara Experimental Huts

    PubMed Central

    Okumu, Fredros O.; Moore, Jason; Mbeyela, Edgar; Sherlock, Mark; Sangusangu, Robert; Ligamba, Godfrey; Russell, Tanya; Moore, Sarah J.

    2012-01-01

    Differences between individual human houses can confound results of studies aimed at evaluating indoor vector control interventions such as insecticide treated nets (ITNs) and indoor residual insecticide spraying (IRS). Specially designed and standardised experimental huts have historically provided a solution to this challenge, with an added advantage that they can be fitted with special interception traps to sample entering or exiting mosquitoes. However, many of these experimental hut designs have a number of limitations, for example: 1) inability to sample mosquitoes on all sides of huts, 2) increased likelihood of live mosquitoes flying out of the huts, leaving mainly dead ones, 3) difficulties of cleaning the huts when a new insecticide is to be tested, and 4) the generally small size of the experimental huts, which can misrepresent actual local house sizes or airflow dynamics in the local houses. Here, we describe a modified experimental hut design, the Ifakara Experimental Huts, and explain how these huts can be used to more realistically monitor behavioural and physiological responses of wild, free-flying disease-transmitting mosquitoes, including the African malaria vectors of the species complexes Anopheles gambiae and Anopheles funestus, to indoor vector control technologies including ITNs and IRS. Important characteristics of the Ifakara experimental huts include: 1) interception traps fitted onto eave spaces and windows, 2) use of eave baffles (panels that direct mosquito movement) to control exit of live mosquitoes through the eave spaces, 3) use of replaceable wall panels and ceilings, which allow safe insecticide disposal and reuse of the huts to test different insecticides in successive periods, 4) the kit format of the huts allowing portability and 5) an improved suite of entomological procedures to maximise data quality. PMID:22347415

  14. A modified experimental hut design for studying responses of disease-transmitting mosquitoes to indoor interventions: the Ifakara experimental huts.

    PubMed

    Okumu, Fredros O; Moore, Jason; Mbeyela, Edgar; Sherlock, Mark; Sangusangu, Robert; Ligamba, Godfrey; Russell, Tanya; Moore, Sarah J

    2012-01-01

    Differences between individual human houses can confound results of studies aimed at evaluating indoor vector control interventions such as insecticide treated nets (ITNs) and indoor residual insecticide spraying (IRS). Specially designed and standardised experimental huts have historically provided a solution to this challenge, with an added advantage that they can be fitted with special interception traps to sample entering or exiting mosquitoes. However, many of these experimental hut designs have a number of limitations, for example: 1) inability to sample mosquitoes on all sides of huts, 2) increased likelihood of live mosquitoes flying out of the huts, leaving mainly dead ones, 3) difficulties of cleaning the huts when a new insecticide is to be tested, and 4) the generally small size of the experimental huts, which can misrepresent actual local house sizes or airflow dynamics in the local houses. Here, we describe a modified experimental hut design, the Ifakara Experimental Huts, and explain how these huts can be used to more realistically monitor behavioural and physiological responses of wild, free-flying disease-transmitting mosquitoes, including the African malaria vectors of the species complexes Anopheles gambiae and Anopheles funestus, to indoor vector control technologies including ITNs and IRS. Important characteristics of the Ifakara experimental huts include: 1) interception traps fitted onto eave spaces and windows, 2) use of eave baffles (panels that direct mosquito movement) to control exit of live mosquitoes through the eave spaces, 3) use of replaceable wall panels and ceilings, which allow safe insecticide disposal and reuse of the huts to test different insecticides in successive periods, 4) the kit format of the huts allowing portability and 5) an improved suite of entomological procedures to maximise data quality. PMID:22347415

  15. Visions of visualization aids: Design philosophy and experimental results

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1990-01-01

    Aids for the visualization of high-dimensional scientific or other data must be designed. Simply casting multidimensional data into a two- or three-dimensional spatial metaphor does not guarantee that the presentation will provide insight or parsimonious description of the phenomena underlying the data. Indeed, the communication of the essential meaning of some multidimensional data may be obscured by presentation in a spatially distributed format. Useful visualization is generally based on pre-existing theoretical beliefs concerning the underlying phenomena which guide selection and formatting of the plotted variables. Two examples from chaotic dynamics are used to illustrate how a visualization may be an aid to insight. Two examples of displays to aid spatial maneuvering are described. The first, a perspective format for a commercial air traffic display, illustrates how geometric distortion may be introduced to insure that an operator can understand a depicted three-dimensional situation. The second, a display for planning small spacecraft maneuvers, illustrates how the complex counterintuitive character of orbital maneuvering may be made more tractable by removing higher-order nonlinear control dynamics, and allowing independent satisfaction of velocity and plume impingement constraints on orbital changes.

  16. Estimating intervention effects across different types of single-subject experimental designs: empirical illustration.

    PubMed

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S Natasha; Van den Noortgate, Wim

    2015-03-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs often focuses on combining simple AB phase designs or multiple-baseline designs. We discuss the estimation of the average intervention effect estimate across different types of single-subject experimental designs using several multilevel meta-analytic models. We illustrate the different models using a reanalysis of a meta-analysis of single-subject experimental designs (Heyvaert, Saenen, Maes, & Onghena, in press). The intervention effect estimates using univariate 3-level models differ from those obtained using a multivariate 3-level model that takes the dependence between effect sizes into account. Because different results are obtained and the multivariate model has multiple advantages, including more information and smaller standard errors, we recommend researchers to use the multivariate multilevel model to meta-analyze studies that utilize different single-subject designs. PMID:24884449

  17. STRONG LENS TIME DELAY CHALLENGE. I. EXPERIMENTAL DESIGN

    SciTech Connect

    Dobler, Gregory; Fassnacht, Christopher D.; Rumbaugh, Nicholas; Treu, Tommaso; Liao, Kai; Marshall, Phil; Hojjati, Alireza; Linder, Eric

    2015-02-01

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ~10³ strongly lensed quasars. In an effort to assess the present capabilities of the community, to accurately measure the time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated data sets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' data sets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of data sets, and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, 2013 December 1. The TDC1 deadline was 2014 July 1. Here we give an overview of the challenge, we introduce a set of metrics that will be used to quantify the goodness of fit, efficiency, precision, and accuracy of the algorithms, and we present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper.

  18. Parametric design methodology for chemical processes using a simulator

    SciTech Connect

    Diwekar, U.M.; Rubin, E.S.

    1994-02-01

    Parameter design is a method popularized by the Japanese quality expert G. Taguchi, for designing products and manufacturing processes that are robust in the face of uncontrollable variations. At the design stage, the goal of parameter design is to identify design settings that make the product performance less sensitive to the effects of manufacturing and environmental variations and deterioration. Because parameter design reduces performance variation by reducing the influence of the sources of variation rather than by controlling them, it is a cost-effective technique for improving quality. A recent study on the application of parameter design methodology for chemical processes reported that the use of Taguchi's method was not justified and a method based on Monte Carlo simulation combined with optimization was shown to be more effective. However, this method is computationally intensive as a large number of samples are necessary to achieve the given accuracy. Additionally, determination of the number of sample runs required is based on experimentation due to a lack of systematic sampling methods. In an attempt to overcome these problems, the use of a stochastic modeling capability combined with an optimizer is presented in this paper. The objective is that of providing an effective means for application of parameter design methodologies to chemical processes using the ASPEN simulator. This implementation not only presents a generalized tool for use by chemical engineers at large but also provides systematic estimates of the number of sample runs required to attain the specified accuracy. The stochastic model employs the technique of Latin hypercube sampling instead of the traditional Monte Carlo technique and hence has a great potential to reduce the required number of samples. The methodology is illustrated via an example problem of designing a chemical process.
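
    As an illustration of the sampling technique referred to above, the following minimal Python sketch draws a Latin hypercube sample over two uncertain process parameters and propagates it through a toy performance function; the parameter names, ranges and model are hypothetical stand-ins for a simulator run, not part of the ASPEN implementation described in the paper.

        import numpy as np

        def latin_hypercube(n_samples, n_vars, seed=None):
            """Latin hypercube sample on the unit hypercube: one stratum per sample
            and per dimension, jittered within the stratum and shuffled per column."""
            rng = np.random.default_rng(seed)
            strata = np.argsort(rng.random((n_samples, n_vars)), axis=0)
            return (strata + rng.random((n_samples, n_vars))) / n_samples

        # Propagate two uncertain parameters through a stand-in performance model.
        u = latin_hypercube(50, 2, seed=1)
        temperature = 300.0 + 100.0 * u[:, 0]   # hypothetical range, K
        feed_ratio = 1.0 + 4.0 * u[:, 1]        # hypothetical range
        performance = np.exp(-((temperature - 350.0) / 60.0) ** 2) * feed_ratio
        print(performance.mean(), performance.std())

    Because each parameter range is stratified, far fewer samples are typically needed for stable mean and variance estimates than with plain Monte Carlo sampling.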

  19. Designing reduced-order linear multivariable controllers using experimentally derived plant data

    NASA Technical Reports Server (NTRS)

    Frazier, W. G.; Irwin, R. D.

    1993-01-01

    An iterative numerical algorithm for simultaneously improving multiple performance and stability robustness criteria for multivariable feedback systems is developed. The unsatisfied design criteria are improved by updating the free parameters of an initial, stabilizing controller's state-space matrices. Analytical expressions for the gradients of the design criteria are employed to determine a parameter correction that improves all of the feasible, unsatisfied design criteria at each iteration. A controller design is performed using the algorithm with experimentally derived data from a large space structure test facility. Experimental results of the controller's performance at the facility are presented.
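
    A rough illustration of this kind of iterative improvement, under stated simplifications: the sketch below uses numerical rather than analytical gradients and simply drives down the total amount by which toy quadratic criteria exceed their targets, whereas the paper constructs a correction that improves all feasible, unsatisfied criteria simultaneously. The criteria, targets and step size are invented for the example.

        import numpy as np

        def total_violation(criteria, targets, x):
            """Sum of the amounts by which each design criterion exceeds its target."""
            return sum(max(f(x) - t, 0.0) for f, t in zip(criteria, targets))

        def improve_controller(criteria, targets, x, step=0.05, iters=500, h=1e-6):
            """Follow a numerical descent direction of the total violation until
            every criterion meets its target (or the iteration budget runs out)."""
            x = np.array(x, float)
            for _ in range(iters):
                v0 = total_violation(criteria, targets, x)
                if v0 == 0.0:
                    break
                grad = np.array([(total_violation(criteria, targets, x + h * e) - v0) / h
                                 for e in np.eye(len(x))])
                x -= step * grad / (np.linalg.norm(grad) + 1e-12)
            return x

        # Toy quadratic stand-ins for a performance and a robustness criterion
        # of a two-parameter controller.
        perf = lambda p: (p[0] - 1.0) ** 2 + 0.1 * p[1] ** 2
        robust = lambda p: (p[1] - 0.5) ** 2 + 0.05 * p[0] ** 2
        print(improve_controller([perf, robust], [0.2, 0.2], [0.0, 0.0]))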

  20. Optimization of experimental designs and model parameters exemplified by sedimentation in salt marshes

    NASA Astrophysics Data System (ADS)

    Reimer, J.; Schürch, M.; Slawig, T.

    2014-09-01

    The weighted least squares estimator for model parameters was presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called locally optimal experimental design, was described together with a lesser-known approach which takes into account a potential nonlinearity of the model parameters. These two approaches were combined with two different methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and handling were described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two models of different complexity for sediment concentration in seawater served as application examples. The advantages and disadvantages of the different approaches were compared, and an evaluation of the approaches was performed.
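
    For reference, the weighted least squares estimator mentioned above has the closed form beta = (X'WX)^(-1) X'Wy with W = diag(w); the short Python sketch below implements it for a hypothetical two-parameter sediment-concentration model (the depths, measurements and weights are made up for illustration).

        import numpy as np

        def weighted_least_squares(X, y, w):
            """Estimate beta minimizing sum_i w_i * (y_i - x_i @ beta)^2."""
            Xw = X * w[:, None]
            beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
            cov = np.linalg.inv(X.T @ Xw)   # proportional to the asymptotic covariance
            return beta, cov

        # Hypothetical linear model y = a + b * depth with down-weighted noisy points.
        depth = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
        y = np.array([2.1, 3.0, 3.8, 5.1, 5.9])
        w = np.array([1.0, 1.0, 0.5, 1.0, 0.25])
        X = np.column_stack([np.ones_like(depth), depth])
        beta, cov = weighted_least_squares(X, y, w)
        print(beta)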

  1. An Approach to Maximize Weld Penetration During TIG Welding of P91 Steel Plates by Utilizing Image Processing and Taguchi Orthogonal Array

    NASA Astrophysics Data System (ADS)

    Singh, Akhilesh Kumar; Debnath, Tapas; Dey, Vidyut; Rai, Ram Naresh

    2016-06-01

    P91 is a modified 9Cr-1Mo steel. Fabricated structures and components of P91 find wide application in the power and chemical industries owing to excellent properties such as high-temperature stress corrosion resistance and low susceptibility to thermal fatigue at high operating temperatures. The weld quality and surface finish of fabricated P91 structures are very good when welded by Tungsten Inert Gas (TIG) welding. However, the process has its limitation regarding weld penetration. The success of a welding process lies in fabricating with such a combination of parameters that gives maximum weld penetration and minimum weld width. To investigate the effect of the autogenous TIG welding parameters on weld penetration and weld width, bead-on-plate welds were carried out on P91 plates of 6 mm thickness in accordance with a Taguchi L9 design. Welding current, welding speed and gas flow rate were the three control variables in the investigation. After autogenous TIG welding, the dimensions of the weld width, weld penetration and weld area were successfully measured by an image analysis technique developed for the study. The maximum error for the measured dimensions of the weld width, penetration and area with the developed image analysis technique was only 2% compared to the measurements of the Leica-Q-Win-V3 software installed on an optical microscope. The measurements with the developed software, unlike the measurements under a microscope, required minimal human intervention. An Analysis of Variance (ANOVA) confirms the significance of the selected parameters. Thereafter, Taguchi's method was successfully used to trade off between maximum penetration and minimum weld width while keeping the weld area at a minimum.
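
    The standard L9 orthogonal array and a larger-the-better signal-to-noise analysis of the kind used in such studies can be reproduced in a few lines of Python; the penetration values below are placeholders, not the measurements reported in the paper.

        import numpy as np

        # Standard Taguchi L9 (3^4) array; the first three columns are assigned here to
        # welding current, welding speed and gas flow rate (levels coded 1-3).
        L9 = np.array([
            [1, 1, 1], [1, 2, 2], [1, 3, 3],
            [2, 1, 2], [2, 2, 3], [2, 3, 1],
            [3, 1, 3], [3, 2, 1], [3, 3, 2],
        ])

        # Hypothetical penetration depths (mm) for the nine runs.
        penetration = np.array([2.1, 2.6, 3.0, 2.8, 3.4, 2.5, 3.9, 3.1, 3.3])

        # Larger-the-better signal-to-noise ratio: S/N = -10 * log10(mean(1 / y^2)).
        sn = -10.0 * np.log10(1.0 / penetration ** 2)

        # Main effect of each factor = mean S/N at each of its three levels.
        for j, name in enumerate(["current", "speed", "gas flow"]):
            means = [sn[L9[:, j] == level].mean() for level in (1, 2, 3)]
            print(name, np.round(means, 2))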

  2. Neuroimaging in aphasia treatment research: Issues of experimental design for relating cognitive to neural changes

    PubMed Central

    Rapp, Brenda; Caplan, David; Edwards, Susan; Visch-Brink, Evy; Thompson, Cynthia K.

    2012-01-01

    The design of functional neuroimaging studies investigating the neural changes that support treatment-based recovery of targeted language functions in acquired aphasia faces a number of challenges. In this paper, we discuss these challenges and focus on experimental tasks and experimental designs that can be used to address the challenges, facilitate the interpretation of results and promote integration of findings across studies. PMID:22974976

  3. Experimental concept and design of DarkLight, a search for a heavy photon

    SciTech Connect

    Cowan, Ray F.

    2013-11-01

    This talk gives an overview of the DarkLight experimental concept: a search for a heavy photon A′ in the 10-90 MeV/c 2 mass range. After briefly describing the theoretical motivation, the talk focuses on the experimental concept and design. Topics include operation using a half-megawatt, 100 MeV electron beam at the Jefferson Lab FEL, detector design and performance, and expected backgrounds estimated from beam tests and Monte Carlo simulations.

  4. Scaffolded Instruction Improves Student Understanding of the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    D'Costa, Allison R.; Schlueter, Mark A.

    2013-01-01

    Implementation of a guided-inquiry lab in introductory biology classes, along with scaffolded instruction, improved students' understanding of the scientific method, their ability to design an experiment, and their identification of experimental variables. Pre- and postassessments from experimental versus control sections over three semesters…

  5. Teaching simple experimental design to undergraduates: do your students understand the basics?

    PubMed

    Hiebert, Sara M

    2007-03-01

    This article provides instructors with guidelines for teaching simple experimental design for the comparison of two treatment groups. Two designs with specific examples are discussed along with common misconceptions that undergraduate students typically bring to the experiment design process. Features of experiment design that maximize power and minimize the effects of interindividual variation, thus allowing reduction of sample sizes, are described. Classroom implementation that emphasizes student-centered learning is suggested, and thought questions, designed to help students discover and name the basic principles of simple experiment design for themselves, are included with an answer key. PMID:17327588

  6. Design studies for the transmission simulator method of experimental dynamic substructuring.

    SciTech Connect

    Mayes, Randall Lee; Arviso, Michael

    2010-05-01

    In recent years, a successful method for generating experimental dynamic substructures has been developed using an instrumented fixture, the transmission simulator. The transmission simulator method solves many of the problems associated with experimental substructuring. These solutions effectively address: (1) rotation and moment estimation at connection points; (2) providing substructure Ritz vectors that adequately span the connection motion space; and (3) adequately addressing multiple and continuous attachment locations. However, the transmission simulator method may fail if the transmission simulator is poorly designed. Four areas of the design addressed here are: (1) designating response sensor locations; (2) designating force input locations; (3) physical design of the transmission simulator; and (4) modal test design. In addition to the transmission simulator design investigations, a review of the theory with an example problem is presented.

  7. Applying the Taguchi method to optimize sumatriptan succinate niosomes as drug carriers for skin delivery.

    PubMed

    González-Rodríguez, Maria Luisa; Mouram, Imane; Cózar-Bernal, Ma Jose; Villasmil, Sheila; Rabasco, Antonio M

    2012-10-01

    Niosomes formulated from different nonionic surfactants (Span® 60, Brij® 72, Span® 80, or Eumulgin® B 2) with cholesterol (CH) molar ratios of 3:1 or 4:1 with respect to surfactant were prepared with different sumatriptan amounts (10 and 15 mg) and stearylamine (SA). The thin-film hydration method was employed to produce the vesicles, and the time allowed to hydrate the lipid film (1 or 24 h) was introduced as a variable. These factors were selected as variables and their levels were introduced into two L18 orthogonal arrays. The aim was to optimize the manufacturing conditions by applying the Taguchi methodology. Response variables were vesicle size, zeta potential (Z), and drug entrapment. From the Taguchi analysis, drug concentration and hydration time were the parameters with the greatest influence on size, with the niosomes made with Span® 80 being the smallest vesicles. The presence of SA in the vesicles had a relevant influence on Z values. All the factors except the surfactant-CH ratio had an influence on the encapsulation. Formulations were optimized by applying the marginal means methodology. The results showed a good correlation between the mean and signal-to-noise ratio parameters, indicating the feasibility of the robust methodology for optimizing this formulation. Also, the extrusion process exerted a positive influence on drug entrapment. PMID:22806266

  8. Optimization of fast disintegration tablets using pullulan as diluent by central composite experimental design.

    PubMed

    Patel, Dipil; Chauhan, Musharraf; Patel, Ravi; Patel, Jayvadan

    2012-03-01

    The objective of this work was to apply central composite experimental design to investigate the main and interaction effects of formulation parameters in optimizing a novel fast disintegration tablet formulation using pullulan as diluent. A face-centered central composite experimental design was employed to optimize the fast disintegration tablet formulation. The variables studied were the concentrations of diluent (pullulan, X(1)), superdisintegrant (sodium starch glycolate, X(2)), and direct compression aid (spray dried lactose, X(3)). Tablets were characterized for weight variation, thickness, disintegration time (Y(1)) and hardness (Y(2)). Good correlation between the predicted values and the experimental data of the optimized formulation confirmed the suitability of the methodology for optimizing fast disintegrating tablets using pullulan as a diluent. PMID:23066220
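
    For readers unfamiliar with the layout, a face-centred central composite design for three factors consists of the 2^3 factorial points, six axial points on the face centres (alpha = 1) and replicated centre points; the sketch below generates the coded design in Python and maps it to hypothetical factor ranges that are not taken from the paper.

        import itertools
        import numpy as np

        def face_centered_ccd(n_factors, n_center=3):
            """Coded points of a face-centred central composite design (alpha = 1)."""
            factorial = np.array(list(itertools.product([-1, 1], repeat=n_factors)), float)
            axial = np.vstack([sign * row for row in np.eye(n_factors) for sign in (-1, 1)])
            center = np.zeros((n_center, n_factors))
            return np.vstack([factorial, axial, center])

        design = face_centered_ccd(3)
        print(design.shape)   # 8 factorial + 6 axial + 3 center = 17 runs

        # Map coded levels to assumed actual ranges for X1, X2, X3 (illustrative only).
        lows = np.array([20.0, 2.0, 10.0])
        highs = np.array([40.0, 8.0, 30.0])
        actual = lows + (design + 1.0) / 2.0 * (highs - lows)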

  9. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties

    ERIC Educational Resources Information Center

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological…

  10. Using an Experimental Design--Just the Thing for That Rainy Day.

    ERIC Educational Resources Information Center

    Donlan, Dan

    1986-01-01

    Presents "fail-safe" lessons for emergencies and substitutes. Describes an experimental design with six steps, designed to help teachers teach students some things about themselves, such as whether boys are better spellers than girls. Offers other examples from the classroom. (EL)

  11. Using a Discussion about Scientific Controversy to Teach Central Concepts in Experimental Design

    ERIC Educational Resources Information Center

    Bennett, Kimberley Ann

    2015-01-01

    Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper that reviews the measles mumps…

  12. Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design

    ERIC Educational Resources Information Center

    Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.

    2010-01-01

    Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of…

  13. Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment

    ERIC Educational Resources Information Center

    Girault, Isabelle; d'Ham, Cédric

    2014-01-01

    When solving a scientific problem through experimentation, students may have the responsibility to design the experiment. When students work in a conventional condition, with paper and pencil, the designed procedures stay at a very general level. There is a need for additional scaffolds to help the students perform this complex task. We propose a…

  14. A Sino-Finnish Initiative for Experimental Teaching Practices Using the Design Factory Pedagogical Platform

    ERIC Educational Resources Information Center

    Björklund, Tua A.; Nordström, Katrina M.; Clavert, Maria

    2013-01-01

    The paper presents a Sino-Finnish teaching initiative, including the design and experiences of a series of pedagogical workshops implemented at the Aalto-Tongji Design Factory (DF), Shanghai, China, and the experimentation plans collected from the 54 attending professors and teachers. The workshops aimed to encourage trying out interdisciplinary…

  15. Experimental designs and their recent advances in set-up, data interpretation, and analytical applications.

    PubMed

    Dejaegher, Bieke; Heyden, Yvan Vander

    2011-09-10

    In this review, the set-up and data interpretation of experimental designs (screening, response surface, and mixture designs) are discussed. Advanced set-ups considered are the application of D-optimal and supersaturated designs as screening designs. Advanced data interpretation approaches discussed are an adaptation of the algorithm of Dong and the estimation of factor effects from supersaturated design results. Finally, some analytical applications in separation science, on the one hand, and formulation-, product-, or process optimization, on the other, are discussed. PMID:21632194

  16. Optimization of experimental design in fMRI: a general framework using a genetic algorithm.

    PubMed

    Wager, Tor D; Nichols, Thomas E

    2003-02-01

    This article describes a method for selecting design parameters and a particular sequence of events in fMRI so as to maximize statistical power and psychological validity. Our approach uses a genetic algorithm (GA), a class of flexible search algorithms that optimize designs with respect to single or multiple measures of fitness. Two strengths of the GA framework are that (1) it operates with any sort of model, allowing for very specific parameterization of experimental conditions, including nonstandard trial types and experimentally observed scanner autocorrelation, and (2) it is flexible with respect to fitness criteria, allowing optimization over known or novel fitness measures. We describe how genetic algorithms may be applied to experimental design for fMRI, and we use the framework to explore the space of possible fMRI design parameters, with the goal of providing information about optimal design choices for several types of designs. In our simulations, we considered three fitness measures: contrast estimation efficiency, hemodynamic response estimation efficiency, and design counterbalancing. Although there are inherent trade-offs between these three fitness measures, GA optimization can produce designs that outperform random designs on all three criteria simultaneously. PMID:12595184
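
    A minimal genetic algorithm of this general flavour, written in Python: it evolves a two-event-type sequence toward balanced transition counts as a toy counterbalancing fitness measure. The fitness function, population settings and mutation rate are invented for illustration and are far simpler than the efficiency and counterbalancing measures used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_events, n_types, pop_size, n_gen = 60, 2, 40, 200

        def fitness(seq):
            """Toy counterbalancing score: penalise unequal counts of the four
            possible event-type transitions (higher is better)."""
            counts = np.zeros((n_types, n_types))
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1
            return -counts.std()

        pop = rng.integers(0, n_types, size=(pop_size, n_events))
        for _ in range(n_gen):
            scores = np.array([fitness(s) for s in pop])
            keep = pop[np.argsort(scores)[-pop_size // 2:]]             # selection
            cuts = rng.integers(1, n_events, size=pop_size // 2)
            children = np.array([np.concatenate([keep[i % len(keep)][:c],
                                                 keep[(i + 1) % len(keep)][c:]])
                                 for i, c in enumerate(cuts)])          # one-point crossover
            mutate = rng.random(children.shape) < 0.02                  # point mutation
            children[mutate] = rng.integers(0, n_types, mutate.sum())
            pop = np.vstack([keep, children])

        best = pop[np.argmax([fitness(s) for s in pop])]
        print(fitness(best))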

  17. Process optimization for Ni(II) removal from wastewater by calcined oyster shell powders using Taguchi method.

    PubMed

    Yen, Hsing Yuan; Li, Jun Yan

    2015-09-15

    Waste oyster shells cause great environmental concerns and nickel is a harmful heavy metal. Therefore, we applied the Taguchi method to take care of both issues by optimizing the controllable factors for Ni(II) removal by calcined oyster shell powders (OSP), including the pH (P), OSP calcined temperature (T), Ni(II) concentration (C), OSP dose (D), and contact time (t). The results show that their percentage contribution in descending order is P (64.3%) > T (18.9%) > C (8.8%) > D (5.1%) > t (1.7%). The optimum condition is pH of 10 and OSP calcined temperature of 900 °C. Under the optimum condition, the Ni(II) can be removed almost completely; the higher the pH, the more the precipitation; the higher the calcined temperature, the more the adsorption. The latter is due to the large number of porosities created at the calcination temperature of 900 °C. The porosities generate a large amount of cavities which significantly increase the surface area for adsorption. A multiple linear regression equation obtained to correlate Ni(II) removal with the controllable factors is: Ni(II) removal(%) = 10.35 × P + 0.045 × T - 1.29 × C + 19.33 × D + 0.09 × t - 59.83. This equation predicts Ni(II) removal well and can be used for estimating Ni(II) removal during the design stage of Ni(II) removal by calcined OSP. Thus, OSP can be used to remove nickel effectively and the formula for removal prediction is developed for practical applications. PMID:26203873
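
    The reported regression equation can be applied directly; the small Python helper below evaluates it. The pH and calcination temperature are set near the reported optimum, while the remaining inputs are placeholder values because the abstract does not state their optimum levels or units explicitly.

        def ni_removal_percent(pH, calcination_temp_C, ni_conc, osp_dose, contact_time):
            """Multiple linear regression for Ni(II) removal reported in the abstract."""
            return (10.35 * pH + 0.045 * calcination_temp_C - 1.29 * ni_conc
                    + 19.33 * osp_dose + 0.09 * contact_time - 59.83)

        # Illustrative evaluation: pH 10 and 900 degC (reported optimum), other inputs assumed.
        print(ni_removal_percent(10, 900, 50, 2, 60))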

  18. Unique considerations in the design and experimental evaluation of tailored wings with elastically produced chordwise camber

    NASA Technical Reports Server (NTRS)

    Rehfield, Lawrence W.; Zischka, Peter J.; Fentress, Michael L.; Chang, Stephen

    1992-01-01

    Some of the unique considerations that are associated with the design and experimental evaluation of chordwise deformable wing structures are addressed. Since chordwise elastic camber deformations are desired and must be free to develop, traditional rib concepts and experimental methodology cannot be used. New rib design concepts are presented and discussed. An experimental methodology based upon the use of a flexible sling support and load application system has been created and utilized to evaluate a model box beam experimentally. Experimental data correlate extremely well with design analysis predictions based upon a beam model for the global properties of camber compliance and spanwise bending compliance. Local strain measurements exhibit trends in agreement with intuition and theory but depart slightly from theoretical perfection based upon beam-like behavior alone. It is conjectured that some additional refinement of experimental technique is needed to explain or eliminate these (minor) departures from asymmetric behavior of upper and lower box cover strains. Overall, a solid basis for the design of box structures based upon the bending method of elastic camber production has been confirmed by the experiments.

  19. Optimal experimental designs for dose-response studies with continuous endpoints.

    PubMed

    Holland-Letz, Tim; Kopp-Schneider, Annette

    2015-11-01

    In most areas of clinical and preclinical research, the required sample size determines the costs and effort for any project, and thus, optimizing sample size is of primary importance. An experimental design of dose-response studies is determined by the number and choice of dose levels as well as the allocation of sample size to each level. The experimental design of toxicological studies tends to be motivated by convention. Statistical optimal design theory, however, allows the setting of experimental conditions (dose levels, measurement times, etc.) in a way which minimizes the number of required measurements and subjects to obtain the desired precision of the results. While the general theory is well established, the mathematical complexity of the problem so far prevents widespread use of these techniques in practical studies. The paper explains the concepts of statistical optimal design theory with a minimum of mathematical terminology and uses these concepts to generate concrete usable D-optimal experimental designs for dose-response studies on the basis of three common dose-response functions in toxicology: log-logistic, log-normal and Weibull functions with four parameters each. The resulting designs usually require control plus only three dose levels and are quite intuitively plausible. The optimal designs are compared to traditional designs such as the typical setup of cytotoxicity studies for 96-well plates. As the optimal design depends on prior estimates of the dose-response function parameters, it is shown what loss of efficiency occurs if the parameters for design determination are misspecified, and how Bayes optimal designs can improve the situation. PMID:25155192
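
    The D-optimality criterion behind such designs can be evaluated with a few lines of Python: for a locally D-optimal design one fixes a prior parameter guess, builds the Fisher information of a candidate set of dose levels and compares log-determinants. The four-parameter log-logistic curve, the parameter guess and both dose sets below are hypothetical, not the designs derived in the paper.

        import numpy as np

        def loglogistic4(x, theta):
            """Four-parameter log-logistic dose-response curve."""
            b, c, d, e = theta          # slope, lower limit, upper limit, ED50
            return c + (d - c) / (1.0 + (x / e) ** b)

        def log_det_information(doses, weights, theta, h=1e-6):
            """log det of the (unit-variance) Fisher information of a weighted design."""
            J = np.empty((len(doses), len(theta)))
            for j in range(len(theta)):
                tp = np.array(theta, float)
                tp[j] += h
                J[:, j] = (loglogistic4(doses, tp) - loglogistic4(doses, theta)) / h
            M = (J * weights[:, None]).T @ J
            return np.linalg.slogdet(M)[1]

        theta_guess = np.array([1.5, 0.0, 1.0, 10.0])   # prior guess needed for local optimality
        conventional = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0])   # e.g. a plate layout
        reduced = np.array([0.0, 4.0, 10.0, 25.0])                    # control plus three doses
        for doses in (conventional, reduced):
            w = np.full(len(doses), 1.0 / len(doses))
            print(len(doses), "dose levels:", round(log_det_information(doses, w, theta_guess), 3))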

  20. Determination of hydroxy acids in cosmetics by chemometric experimental design and cyclodextrin-modified capillary electrophoresis.

    PubMed

    Liu, Pei-Yu; Lin, Yi-Hui; Feng, Chia Hsien; Chen, Yen-Ling

    2012-10-01

    A CD-modified CE method was established for quantitative determination of seven hydroxy acids in cosmetic products. This method involved chemometric experimental design aspects, including fractional factorial design and central composite design. Chemometric experimental design was used to enhance the method's separation capability and to explore the interactions between parameters. Compared to the traditional investigation that uses multiple parameters, the method that used chemometric experimental design was less time-consuming and lower in cost. In this study, the influences of three experimental variables (phosphate concentration, surfactant concentration, and methanol percentage) on the experimental response were investigated by applying a chromatographic resolution statistic function. The optimized conditions were as follows: a running buffer of 150 mM phosphate solution (pH 7) containing 0.5 mM CTAB, 3 mM γ-CD, and 25% methanol; 20 s sample injection at 0.5 psi; a separation voltage of -15 kV; temperature was set at 25°C; and UV detection at 200 nm. The seven hydroxy acids were well separated in less than 10 min. The LOD (S/N = 3) was 625 nM for both salicylic acid and mandelic acid. The correlation coefficient of the regression curve was greater than 0.998. The RSD and relative error values were all less than 9.21%. After optimization and validation, this simple and rapid analysis method was considered to be established and was successfully applied to several commercial cosmetic products. PMID:22996609

  1. Adaptive combinatorial design to explore large experimental spaces: approach and validation.

    PubMed

    Lejay, L V; Shasha, D E; Palenchar, P M; Kouranov, A Y; Cruikshank, A A; Chou, M F; Coruzzi, G M

    2004-12-01

    Systems biology requires mathematical tools not only to analyse large genomic datasets, but also to explore large experimental spaces in a systematic yet economical way. We demonstrate that two-factor combinatorial design (CD), shown to be useful in software testing, can be used to design a small set of experiments that would allow biologists to explore larger experimental spaces. Further, the results of an initial set of experiments can be used to seed further 'Adaptive' CD experimental designs. As a proof of principle, we demonstrate the usefulness of this Adaptive CD approach by analysing data from the effects of six binary inputs on the regulation of genes in the N-assimilation pathway of Arabidopsis. This CD approach identified the more important regulatory signals previously discovered by traditional experiments using far fewer experiments, and also identified examples of input interactions previously unknown. Tests using simulated data show that Adaptive CD suffers from fewer false positives than traditional experimental designs in determining decisive inputs, and succeeds far more often than traditional or random experimental designs in determining when genes are regulated by input interactions. We conclude that Adaptive CD offers an economical framework for discovering dominant inputs and interactions that affect different aspects of genomic outputs and organismal responses. PMID:17051692
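
    The two-factor coverage idea can be illustrated with a simple greedy covering construction in Python for six binary inputs; this is a generic pairwise covering-array builder rather than the Adaptive CD algorithm of the paper, and it typically needs only about 6-8 runs to cover all 60 two-factor combinations, versus 64 runs for the full factorial.

        import itertools

        def pairwise_cover(n_factors):
            """Greedily choose binary test vectors until every setting of every
            pair of factors has appeared at least once."""
            pairs = list(itertools.combinations(range(n_factors), 2))
            uncovered = {(i, j, a, b) for i, j in pairs for a in (0, 1) for b in (0, 1)}
            candidates = list(itertools.product((0, 1), repeat=n_factors))
            tests = []
            while uncovered:
                best = max(candidates,
                           key=lambda t: sum((i, j, t[i], t[j]) in uncovered for i, j in pairs))
                tests.append(best)
                uncovered -= {(i, j, best[i], best[j]) for i, j in pairs}
            return tests

        experiments = pairwise_cover(6)
        print(len(experiments), "experiments instead of", 2 ** 6)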

  2. Experimental validation of optimization-based integrated controls-structures design methodology for flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Joshi, Suresh M.; Walz, Joseph E.

    1993-01-01

    An optimization-based integrated design approach for flexible space structures is experimentally validated using three types of dissipative controllers, including static, dynamic, and LQG dissipative controllers. The nominal phase-0 of the controls structure interaction evolutional model (CEM) structure is redesigned to minimize the average control power required to maintain specified root-mean-square line-of-sight pointing error under persistent disturbances. The redesign structure, phase-1 CEM, was assembled and tested against phase-0 CEM. It is analytically and experimentally demonstrated that integrated controls-structures design is substantially superior to that obtained through the traditional sequential approach. The capability of a software design tool based on an automated design procedure in a unified environment for structural and control designs is demonstrated.

  3. Analytical and experimental performance of optimal controller designs for a supersonic inlet

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.

    1973-01-01

    The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.
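
    For a flavour of the optimal control machinery involved, the sketch below computes a continuous-time LQR state-feedback gain for a toy second-order plant using SciPy. This is only the deterministic regulator piece, not the stochastic formulation with expected unstart frequency as the performance index used in the study, and the plant and weighting matrices are invented.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Toy second-order plant standing in for linearized inlet shock-position dynamics.
        A = np.array([[0.0, 1.0], [-4.0, -0.8]])
        B = np.array([[0.0], [1.0]])
        Q = np.diag([10.0, 1.0])      # penalize state excursions
        R = np.array([[0.1]])         # penalize actuator effort

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)   # optimal state feedback u = -K x
        print(K)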

  4. Development of the Neuron Assessment for Measuring Biology Students' Use of Experimental Design Concepts and Representations.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy J

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new "experimentation assessments," 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines. PMID:27146159

  5. Development of the Neuron Assessment for Measuring Biology Students’ Use of Experimental Design Concepts and Representations

    PubMed Central

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy J.

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students’ competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new “experimentation assessments,” 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines. PMID:27146159

  6. Experimental designs for evaluation of genetic variability and selection of ancient grapevine varieties: a simulation study.

    PubMed

    Gonçalves, E; St Aubyn, A; Martins, A

    2010-06-01

    Classical methodologies for grapevine selection used in the vine-growing world are generally based on comparisons among a small number of clones. This does not take advantage of the entire genetic variability within ancient varieties, and therefore limits selection challenges. Using the general principles of plant breeding and of quantitative genetics, we propose new breeding strategies, focussed on conservation and quantification of genetic variability by performing a cycle of mass genotypic selection prior to clonal selection. To exploit a sufficiently large amount of genetic variability, initial selection trials must be generally very large. The use of experimental designs adequate for those field trials has been intensively recommended for numerous species. However, their use in initial trials of grapevines has not been studied. With the aim of identifying the most suitable experimental designs for quantification of genetic variability and selection of ancient varieties, a study was carried out to assess through simulation the comparative efficiency of various experimental designs (randomized complete block design, alpha design and row-column (RC) design). The results indicated a greater efficiency for alpha and RC designs, enabling more precise estimates of genotypic variance, greater precision in the prediction of genetic gain and consequently greater efficiency in genotypic mass selection. PMID:19904297

  7. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1992-01-01

    Off design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static to inlet total pressure ratios were calculated by using a quasi-three dimensional inviscid code. The off design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  8. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1991-01-01

    Off design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static to inlet total pressure ratios were calculated by using a quasi-three dimensional inviscid code. The off design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  9. Experimental Design and Data collection of a finishing end milling operation of AISI 1045 steel

    PubMed Central

    Dias Lopes, Luiz Gustavo; de Brito, Tarcísio Gonçalves; de Paiva, Anderson Paulo; Peruchi, Rogério Santana; Balestrassi, Pedro Paulo

    2016-01-01

    In this Data in Brief paper, a central composite experimental design was planned to collect surface roughness data from a finishing end milling operation on AISI 1045 steel. The surface roughness values are expected to vary under the action of several factors. The main objective here was to present a multivariate experimental design and data collection, including control factors, noise factors, and two correlated responses, capable of achieving a reduced surface roughness with minimal variance. Lopes et al. (2016) [1], for example, explore the influence of noise factors on the process performance. PMID:26909374

  10. The effectiveness of family planning programs evaluated with true experimental designs.

    PubMed Central

    Bauman, K E

    1997-01-01

    OBJECTIVES: This paper describes the magnitude of effects for family planning programs evaluated with true experimental designs. METHODS: Studies that used true experimental designs to evaluate family planning programs were identified and their results subjected to meta-analysis. RESULTS: For the 14 studies with the information needed to calculate effect size, the Pearson r between program and effect variables ranged from -.08 to .09 and averaged .08. CONCLUSIONS: The programs evaluated in the studies considered have had, on average, smaller effects than many would assume and desire. PMID:9146451

  11. Experimental and Numerical Investigations on the Ballistic Performance of Polymer Matrix Composites Used in Armor Design

    NASA Astrophysics Data System (ADS)

    Colakoglu, M.; Soykasap, O.; Özek, T.

    2007-01-01

    Ballistic properties of two different polymer matrix composites used for military and non-military purposes are investigated in this study. Backside deformation and penetration speed are determined experimentally and numerically for Kevlar 29/Polyvinyl Butyral and polyethylene fiber composites, because designing armors for penetration alone is not enough for protection. After experimental ballistic tests, a model is constructed using the finite element program Abaqus, and the backside deformation and penetration speed are determined numerically. It is found that the experimental and numerical results are in agreement and that the polyethylene fiber composite has a much better ballistic limit, backside deformation, and penetration speed than the Kevlar 29/Polyvinyl Butyral composite when areal densities are taken into account.

  12. Design of Experimental Data Publishing Software for Neutral Beam Injector on EAST

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Zhang, Xiaodan; Wu, Deyun

    2015-02-01

    Neutral Beam Injection (NBI) is one of the most effective means for plasma heating. Experimental Data Publishing Software (EDPS) is developed to publish experimental data so that the NBI system can be monitored remotely. In this paper, the architecture and implementation of EDPS, including the design of the communication module and the web page display module, are presented. EDPS is developed based on the Browser/Server (B/S) model and works under the Linux operating system. Using the data source and communication mechanism of the NBI Control System (NBICS), EDPS publishes experimental data on the Internet.

  13. Modeling of retardance in ferrofluid with Taguchi-based multiple regression analysis

    NASA Astrophysics Data System (ADS)

    Lin, Jing-Fung; Wu, Jyh-Shyang; Sheu, Jer-Jia

    2015-03-01

    Citric acid (CA) coated Fe3O4 ferrofluids are prepared by a co-precipitation method, and their magneto-optical retardance is measured with a Stokes polarimeter. Optimization and multiple regression of retardance in ferrofluids are executed by combining the Taguchi method and Excel. From the nine tests on four parameters (pH of the suspension, molar ratio of CA to Fe3O4, volume of CA, and coating temperature), the order of factor influence and the optimal parameter combination are found. Multiple regression analysis and an F-test on the significance of the regression equation are performed. The model F value is much larger than the critical F value, with a significance level P < 0.0001. It can therefore be concluded that the regression model has statistically significant predictive ability. Substituting the optimal parameter combination into the equation gives a retardance of 32.703°, 11.4% higher than the highest value observed in the tests.
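
    The overall F-test on a fitted multiple regression of the kind described can be reproduced as follows; the coded factor settings and retardance values below are placeholders rather than the paper's data, so the resulting F and P values are only illustrative.

        import numpy as np
        from scipy import stats

        # Hypothetical coded settings (an L9-style array for four factors) and responses.
        X = np.array([[-1, -1, -1, -1], [-1, 0, 0, 0], [-1, 1, 1, 1],
                      [ 0, -1, 0, 1], [ 0, 0, 1, -1], [ 0, 1, -1, 0],
                      [ 1, -1, 1, 0], [ 1, 0, -1, 1], [ 1, 1, 0, -1]], float)
        y = np.array([24.1, 26.3, 28.9, 25.5, 29.8, 27.0, 30.2, 28.4, 31.1])

        A = np.column_stack([np.ones(len(y)), X])          # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        fitted = A @ beta
        p, n = X.shape[1], len(y)
        ss_reg = np.sum((fitted - y.mean()) ** 2)
        ss_res = np.sum((y - fitted) ** 2)
        F = (ss_reg / p) / (ss_res / (n - p - 1))
        print("F =", round(F, 2), " P =", stats.f.sf(F, p, n - p - 1))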

  14. Using R in experimental design with BIBD: An application in health sciences

    NASA Astrophysics Data System (ADS)

    Oliveira, Teresa A.; Francisco, Carla; Oliveira, Amílcar; Ferreira, Agostinho

    2016-06-01

    Considering the implementation of an experimental design in any field, the experimenter must pay particular attention to, and look for the best strategies in, the following steps: planning the design selection, conducting the experiments, collecting the observed data, and proceeding to the analysis and interpretation of results. The focus is on providing both a deep understanding of the problem under research and a powerful experimental process at a reduced cost. Mainly because it allows variation sources to be separated, the importance of Experimental Design in Health Sciences has long been recognized. Particular attention has been devoted to Block Designs, and more precisely to Balanced Incomplete Block Designs; in this case the relevance stems from the fact that these designs allow simultaneous testing of a number of treatments larger than the block size. Our example refers to a possible study of inter-rater reliability in Parkinson's disease, using the UPDRS (Unified Parkinson's Disease Rating Scale) to test whether there are significant differences between the specialists who evaluate the patients' performances. Statistical studies on this disease are described, for example, in Richards et al (1994), where the authors investigate the inter-rater reliability of the Unified Parkinson's Disease Rating Scale motor examination. We consider a simulation of a practical situation in which the patients were observed by different specialists and the UPDRS was used to assess the impact of Parkinson's disease on the patients. Assigning treatments to the subjects following a particular BIBD(9,24,8,3,2) structure, we illustrate that BIB Designs can be used as a powerful tool to solve emerging problems in this area. Since a structure with repeated blocks allows some block contrasts to have minimum variance, see Oliveira et al. (2006), the design with cardinality 12 was selected for the example. R software was used for computations.
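
    As a quick check of the design cited above, the necessary conditions for a balanced incomplete block design with parameters (v, b, r, k, lambda) = (9, 24, 8, 3, 2) can be verified in a few lines; the sketch is in Python, whereas the computations in the paper were done in R, and the usual parameter ordering v, b, r, k, lambda is assumed.

        def bibd_conditions(v, b, r, k, lam):
            """Necessary conditions for a BIBD(v, b, r, k, lambda)."""
            return {
                "b*k == v*r": b * k == v * r,
                "lambda*(v-1) == r*(k-1)": lam * (v - 1) == r * (k - 1),
                "Fisher inequality b >= v": b >= v,
            }

        print(bibd_conditions(9, 24, 8, 3, 2))   # all three conditions hold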

  15. Design and experimental study of high-speed low-flow-rate centrifugal compressors

    SciTech Connect

    Gui, F.; Reinarts, T.R.; Scaringe, R.P.; Gottschlich, J.M.

    1995-12-31

    This paper describes a design and experimental effort to develop small centrifugal compressors for aircraft air cycle cooling systems and small vapor compression refrigeration systems (20-100 tons). Efficiency improvements of 25% over current designs are desired. Although centrifugal compressors possess excellent performance at high flow rates, low-flow-rate compressors do not have acceptable performance when designed using current approaches. The new compressors must be designed to operate at a high rotating speed to retain efficiency. The emergence of the magnetic bearing provides the possibility of developing such compressors that run at speeds several times higher than those currently prevailing. Several low-flow-rate centrifugal compressors, featuring three-dimensional blades, have been designed, manufactured and tested in this study. An experimental investigation of compressor flow characteristics and efficiency has been conducted to explore a theory for mini-centrifugal compressors. The effects of the overall impeller configuration, number of blades, and rotational speed on the compressor flow curve and efficiency have been studied. Efficiencies as high as 84% were obtained. The experimental results indicate that the current theory can still be used as a guide, but further development for the design of mini-centrifugal compressors is required.

  16. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    SciTech Connect

    Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.; Crum, Jarrod V.

    2015-07-24

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer-layer glasses.

  17. Trade-offs in experimental designs for estimating post-release mortality in containment studies

    USGS Publications Warehouse

    Rogers, Mark W.; Barbour, Andrew B; Wilson, Kyle L

    2014-01-01

    Estimates of post-release mortality (PRM) facilitate accounting for unintended deaths from fishery activities and contribute to development of fishery regulations and harvest quotas. The most popular method for estimating PRM employs containers for comparing control and treatment fish, yet guidance for experimental design of PRM studies with containers is lacking. We used simulations to evaluate trade-offs in the number of containers (replicates) employed versus the number of fish-per container when estimating tagging mortality. We also investigated effects of control fish survival and how among container variation in survival affects the ability to detect additive mortality. Simulations revealed that high experimental effort was required when: (1) additive treatment mortality was small, (2) control fish mortality was non-negligible, and (3) among container variability in control fish mortality exceeded 10% of the mean. We provided programming code to allow investigators to compare alternative designs for their individual scenarios and expose trade-offs among experimental design options. Results from our simulations and simulation code will help investigators develop efficient PRM experimental designs for precise mortality assessment.
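
    The kind of trade-off the authors explore can be sketched quickly: the Python code below approximates the power to detect a given additive treatment mortality for different splits of the same total number of fish between containers and fish per container, assuming among-container variability on the logit scale and a crude z-test on container means. All rates, variability levels and the test itself are stand-ins, not the authors' simulation code.

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate_power(n_containers, fish_per, control_mort=0.10, added_mort=0.10,
                           sd_logit=0.5, n_sim=2000):
            """Approximate power of a container-level comparison to detect added mortality."""
            def container_rates(base):
                logit = np.log(base / (1.0 - base)) + rng.normal(0.0, sd_logit, n_containers)
                return 1.0 / (1.0 + np.exp(-logit))
            detected = 0
            for _ in range(n_sim):
                ctrl = rng.binomial(fish_per, container_rates(control_mort)) / fish_per
                trt = rng.binomial(fish_per, container_rates(control_mort + added_mort)) / fish_per
                diff = trt.mean() - ctrl.mean()
                se = np.sqrt(trt.var(ddof=1) / n_containers + ctrl.var(ddof=1) / n_containers)
                if se > 0 and diff / se > 1.645:   # one-sided z-test, roughly the 5% level
                    detected += 1
            return detected / n_sim

        # Same total number of fish, different replication schemes.
        for nc, fp in [(4, 50), (10, 20), (20, 10)]:
            print(nc, "containers x", fp, "fish:", simulate_power(nc, fp))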

  18. A Course on Experimental Design for Different University Specialties: Experiences and Changes over a Decade

    ERIC Educational Resources Information Center

    Martinez Luaces, Victor; Velazquez, Blanca; Dee, Valerie

    2009-01-01

    We analyse the origin and development of an Experimental Design course which has been taught in several faculties of the Universidad de la Republica and other institutions in Uruguay, over a 10-year period. At the end of the course, students were assessed by carrying out individual work projects on real-life problems, which was innovative for…

  19. Design and Experimental Investigation of a Single-stage Turbine with a Downstream Stator

    NASA Technical Reports Server (NTRS)

    Plohr, Henry W; Holeski, Donald E; Forrette, Robert E

    1957-01-01

    The high-work-output turbine had an experimental efficiency of 0.830 at the design point and a maximum efficiency of 0.857. The downstream stator was effective in providing axial flow out of the turbine for almost the whole range of turbine operation.

  20. Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.

    ERIC Educational Resources Information Center

    Stallings, William M.

    1993-01-01

    Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)

  1. Using Superstitions & Sayings To Teach Experimental Design in Beginning and Advanced Biology Classes.

    ERIC Educational Resources Information Center

    Hoefnagels, Marielle H.; Rippel, Scott A.

    2003-01-01

    Presents a collaborative learning exercise intended to teach the unfamiliar terminology of experimental design both in biology classes and biochemistry laboratories. The exercise promotes discussion and debate, develops communication skills, and emphasizes peer review. The effectiveness of the exercise is supported by student surveys. (SOE)

  2. Combined application of mixture experimental design and artificial neural networks in the solid dispersion development.

    PubMed

    Medarević, Djordje P; Kleinebudde, Peter; Djuriš, Jelena; Djurić, Zorica; Ibrić, Svetlana

    2016-01-01

    This study demonstrates for the first time the combined application of mixture experimental design and artificial neural networks (ANNs) in the development of solid dispersions (SDs). Ternary carbamazepine-Soluplus®-poloxamer 188 SDs were prepared by the solvent casting method to improve the carbamazepine dissolution rate. The influence of the composition of the prepared SDs on the carbamazepine dissolution rate was evaluated using a d-optimal mixture experimental design and multilayer perceptron ANNs. Physicochemical characterization proved the presence of the most stable carbamazepine polymorph III within the SD matrix. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs significantly improved the carbamazepine dissolution rate compared to the pure drug. Models developed by ANNs and by the mixture experimental design described well the relationship between the proportions of SD components and the percentage of carbamazepine released after 10 (Q10) and 20 (Q20) min, with the ANN model exhibiting better predictability on the test data set. The proportions of carbamazepine and poloxamer 188 exhibited the highest influence on the carbamazepine release rate. The highest carbamazepine release rate was observed for SDs with the lowest proportions of carbamazepine and the highest proportions of poloxamer 188. ANNs and mixture experimental design can be used as powerful data modeling tools in the systematic development of SDs. Taking into account the advantages and disadvantages of both techniques, their combined application should be encouraged. PMID:26065534
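
    A toy version of the ANN part of such a workflow, assuming scikit-learn is available: a small multilayer perceptron is fitted to hypothetical ternary mixture proportions and Q10 dissolution values (all numbers invented), then used to predict an untested blend.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical mixtures (proportions of carbamazepine, Soluplus, poloxamer 188
        # summing to 1) and percentage released after 10 min; illustrative values only.
        X = np.array([[0.50, 0.40, 0.10], [0.40, 0.40, 0.20], [0.30, 0.50, 0.20],
                      [0.30, 0.40, 0.30], [0.20, 0.60, 0.20], [0.20, 0.50, 0.30],
                      [0.40, 0.50, 0.10], [0.25, 0.45, 0.30], [0.35, 0.45, 0.20]])
        q10 = np.array([22.0, 31.0, 38.0, 47.0, 44.0, 55.0, 27.0, 52.0, 36.0])

        model = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
        model.fit(X, q10)
        print(model.predict([[0.25, 0.50, 0.25]]))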

  3. Guided-Inquiry Labs Using Bean Beetles for Teaching the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    Schlueter, Mark A.; D'Costa, Allison R.

    2013-01-01

    Guided-inquiry lab activities with bean beetles ("Callosobruchus maculatus") teach students how to develop hypotheses, design experiments, identify experimental variables, collect and interpret data, and formulate conclusions. These activities provide students with real hands-on experiences and skills that reinforce their understanding of the…

  4. Characterizing Variability in Smestad and Gratzel's Nanocrystalline Solar Cells: A Collaborative Learning Experience in Experimental Design

    ERIC Educational Resources Information Center

    Lawson, John; Aggarwal, Pankaj; Leininger, Thomas; Fairchild, Kenneth

    2011-01-01

    This article describes a collaborative learning experience in experimental design that closely approximates what practicing statisticians and researchers in applied science experience during consulting. Statistics majors worked with a teaching assistant from the chemistry department to conduct a series of experiments characterizing the variation…

  5. Multiple Measures of Juvenile Drug Court Effectiveness: Results of a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Rodriguez, Nancy; Webb, Vincent J.

    2004-01-01

    Prior studies of juvenile drug courts have been constrained by small samples, inadequate comparison groups, or limited outcome measures. The authors report on a 3-year evaluation that examines the impact of juvenile drug court participation on recidivism and drug use. A quasi-experimental design is used to compare juveniles assigned to drug court…

  6. Whither Instructional Design and Teacher Training? The Need for Experimental Research

    ERIC Educational Resources Information Center

    Gropper, George L.

    2015-01-01

    This article takes a contrarian position: an "instructional design" or "teacher training" model, because of the sheer number of its interconnected parameters, is too complex to assess or to compare with other models. Models may not be the way to go just yet. This article recommends instead prior experimental research on limited…

  7. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    EPA Science Inventory

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  8. An Experimental Two-Way Video Teletraining System: Design, Development and Evaluation.

    ERIC Educational Resources Information Center

    Simpson, Henry; And Others

    1991-01-01

    Describes the design, development, and evaluation of an experimental two-way video teletraining (VTT) system by the Navy that consisted of two classrooms linked by a land line to enable two-way audio/video communication. Trends in communication and computer technology for training are described, and a cost analysis is included. (12 references)…

  9. EXPERIMENTAL PROGRAM IN ENGINEERING AND DESIGN DATA PROCESSING TECHNOLOGY. FINAL REPORT.

    ERIC Educational Resources Information Center

    KOHR, RICHARD L.; WOLFE, GEORGE P.

    An experimental program in engineering and design data processing technology was undertaken to develop a proposed curriculum outline and admission standards for other institutions in the planning of programs to train computer programmers. Of the first class of 26 students, 17 completed the program and 12 (including one who did not graduate) were…

  10. Quiet Clean Short-haul Experimental Engine (QCSEE) Over The Wing (OTW) design report

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The design, fabrication, and testing of two experimental high bypass geared turbofan engines and propulsion systems for short haul passenger aircraft are described. The propulsion technology required for future externally blown flap aircraft with engines located both under the wing and over the wing is demonstrated. Composite structures and digital engine controls are among the topics included.

  11. Bias Corrections for Standardized Effect Size Estimates Used with Single-Subject Experimental Designs

    ERIC Educational Resources Information Center

    Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2014-01-01

    A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…

  12. SELF-INSTRUCTIONAL SUPPLEMENTS FOR A TELEVISED PHYSICS COURSE, STUDY PLAN AND EXPERIMENTAL DESIGN.

    ERIC Educational Resources Information Center

    KLAUS, DAVID J.; LUMSDAINE, ARTHUR A.

    The initial phases of a study of self-instructional aids for a televised physics course were described. The approach, experimental design, procedure, and technical aspects of the study plan were included. The materials were prepared to supplement the second semester of high school physics. The material covered static and current electricity,…

  13. Bayesian experimental design for identification of model propositions and conceptual model uncertainty reduction

    NASA Astrophysics Data System (ADS)

    Pham, Hai V.; Tsai, Frank T.-C.

    2015-09-01

    The lack of hydrogeological data and knowledge often results in different propositions (or alternatives) to represent uncertain model components and creates many candidate groundwater models using the same data. Uncertainty of groundwater head prediction may become unnecessarily high. This study introduces an experimental design to identify propositions in each uncertain model component and decrease the prediction uncertainty by reducing conceptual model uncertainty. A discrimination criterion is developed based on posterior model probability that directly uses data to evaluate model importance. Bayesian model averaging (BMA) is used to predict future observation data. The experimental design aims to find the optimal number and location of future observations and the number of sampling rounds such that the desired discrimination criterion is met. Hierarchical Bayesian model averaging (HBMA) is adopted to assess if highly probable propositions can be identified and the conceptual model uncertainty can be reduced by the experimental design. The experimental design is applied to a groundwater study in the Baton Rouge area, Louisiana. We design a new groundwater head observation network based on existing USGS observation wells. The sources of uncertainty that create multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. All possible design solutions are enumerated using a multi-core supercomputer. Several design solutions are found to achieve an 80%-identifiable groundwater model in 5 years by using six or more existing USGS wells. The HBMA result shows that each highly probable proposition can be identified for each uncertain model component once the discrimination criterion is achieved. The variances of groundwater head predictions are significantly decreased by reducing posterior model probabilities of unimportant propositions.
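
    The following is a minimal sketch of the Bayesian model averaging bookkeeping behind discrimination criteria of this kind: posterior model probabilities weight each candidate model's prediction, and the BMA variance splits into within-model and between-model parts. It is not the authors' HBMA implementation; the BIC-based evidence approximation and every number below are assumptions for illustration.

    ```python
    import numpy as np

    # Per-model fit statistics (hypothetical): sum of squared errors and number of parameters.
    sse = np.array([12.4, 9.8, 15.1, 10.2])
    k = np.array([4, 6, 3, 5])
    n = 60                                   # number of head observations

    # BIC-based approximation of the model evidence, then posterior model probabilities (equal priors).
    bic = n * np.log(sse / n) + k * np.log(n)
    w = np.exp(-0.5 * (bic - bic.min()))
    post = w / w.sum()

    # BMA prediction at one location: weighted mean plus within- and between-model variance.
    mu = np.array([31.2, 30.5, 33.0, 30.9])  # per-model head predictions (hypothetical)
    var = np.array([0.8, 0.6, 1.1, 0.7])     # per-model prediction variances (hypothetical)
    bma_mean = np.sum(post * mu)
    bma_var = np.sum(post * (var + (mu - bma_mean) ** 2))
    print(post.round(3), round(float(bma_mean), 2), round(float(bma_var), 2))
    ```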

  14. Design and structural verification of locomotive bogies using combined analytical and experimental methods

    NASA Astrophysics Data System (ADS)

    Manea, I.; Popa, G.; Girnita, I.; Prenta, G.

    2015-11-01

    The paper presents a practical methodology for the design and structural verification of locomotive bogie frames using a modern software package for design, structural verification and validation through combined analytical and experimental methods. In the initial stage, the bogie geometry is imported from a CAD program into a finite element analysis program such as Ansys. The analytical model is validated by experimental modal analysis carried out on a finished bogie frame. The natural frequencies and mode shapes of the bogie frame are determined by both experimental and analytical methods, and a correlation analysis of the two types of models is performed. If the results are unsatisfactory, structural optimization is performed. If the results are satisfactory, qualification proceeds with static and fatigue tests carried out in a laboratory with international accreditation in the field. The paper concludes with an application to the bogie frames of the 6000 kW LEMA electric locomotive.

  15. Intermediate experimental vehicle, ESA program aerodynamics-aerothermodynamics key technologies for spacecraft design and successful flight

    NASA Astrophysics Data System (ADS)

    Dutheil, Sylvain; Pibarot, Julien; Tran, Dac; Vallee, Jean-Jacques; Tribot, Jean-Pierre

    2016-07-01

    With the aim of placing Europe among the world's space players in the strategic area of atmospheric re-entry, several studies on experimental vehicle concepts and improvements of critical re-entry technologies have paved the way for the flight of an experimental spacecraft. The successful flight of the Intermediate eXperimental Vehicle (IXV), under ESA's Future Launchers Preparatory Programme (FLPP), is definitively a significant step forward from the Atmospheric Reentry Demonstrator flight (1998), establishing Europe as a key player in this field. The IXV project objectives were the design, development, manufacture, and ground and flight verification of an autonomous European lifting and aerodynamically controlled re-entry system, which is highly flexible and maneuverable. The paper presents the role of aerodynamics and aerothermodynamics among the key technologies for designing an atmospheric re-entry spacecraft and securing a successful flight.

  16. Experimental system design for the integration of trapped-ion and superconducting qubit systems

    NASA Astrophysics Data System (ADS)

    De Motte, D.; Grounds, A. R.; Rehák, M.; Rodriguez Blanco, A.; Lekitsch, B.; Giri, G. S.; Neilinger, P.; Oelsner, G.; Il'ichev, E.; Grajcar, M.; Hensinger, W. K.

    2016-07-01

    We present a design for the experimental integration of ion trapping and superconducting qubit systems as a step towards the realization of a quantum hybrid system. The scheme addresses two key difficulties in realizing such a system: a combined microfabricated ion trap and superconducting qubit architecture, and the experimental infrastructure to facilitate both technologies. Developing upon work by Kielpinski et al. (Phys Rev Lett 108(13):130504, 2012. doi: 10.1103/PhysRevLett.108.130504), we describe the design, simulation and fabrication process for a microfabricated ion trap capable of coupling an ion to a superconducting microwave LC circuit with a coupling strength in the tens of kHz. We also describe existing difficulties in integrating the experimental infrastructure of an ion-trapping set-up into a dilution refrigerator housing superconducting qubits and present solutions that can be immediately implemented using current technology.

  17. A multi-purpose SAIL demonstrator design and its principle experimental verification

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Yan, Aimin; Xu, Nan; Wang, Lijuan; Luan, Zhu; Sun, Jianfeng; Liu, Liren

    2009-08-01

    A fully 2-D synthetic aperture imaging ladar (SAIL) demonstrator has been designed and is being fabricated to experimentally investigate and theoretically analyze the beam diffraction properties, antenna function, imaging resolution and signal processing algorithm of SAIL. The design details of the multi-purpose SAIL demonstrator are given and, as the first phase, a laboratory-scale SAIL system based on bulk optical elements has been built to verify the principle of the design; it is similar in construction to the demonstrator but without the major antenna telescope. The system has an aperture diameter of about 1 mm and a target distance of 3.2 m.

  18. Design considerations and experimental results of a 100 W, 500 000 rpm electrical generator

    NASA Astrophysics Data System (ADS)

    Zwyssig, C.; Kolar, J. W.

    2006-09-01

    Mesoscale gas turbine generator systems are a promising solution for high energy and power density portable devices. This paper focuses on the design of a 100 W, 500 000 rpm generator suitable for use with a gas turbine. The design procedure selects the suitable machine type and bearing technology, and determines the electromagnetic characteristics. The losses caused by the high frequency operation are minimized by optimizing the winding and the stator core material. The final design is a permanent-magnet machine with a volume of 3 cm^3 and experimental measurements from a test bench are presented.

  19. Conceptual design of a fast-ion D-alpha diagnostic on experimental advanced superconducting tokamak

    SciTech Connect

    Huang, J.; Wan, B.; Hu, L.; Hu, C.; Heidbrink, W. W.; Zhu, Y.; Hellermann, M. G. von; Gao, W.; Wu, C.; Li, Y.; Fu, J.; Lyu, B.; Yu, Y.; Ye, M.; Shi, Y.

    2014-11-15

    To investigate fast-ion behavior, a fast-ion D-alpha (FIDA) diagnostic system has been planned and is presently under development on the Experimental Advanced Superconducting Tokamak. The greatest challenges in designing a FIDA diagnostic are the extremely low signal intensity levels, which are usually significantly below the continuum radiation level and several orders of magnitude below the bulk-ion thermal charge-exchange feature. Moreover, an overlaying Motional Stark Effect (MSE) feature in exactly the same wavelength range can interfere. A simulation-of-spectra code is used here to guide the design and to evaluate the diagnostic performance. Details of the design parameters and hardware are presented.

  20. A three-phase series-parallel resonant converter -- analysis, design, simulation and experimental results

    SciTech Connect

    Bhat, A.K.S.; Zheng, L.

    1995-12-31

    A three-phase dc-to-dc series-parallel resonant converter is proposed and its operating modes for a 180° wide gating pulse scheme are explained. A detailed analysis of the converter using a constant-current model and a Fourier series approach is presented. Based on the analysis, design curves are obtained and a design example of a 1 kW converter is given. SPICE simulation results for the designed converter and experimental results for a 500 W converter are presented to verify the performance of the proposed converter for varying load conditions. The converter operates in lagging PF mode for the entire load range and requires a narrow variation in switching frequency.

  1. Development of A595 Explosion-Resistant Container Design. Numerical, Theoretical and Experimental Justification of the Container Design Parameters

    SciTech Connect

    Abakumov, A. I.; Devyatkin, I. V.; Meltsas, V. Yu.; Mikhailov, A. L.; Portnyagina, G. F.; Rusak, V. N.; Solovyev, V. P.; Syrunin, M. A.; Treshalin, S. M.; Fedorenko, A. G.

    2006-08-03

    The paper presents the results of a numerical and experimental study of the AT595 metal-composite container designed at VNIIEF within the framework of international collaboration with SNL (USA). This container must completely contain the products of an 8-kg TNT detonation cased in 35 kg of inert surrounding material. Numerical and theoretical studies have been carried out of the containment capacity and fracture of small-scale open-cylinder test units and container pressure-vessel models subjected to different levels of specific explosive load (below, equal to and above the required design load defined for this container), and two AT595 containers have been tested at the design load and at a higher load.

  2. Designing specific protein–protein interactions using computation, experimental library screening, or integrated methods

    PubMed Central

    Chen, T Scott; Keating, Amy E

    2012-01-01

    Given the importance of protein–protein interactions for nearly all biological processes, the design of protein affinity reagents for use in research, diagnosis or therapy is an important endeavor. Engineered proteins would ideally have high specificities for their intended targets, but achieving interaction specificity by design can be challenging. There are two major approaches to protein design or redesign. Most commonly, proteins and peptides are engineered using experimental library screening and/or in vitro evolution. An alternative approach involves using protein structure and computational modeling to rationally choose sequences predicted to have desirable properties. Computational design has successfully produced novel proteins with enhanced stability, desired interactions and enzymatic function. Here we review the strengths and limitations of experimental library screening and computational structure-based design, giving examples where these methods have been applied to designing protein interaction specificity. We highlight recent studies that demonstrate strategies for combining computational modeling with library screening. The computational methods provide focused libraries predicted to be enriched in sequences with the properties of interest. Such integrated approaches represent a promising way to increase the efficiency of protein design and to engineer complex functionality such as interaction specificity. PMID:22593041

  3. Man-machine Integration Design and Analysis System (MIDAS) Task Loading Model (TLM) experimental and software detailed design report

    NASA Technical Reports Server (NTRS)

    Staveland, Lowell

    1994-01-01

    This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands designs impose on operators to aid engineers in the conceptual design of aircraft crewstations. This report describes the TLM and the results of a series of experiments run during this phase to test its capabilities as a predictive task demand modeling tool. Specifically, it includes discussions of: the inputs and outputs of the TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and part of a complete human operator simulation, and a brief introduction to the TLM software design.

  4. Comment: Spurious Correlation and Other Observations on Experimental Design for Engineering Dimensional Analysis

    SciTech Connect

    Piepel, Gregory F.

    2013-08-01

    This article discusses the paper "Experimental Design for Engineering Dimensional Analysis" by Albrecht et al. (2013, Technometrics). That paper provides an overview of engineering dimensional analysis (DA) for use in developing DA models and proposes methods for generating model-robust experimental designs to support fitting DA models. The specific approach is to develop a design that maximizes the efficiency of a specified empirical model (EM) in the original independent variables, subject to a minimum efficiency for a DA model expressed in terms of dimensionless groups (DGs). This discussion article raises several issues and makes recommendations regarding the proposed approach. The concept of spurious correlation is also raised and discussed; spurious correlation results from the response DG being calculated using several independent variables that are also used to calculate predictor DGs in the DA model.

  5. Design and Experimental Results for the S827 Airfoil; Period of Performance: 1998--1999

    SciTech Connect

    Somers, D. M.

    2005-01-01

    A 21%-thick, natural-laminar-flow airfoil, the S827, for the 75% blade radial station of 40- to 50-meter, stall-regulated, horizontal-axis wind turbines has been designed and analyzed theoretically and verified experimentally in the NASA Langley Low-Turbulence Pressure Tunnel. The primary objective of restrained maximum lift has not been achieved, although the maximum lift is relatively insensitive to roughness, which meets the design goal. The airfoil exhibits a relatively docile stall, which meets the design goal. The primary objective of low profile drag has been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results generally show good agreement with the exception of maximum lift, which is significantly underpredicted.

  6. Fertilizer Response Curves for Commercial Southern Forest Species Defined with an Un-Replicated Experimental Design.

    SciTech Connect

    Coleman, Mark; Aubrey, Doug; Coyle, David R.; Daniels, Richard F.

    2005-11-01

    There has been recent interest in the use of non-replicated regression experimental designs in forestry, as the need for replication in experimental design is burdensome on limited research budgets. We wanted to determine the interacting effects of soil moisture and nutrient availability on the production of various southeastern forest trees (two clones of Populus deltoides, open-pollinated Platanus occidentalis, Liquidambar styraciflua and Pinus taeda). Additionally, we required an understanding of the fertilizer response curve. To accomplish both objectives we developed a composite design that includes a core ANOVA approach to consider treatment interactions, with the addition of non-replicated regression plots receiving a range of fertilizer levels for the primary irrigation treatment.

  7. Design and experimental validation of a flutter suppression controller for the active flexible wing

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and extensive simulation based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite modeling errors in predicted flutter dynamic pressure and flutter frequency. The flutter suppression controller was also successfully operated in combination with another controller to perform flutter suppression during rapid rolling maneuvers.

  8. Flutter suppression for the Active Flexible Wing - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of a control law for an active flutter suppression system for the Active Flexible Wing wind-tunnel model is presented. The design was accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach relied on a fundamental understanding of the flutter mechanism to formulate a simple control law structure. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in the design model. The flutter suppression controller was also successfully operated in combination with a rolling maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  9. Thermoelastic Femoral Stress Imaging for Experimental Evaluation of Hip Prosthesis Design

    NASA Astrophysics Data System (ADS)

    Hyodo, Koji; Inomoto, Masayoshi; Ma, Wenxiao; Miyakawa, Syunpei; Tateishi, Tetsuya

    An experimental system using the thermoelastic stress analysis method and a synthetic femur was utilized to perform reliable and convenient mechanical biocompatibility evaluation of hip prosthesis designs. Unlike conventional techniques, the thermoelastic stress analysis method can image the whole-surface stress (Δ(σ1+σ2)) distribution of a specimen. The mechanical properties of synthetic femurs agreed well with those of cadaveric femurs, with little variability between specimens. We applied this experimental system to visualize the stress distribution of the intact femur and of femurs implanted with an artificial joint. The surface stress distribution of the femurs sensitively reflected the prosthesis design and the contact condition between the stem and the bone. By relating the stress distribution to the clinical results of the artificial joint, this technique can be used for mechanical biocompatibility evaluation and pre-clinical performance prediction of new artificial joint designs.

  10. Multi-objective optimization in WEDM of D3 tool steel using integrated approach of Taguchi method & Grey relational analysis

    NASA Astrophysics Data System (ADS)

    Shivade, Anand S.; Shinde, Vasudev D.

    2014-09-01

    In this paper, wire electrical discharge machining of D3 tool steel is studied. The influence of pulse-on time, pulse-off time, peak current and wire speed on MRR, dimensional deviation, gap current and machining time is investigated during intricate machining of D3 tool steel. The Taguchi method is used for single-characteristic optimization, and to optimize all four process parameters simultaneously, grey relational analysis (GRA) is employed along with the Taguchi method. Through GRA, the grey relational grade is used as a performance index to determine the optimal setting of process parameters for the multi-objective characteristic. Analysis of variance (ANOVA) shows that peak current is the most significant parameter affecting the multi-objective characteristic. Confirmation experiments prove the potential of GRA to optimize process parameters successfully for multi-objective characteristics.
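
    For readers unfamiliar with the grey relational grade used here as the multi-objective performance index, the sketch below normalizes several responses, computes grey relational coefficients with the usual distinguishing coefficient of 0.5, and ranks the runs by their grade. The response table is hypothetical and does not reproduce the paper's WEDM data.

    ```python
    import numpy as np

    # Rows = L9 runs; columns = MRR, dimensional deviation, gap current, machining time (all hypothetical).
    Y = np.array([
        [4.2, 0.031, 2.1, 14.0],
        [5.1, 0.028, 2.4, 12.5],
        [6.0, 0.035, 2.9, 11.8],
        [4.8, 0.024, 2.2, 13.2],
        [5.7, 0.030, 2.6, 12.0],
        [6.4, 0.038, 3.1, 11.1],
        [5.2, 0.022, 2.3, 12.8],
        [6.1, 0.027, 2.8, 11.5],
        [6.9, 0.033, 3.3, 10.9],
    ])
    larger_better = [True, False, False, False]  # maximise MRR, minimise the other responses

    # Step 1: normalise each response to [0, 1].
    Z = np.empty_like(Y)
    for j in range(Y.shape[1]):
        lo, hi = Y[:, j].min(), Y[:, j].max()
        Z[:, j] = (Y[:, j] - lo) / (hi - lo) if larger_better[j] else (hi - Y[:, j]) / (hi - lo)

    # Step 2: grey relational coefficients (distinguishing coefficient zeta = 0.5).
    delta = 1.0 - Z
    zeta = 0.5
    grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    # Step 3: grey relational grade = mean coefficient per run; the highest grade is best.
    grade = grc.mean(axis=1)
    print("best run:", int(grade.argmax()) + 1, grade.round(3))
    ```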

  11. Highly Efficient Design-of-Experiments Methods for Combining CFD Analysis and Experimental Data

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Haller, Harold S.

    2009-01-01

    The purpose of this study is to examine the impact of "highly efficient" Design-of-Experiments (DOE) methods for combining sets of CFD-generated analysis data with smaller sets of experimental test data in order to accurately predict performance results where experimental test data were not obtained. The study examines the impact of micro-ramp flow control on the shock wave boundary layer (SWBL) interaction, where a complete paired set of data exists from both CFD analysis and experimental measurements. By combining the complete set of CFD analysis data, composed of fifteen (15) cases, with a smaller subset of experimental test data containing four or five (4/5) cases, compound data sets (CFD/EXP) were generated which allow the prediction of the complete set of experimental results. No statistical differences were found between the combined (CFD/EXP) data sets and the complete experimental data set composed of fifteen (15) cases. The same optimal micro-ramp configuration was obtained using the (CFD/EXP) data as with the complete set of experimental data, and the DOE response surfaces generated by the two data sets were also not statistically different.
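
    One generic way to fuse a dense analysis data set with a sparse experimental subset, in the spirit of the compound (CFD/EXP) data sets described above, is to fit a response surface to the CFD cases and a low-order correction to the CFD-experiment discrepancy at the few paired cases. The sketch below shows that idea on a one-variable toy problem; it is an assumption-laden illustration, not the DOE procedure used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 15)  # 15 CFD cases over one micro-ramp design variable (hypothetical)
    y_cfd = 1.0 + 0.8 * x - 1.5 * x**2 + rng.normal(0, 0.01, x.size)

    # Quadratic response surface fitted to the CFD data.
    coef_cfd = np.polyfit(x, y_cfd, deg=2)

    # Four experimental cases; here the experiment differs from CFD by a smooth bias.
    x_exp = np.array([0.1, 0.4, 0.7, 0.9])
    y_exp = np.polyval(coef_cfd, x_exp) + (0.05 + 0.1 * x_exp) + rng.normal(0, 0.01, x_exp.size)

    # Fit a linear correction to the discrepancy and build the combined (CFD/EXP) predictor.
    coef_corr = np.polyfit(x_exp, y_exp - np.polyval(coef_cfd, x_exp), deg=1)
    def predict(xq):
        return np.polyval(coef_cfd, xq) + np.polyval(coef_corr, xq)

    print(predict(np.array([0.25, 0.55, 0.85])).round(3))
    ```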

  12. Design optimization and experimental testing of the High-Flux Test Module of IFMIF

    NASA Astrophysics Data System (ADS)

    Leichtle, D.; Arbeiter, F.; Dolensky, B.; Fischer, U.; Gordeev, S.; Heinzel, V.; Ihli, T.; Moeslang, A.; Simakov, S. P.; Slobodchuk, V.; Stratmanns, E.

    2009-04-01

    The design of the High-Flux Test Module of the International Fusion Material Irradiation Facility has been developed continuously in the past few years. The present paper highlights recent design achievements, including a thorough state-of-the-art validation assessment of CFD tools and models. Along with design-related analyses, exercises on manufacturing procedures have been performed. Recommendations for the use of container, rig, and capsule materials as well as recent progress in brazing of electrical heaters are discussed. A test matrix starting from High-Flux Test Module compartments, i.e. segments of the full module, with heated dummy rigs up to the full-scale module with instrumented irradiation rigs has been developed, and the appropriate helium gas loop has been designed conceptually. A roadmap of the envisaged experimental activities is presented in accordance with the test loop facility construction and mock-up design and fabrication schedules.

  13. Study and design of cryogenic propellant acquisition systems. Volume 2: Supporting experimental program

    NASA Technical Reports Server (NTRS)

    Burge, G. W.; Blackmon, J. B.

    1973-01-01

    Areas of cryogenic fuel systems were identified where critical experimental information was needed either to define a design criteria or to establish the feasibility of a design concept or a critical aspect of a particular design. Such data requirements fell into three broad categories: (1) basic surface tension screen characteristics; (2) screen acquisition device fabrication problems; and (3) screen surface tension device operational failure modes. To explore these problems and to establish design criteria where possible, extensive laboratory or bench test scale experiments were conducted. In general, these proved to be quite successful and, in many instances, the test results were directly used in the system design analyses and development. In some cases, particularly those relating to operational-type problems, areas requiring future research were identified, especially screen heat transfer and vibrational effects.

  14. Experimental investigation of two transonic linear turbine cascades at off-design conditions

    NASA Astrophysics Data System (ADS)

    Jouini, Dhafer Ben Mahmoud

    Detailed measurements have been made of the mid-span aerodynamic performance of two transonic turbine cascades at off-design conditions. The cascades investigated were a baseline cascade, designated HS1A, and a cascade with a modified leading edge design, designated HS1B. The measurements were for exit Mach numbers ranging from about 0.5 to about 1.2 and for Reynolds numbers from 4 × 10^5 to 10^6. The turbulence intensity in the test section and upstream of the cascade test section was about 4%. The profile losses were measured for the incidence values of -10°, 0.0°, +4.5°, +10.0°, and +14.5° relative to design. To aid in understanding the loss behaviour and to provide other insights into the flow physics, measurements of the blade loading, exit flow angles, trailing-edge base pressures, and the Axial Velocity Density Ratio (AVDR) were also made. The results showed that the profile losses at transonic Mach numbers can be closely related to the behaviour of the base pressure. The losses were also found to be affected by the AVDR. The AVDRs were found to decrease with increasing positive incidence. Moreover, the results from both cascades showed that the modifications to the leading edge geometry of the HS1B cascade were not successful in improving the blade performance at positive off-design incidence. Comparisons between the present experimental data and the available correlations in the open literature were also made. These comparisons included mid-span losses at design and off-design, and exit flow angles. It was found that further improvements can still be made to the existing correlations. Furthermore, the present experimental data represents a significant contribution to the data base of results available in the open literature for the development of new and improved correlations, particularly at transonic flow conditions, at both design and off-design conditions.

  15. The optimum Ga-67-citrate gamma camera imaging quality factors as first calculated and shown by the Taguchi's analysis.

    PubMed

    Yeh, Da Ming; Chang, Pai Jung; Pan, Lung Kwang

    2013-01-01

    In this work gallium-67 ((67)Ga) gamma camera imaging quality was optimized using Taguchi analysis and a planar phantom. The acrylic planar phantom was laser-cut to form groups of slits 1 mm wide and 5 mm deep, to determine the spatial resolution and contrast ratio that could be achieved in a (67)Ga-citrate nuclear medicine examination. The (67)Ga-citrate solution was injected into the slits to form an active radioactive line source, which was placed between regular acrylic plates for optimization. Nine combinations of four operating factors of the gamma camera imaging system, arranged in an L9(3^4) orthogonal array, were then evaluated following the Taguchi analysis. The four operating factors were: a) the type of collimator in front of the NaI(Tl) detector, b) the region of interest of the (67)Ga gamma-ray spectrum, c) the scanning speed of the NaI(Tl) detector head and d) the activity of (67)Ga. The judged grade of the planar phantom image quality was increased by 36%, and factors a) and b) were confirmed to dominate. The cross-interaction among factors is also discussed. The optimal factor settings of the gamma camera imaging system were verified by performing a routine nuclear medicine examination in ten cases; nine cases showed the same optimal settings as estimated by three highly trained radio-diagnostic physicians. Additionally, the optimal settings yielded clearer images with greater contrast than the conventional settings. In conclusion, this work suggests for practical use an optimized process for determining both the spatial resolution and the contrast ratio of a gamma camera imaging system using Taguchi analysis and a planar phantom. The Taguchi method is most effective in targeting a single quality characteristic but can also be extended to satisfy multiple requirements under specific conditions by revising the definition of the signal-to-noise ratio. PMID:23529390
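
    The sketch below shows the standard L9(3^4) orthogonal array and a larger-the-better signal-to-noise calculation of the kind a four-factor, three-level Taguchi study relies on. The image-quality grades and factor labels are hypothetical placeholders, not the study's data.

    ```python
    import numpy as np

    # Standard L9 orthogonal array: 9 runs x 4 factors, levels coded 0/1/2.
    L9 = np.array([
        [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
        [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
        [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
    ])
    grade = np.array([62, 70, 58, 74, 81, 66, 69, 77, 72], dtype=float)  # hypothetical image-quality grades

    # Larger-the-better signal-to-noise ratio per run (a single replicate per run here).
    sn = -10.0 * np.log10(1.0 / grade**2)

    # Mean S/N per factor level; the level with the highest mean S/N is preferred.
    for f in range(4):
        means = [round(float(sn[L9[:, f] == lvl].mean()), 2) for lvl in range(3)]
        print(f"factor {f}: level means {means}")
    ```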

  16. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Humans are exposed to mixtures of environmental compounds. A regulatory assumption is that the mixtures of chemicals act in an additive manner. However, this assumption requires experimental validation. Traditional experimental designs (full factorial) require a large number of e...
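
    Although the abstract above is truncated, the D-optimality idea it refers to can be illustrated simply: choose the design points that maximize det(X'X) for the model to be fitted. The sketch below does this by exhaustive search for a quadratic dose-response model along a fixed-ratio mixture ray; the candidate dose levels and the design size are assumptions made for illustration, not the EPA methodology.

    ```python
    import numpy as np
    from itertools import combinations

    dose = np.linspace(0.0, 1.0, 9)  # candidate total-dose levels along the fixed-ratio ray
    X_cand = np.column_stack([np.ones_like(dose), dose, dose**2])  # quadratic dose-response model

    # Exhaustive search is affordable here: choose 5 of 9 candidates maximizing det(X'X).
    best_det, best_idx = -np.inf, None
    for idx in combinations(range(len(dose)), 5):
        X = X_cand[list(idx)]
        d = np.linalg.det(X.T @ X)
        if d > best_det:
            best_det, best_idx = d, idx

    print("D-optimal 5-point design:", dose[list(best_idx)], "det =", round(best_det, 4))
    ```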

  17. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  18. A design of experiment study of plasma sprayed alumina-titania coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Riggs, W.L. II; Rotolico, A.J.; Nerz, J.E.

    1992-01-01

    An experimental study of the plasma spraying of alumina-titania powder is presented in this paper. This powder system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic testing. Coating experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical spray parameters in a systematic design of experiments in order to display the range of plasma processing conditions and their effect on the resultant coating. The coatings were characterized by hardness and electrical tests, image analysis, and optical metallography. Coating qualities are discussed with respect to dielectric strength, hardness, porosity, surface roughness, deposition efficiency, and microstructure. The attributes of the coatings are correlated with the changes in operating parameters.

  19. A design of experiment study of plasma sprayed alumina-titania coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Riggs, W.L. II; Rotolico, A.J.; Nerz, J.E.

    1992-08-01

    An experimental study of the plasma spraying of alumina-titania powder is presented in this paper. This powder system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic testing. Coating experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical spray parameters in a systematic design of experiments in order to display the range of plasma processing conditions and their effect on the resultant coating. The coatings were characterized by hardness and electrical tests, image analysis, and optical metallography. Coating qualities are discussed with respect to dielectric strength, hardness, porosity, surface roughness, deposition efficiency, and microstructure. The attributes of the coatings are correlated with the changes in operating parameters.

  20. Analytical and experimental investigation of liquid double drop dynamics: Preliminary design for space shuttle experiments

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The preliminary grant assessed the use of laboratory experiments for simulating low g liquid drop experiments in the space shuttle environment. Investigations were begun of appropriate immiscible liquid systems, design of experimental apparatus and analyses. The current grant continued these topics, completed construction and preliminary testing of the experimental apparatus, and performed experiments on single and compound liquid drops. A continuing assessment of laboratory capabilities, and the interests of project personnel and available collaborators, led to, after consultations with NASA personnel, a research emphasis specializing on compound drops consisting of hollow plastic or elastic spheroids filled with liquids.

  1. Design and Experimental Results for a Natural-Laminar-Flow Airfoil for General Aviation Applications

    NASA Technical Reports Server (NTRS)

    Somers, D. M.

    1981-01-01

    A natural-laminar-flow airfoil for general aviation applications, the NLF(1)-0416, was designed and analyzed theoretically and verified experimentally in the Langley Low-Turbulence Pressure Tunnel. The basic objective of combining the high maximum lift of the NASA low-speed airfoils with the low cruise drag of the NACA 6-series airfoils was achieved. The safety requirement that the maximum lift coefficient not be significantly affected with transition fixed near the leading edge was also met. Comparisons of the theoretical and experimental results show excellent agreement. Comparisons with other airfoils, both laminar flow and turbulent flow, confirm the achievement of the basic objective.

  2. Intuitive web-based experimental design for high-throughput biomedical data.

    PubMed

    Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven

    2015-01-01

    Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments, intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, human-readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model. PMID:25954760

  3. 100% MOX BWR experimental program design using multi-parameter representative

    SciTech Connect

    Blaise, P.; Fougeras, P.; Cathalau, S.

    2012-07-01

    A new multi-parameter representative approach for the design of advanced full-MOX BWR core physics experimental programs is developed. The approach is based on sensitivity analysis of integral parameters to nuclear data, and on correlations among different integral parameters. The representativeness method is used here to extract a quantitative relationship between a particular integral response of an experimental mock-up and the same response in a reference project to be designed. The study is applied to the design of the 100% MOX BASALA ABWR experimental program in the EOLE facility. The adopted scheme proposes an original approach to the problem, going from the initial 'microscopic' pin-cell integral parameters to the 'macroscopic' whole-assembly integral parameters. This approach enables the collection of complementary information necessary to optimize the initial design and to meet target accuracies on the integral parameters to be measured. The study demonstrated the necessity of fabricating new fuel pins, while fulfilling minimal cost requirements, to meet acceptable representativeness of the local power distribution. (authors)
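
    A hedged sketch of the representativity factor commonly used in this kind of mock-up design study (the notation and every number below are illustrative assumptions, not values from the BASALA program): the factor correlates the nuclear-data sensitivities of the mock-up response and of the reference-design response through the nuclear-data covariance matrix, and 1 - r^2 bounds the reduction of the prior design variance attainable from an ideally measured experiment.

    ```python
    import numpy as np

    V = 1e-4 * np.array([[4.0, 1.0, 0.0],
                         [1.0, 9.0, 2.0],
                         [0.0, 2.0, 1.0]])   # relative covariance of three nuclear-data groups (hypothetical)
    S_E = np.array([0.8, -0.3, 0.1])         # sensitivities of the mock-up integral response (hypothetical)
    S_R = np.array([0.7, -0.4, 0.2])         # sensitivities of the reference-design response (hypothetical)

    # Representativity factor r = S_E' V S_R / sqrt((S_E' V S_E)(S_R' V S_R)).
    r = (S_E @ V @ S_R) / np.sqrt((S_E @ V @ S_E) * (S_R @ V @ S_R))

    # Prior design variance and its reduction by a perfectly measured mock-up experiment.
    prior_var = S_R @ V @ S_R
    posterior_var = prior_var * (1.0 - r**2)
    print(round(float(r), 3), round(float(np.sqrt(prior_var)), 5), round(float(np.sqrt(posterior_var)), 5))
    ```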

  4. Intuitive Web-Based Experimental Design for High-Throughput Biomedical Data

    PubMed Central

    Friedrich, Andreas; Kenar, Erhan; Nahnsen, Sven

    2015-01-01

    Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments, intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, human-readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model. PMID:25954760

  5. Inhalation experiments with mixtures of hydrocarbons. Experimental design, statistics and interpretation of kinetics and possible interactions.

    PubMed

    Eide, I; Zahlsen, K

    1996-01-01

    The paper describes experimental and statistical methods for the toxicokinetic evaluation of mixtures in inhalation experiments. Synthetic mixtures of three C9 n-paraffinic, naphthenic and aromatic hydrocarbons (n-nonane, trimethylcyclohexane and trimethylbenzene, respectively) were studied in the rat after inhalation for 12 h. The hydrocarbons were mixed according to principles of statistical experimental design, using a mixture design at four vapour levels (75, 150, 300 and 450 ppm) to support an empirical model with linear, interaction and quadratic terms (Taylor polynomial). Immediately after exposure, concentrations of hydrocarbons were measured by head-space gas chromatography in blood, brain, liver, kidneys and perirenal fat. Multivariate data analysis and modelling were performed with PLS (projections to latent structures). The best models were obtained after removing all interaction terms, suggesting that there were no interactions between the hydrocarbons with respect to absorption and distribution. Uptake of paraffins and particularly aromatics is best described by quadratic models, whereas the uptake of the naphthenic hydrocarbons is nearly linear. All models are good, with high correlation (r^2) and prediction properties (Q^2), the latter after cross-validation. The concentrations of aromatics in blood were high compared to the other hydrocarbons. At concentrations below 250 ppm, the naphthene reached higher concentrations in the brain than the paraffin and the aromatic. Statistical experimental design, multivariate data analysis and modelling have proved useful for the evaluation of synthetic mixtures. The principles may also be used in the design of liquid mixtures, which may be evaporated partially or completely. PMID:8740533
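
    The modelling chain this record describes, mixture-design exposure factors regressed onto tissue concentrations with PLS and judged by cross-validated prediction, can be sketched as follows. The exposure matrix, the response and the number of PLS components are invented for illustration and do not reproduce the study's data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    # Fractions of paraffin, naphthene and aromatic in each exposure (rows sum to 1) times total vapour level.
    frac = rng.dirichlet(np.ones(3), size=16)
    level = rng.choice([75, 150, 300, 450], size=16)[:, None]
    X = np.hstack([frac * level, (frac * level) ** 2])  # linear + quadratic terms, no interactions

    # Blood concentration of one hydrocarbon (hypothetical response with a quadratic aromatic term).
    y = 0.02 * X[:, 2] + 5e-5 * X[:, 5] + rng.normal(0, 0.5, 16)

    pls = PLSRegression(n_components=2)
    y_cv = cross_val_predict(pls, X, y, cv=4).ravel()   # cross-validated predictions
    press = np.sum((y - y_cv) ** 2)
    q2 = 1 - press / np.sum((y - y.mean()) ** 2)        # Q2: cross-validated predictive ability
    print("Q2 =", round(float(q2), 3))
    ```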

  6. Optimal experimental designs for the estimation of thermal properties of composite materials

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.; Moncman, Deborah A.

    1994-01-01

    Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
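
    The criterion described above, maximizing the temperature sensitivities to the unknown properties so that the confidence intervals of the estimates shrink, is commonly cast as maximizing a determinant built from the sensitivity matrix. The sketch below applies that idea to a semi-infinite solid under constant surface heat flux and scans candidate sensor depths; the material model, property values and candidate settings are assumptions made for illustration, not the composite configurations optimized in the study.

    ```python
    import numpy as np
    from scipy.special import erfc

    q = 1000.0  # applied surface heat flux, W/m^2 (hypothetical)

    def temperature(x, t, k, C):
        """Temperature rise of a semi-infinite solid under constant surface flux."""
        a = k / C  # thermal diffusivity
        return (2*q/k)*np.sqrt(a*t/np.pi)*np.exp(-x**2/(4*a*t)) - (q*x/k)*erfc(x/(2*np.sqrt(a*t)))

    def d_criterion(x, k=0.6, C=1.5e6, times=np.linspace(5.0, 600.0, 60)):
        """det(S'S) for scaled sensitivities of T(x, t) to k and C (central differences)."""
        dk, dC = 1e-4 * k, 1e-4 * C
        Sk = k * (temperature(x, times, k + dk, C) - temperature(x, times, k - dk, C)) / (2 * dk)
        SC = C * (temperature(x, times, k, C + dC) - temperature(x, times, k, C - dC)) / (2 * dC)
        S = np.column_stack([Sk, SC])
        return np.linalg.det(S.T @ S)

    depths = np.array([0.001, 0.005, 0.01, 0.02, 0.03])  # candidate sensor depths, m
    scores = [d_criterion(x) for x in depths]
    print("best sensor depth:", depths[int(np.argmax(scores))], "m")
    ```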

  7. Design, Evaluation and Experimental Effort Toward Development of a High Strain Composite Wing for Navy Aircraft

    NASA Technical Reports Server (NTRS)

    Bruno, Joseph; Libeskind, Mark

    1990-01-01

    This design development effort addressed significant technical issues concerning the use and benefits of high strain composite wing structures (ε_ult = 6000 micro-in/in) for future Navy aircraft. These issues were concerned primarily with the structural integrity and durability of the innovative design concepts and manufacturing techniques which permitted a 50 percent increase in design ultimate strain level (while maintaining the same fiber/resin system) as well as damage tolerance and survivability requirements. An extensive test effort consisting of a progressive series of coupon and major element tests was an integral part of this development effort, and culminated in the design, fabrication and test of a major full-scale wing box component. The successful completion of the tests demonstrated the structural integrity, durability and benefits of the design. Low energy impact testing followed by fatigue cycling verified the damage tolerance concepts incorporated within the structure. Finally, live fire ballistic testing confirmed the survivability of the design. The potential benefits of combining newer/emerging composite materials and new or previously developed high strain wing design to maximize structural efficiency and reduce fabrication costs was the subject of subsequent preliminary design and experimental evaluation effort.

  8. Experimental investigation of undesired stable equilibria in pumpkin shape super-pressure balloon designs

    NASA Astrophysics Data System (ADS)

    Schur, W. W.

    2004-01-01

    Excess in skin material of a pneumatic envelope beyond what is required for minimum enclosure of a gas bubble is a necessary but by no means sufficient condition for the existence of multiple equilibrium configurations for that pneumatic envelope. The very design of structurally efficient super-pressure balloons of the pumpkin shape type requires such excess. Undesired stable equilibria in pumpkin shape balloons have been observed on experimental pumpkin shape balloons. These configurations contain regions with stress levels far higher than those predicted for the cyclically symmetric design configuration under maximum pressurization. Successful designs of pumpkin shape super-pressure balloons do not allow such undesired stable equilibria under full pressurization. This work documents efforts made so far and describes efforts still underway by the National Aeronautics and Space Administration's Balloon Program Office to arrive at guidance on the design of pumpkin shape super-pressure balloons that guarantee full and proper deployment.

  9. Engineering at SLAC: Designing and constructing experimental devices for the Stanford Synchrotron Radiation Lightsource - Final Paper

    SciTech Connect

    Djang, Austin

    2015-08-22

    Thanks to the versatility of the beam lines at SSRL, research there is varied and benefits multiple fields. Each experiment requires a particular set of experimental equipment, which in turn requires its own particular assembly. As such, new engineering challenges arise from each new experiment. My role as an engineering intern has been to help solve these challenges by designing and assembling experimental devices. My first project was to design a heated sample holder, which will be used to investigate the effect of temperature on a sample's x-ray diffraction pattern. My second project was to help set up an imaging test, which involved designing a cooled grating holder and assembling multiple positioning stages. My third project was designing a 3D-printed pencil holder for the SSRL workstations.

  10. The suitability of selected multidisciplinary design and optimization techniques to conceptual aerospace vehicle design

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1992-01-01

    Four methods for preliminary aerospace vehicle design are reviewed. The first three methods (classical optimization, system decomposition, and system sensitivity analysis (SSA)) employ numerical optimization techniques and numerical gradients to feed back changes in the design variables. The optimum solution is determined by stepping through a series of designs toward a final solution. Of these three, SSA is argued to be the most applicable to a large-scale highly coupled vehicle design where an accurate minimum of an objective function is required. With SSA, several tasks can be performed in parallel. The techniques of classical optimization and decomposition can be included in SSA, resulting in a very powerful design method. The Taguchi method is more of a 'smart' parametric design method that analyzes variable trends and interactions over designer specified ranges with a minimum of experimental analysis runs. Its advantages are its relative ease of use, ability to handle discrete variables, and ability to characterize the entire design space with a minimum of analysis runs.

  11. Designation and Implementation of Microcomputer Principle and Interface Technology Virtual Experimental Platform Website

    NASA Astrophysics Data System (ADS)

    Gao, JinYue; Tang, Yin

    This paper discusses the design and implementation approach for the Microcomputer Principle and Interface Technology virtual experimental platform website. The instructional design of the platform follows student-oriented constructivist learning theory, and the overall structure is shaped by the teaching aims, the teaching content and the modes of interaction. The production and development of the virtual experiment platform should take the characteristics of network operation fully into account and adopt appropriate technologies to improve the effectiveness and speed of the web-based application.

  12. Active vibration absorber for the CSI evolutionary model - Design and experimental results. [Controls Structures Interaction

    NASA Technical Reports Server (NTRS)

    Bruner, Anne M.; Belvin, W. Keith; Horta, Lucas G.; Juang, Jer-Nan

    1991-01-01

    The development of control of large flexible structures technology must include practical demonstrations to aid in the understanding and characterization of controlled structures in space. To support this effort, a testbed facility has been developed to study practical implementation of new control technologies under realistic conditions. The paper discusses the design of a second order, acceleration feedback controller which acts as an active vibration absorber. This controller provides guaranteed stability margins for collocated sensor/actuator pairs in the absence of sensor/actuator dynamics and computational time delay. Experimental results in the presence of these factors are presented and discussed. The robustness of this design under model uncertainty is demonstrated.

  13. An experimental investigation of two 15 percent-scale wind tunnel fan-blade designs

    NASA Technical Reports Server (NTRS)

    Signor, David B.

    1988-01-01

    An experimental 3-D investigation of two fan-blade designs was conducted. The fan blades tested were 15 percent-scale models of blades to be used in the fan drive of the National Full-Scale Aerodynamic Complex at NASA Ames Research Center. NACA 65- and modified NACA 65-series sections incorporated increased thickness on the upper surface, between the leading edge and the one-half-chord position. Twist and taper were the same for both blade designs. The fan blades with modified 65-series sections were found to have an increased stall margin when they were compared with the unmodified blades.

  14. Theoretical and Experimental Investigation of Mufflers with Comments on Engine-Exhaust Muffler Design

    NASA Technical Reports Server (NTRS)

    Davis, Don D., Jr.; Stokes, George M.; Moore, Dewey; Stevens, George L., Jr.

    1954-01-01

    Equations are presented for the attenuation characteristics of single-chamber and multiple-chamber mufflers of both the expansion-chamber and resonator types, for tuned side-branch tubes, and for the combination of an expansion chamber with a resonator. Experimental curves of attenuation plotted against frequency are presented for 77 different mufflers with a reflection-free tailpipe termination. The experiments were made at room temperature without flow; the sound source was a loud-speaker. A method is given for including the tailpipe reflections in the calculations. Experimental attenuation curves are presented for four different muffler-tailpipe combinations, and the results are compared with the theory. The application of the theory to the design of engine-exhaust mufflers is discussed, and charts are included for the assistance of the designer.
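
    For orientation, the plane-wave transmission loss of a single expansion chamber follows the standard textbook expression TL = 10*log10[1 + 0.25*(m - 1/m)^2 * sin^2(kL)], with m the expansion area ratio, k the wavenumber and L the chamber length; this is consistent in spirit with the report's attenuation equations but is not copied from them. A small numerical sketch (all dimensions hypothetical):

    ```python
    import numpy as np

    c = 343.0                     # speed of sound at room temperature, m/s (no flow, as in the tests)
    m, L = 9.0, 0.40              # expansion area ratio and chamber length (hypothetical)
    f = np.linspace(20, 1000, 5)  # frequencies, Hz
    k = 2 * np.pi * f / c
    TL = 10 * np.log10(1 + 0.25 * (m - 1/m)**2 * np.sin(k * L)**2)
    print(np.round(TL, 1))        # attenuation peaks where k*L is an odd multiple of pi/2
    ```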

  15. Design and Experimental Results for the S825 Airfoil; Period of Performance: 1998-1999

    SciTech Connect

    Somers, D. M.

    2005-01-01

    A 17%-thick, natural-laminar-flow airfoil, the S825, for the 75% blade radial station of 20- to 40-meter, variable-speed and variable-pitch (toward feather), horizontal-axis wind turbines has been designed and analyzed theoretically and verified experimentally in the NASA Langley Low-Turbulence Pressure Tunnel. The two primary objectives of high maximum lift, relatively insensitive to roughness, and low profile drag have been achieved. The airfoil exhibits a rapid trailing-edge stall, which does not meet the design goal of a docile stall. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results generally show good agreement.

  16. Pliocene Model Intercomparison Project (PlioMIP): experimental design and boundary conditions (Experiment 2)

    USGS Publications Warehouse

    Haywood, A.M.; Dowsett, H.J.; Robinson, M.M.; Stoll, D.K.; Dolan, A.M.; Lunt, D.J.; Otto-Bliesner, B.; Chandler, M.A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere-only climate models. The second (Experiment 2) utilises fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  17. Optimal design and experimental analyses of a new micro-vibration control payload-platform

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqing; Yang, Bintang; Zhao, Long; Sun, Xiaofen

    2016-07-01

    This paper presents a new payload platform for precision devices that can isolate complex space micro-vibrations in the low-frequency range below 5 Hz. The novel payload platform, equipped with smart-material actuators, is investigated and designed through an optimization strategy based on the minimum energy loss rate, with the aim of achieving high drive efficiency and reducing the effect of magnetic-circuit nonlinearity. Then, the dynamic model of the driving element is established using the Lagrange method, and the performance of the designed payload platform is further discussed through the combination of a controlled auto-regressive moving average (CARMA) model with a modified generalized predictive control (MGPC) algorithm. Finally, an experimental prototype is developed and tested. The experimental results demonstrate that the payload platform has an impressive potential for micro-vibration isolation.

  18. Pliocene Model Intercomparison Project (PlioMIP): Experimental Design and Boundary Conditions (Experiment 2)

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Dowsett, H. J.; Robinson, M. M.; Stoll, D. K.; Dolan, A. M.; Lunt, D. J.; Otto-Bliesner, B.; Chandler, M. A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere only climate models. The second (Experiment 2) utilizes fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  19. Self-healing in segmented metallized film capacitors: Experimental and theoretical investigations for engineering design

    NASA Astrophysics Data System (ADS)

    Belko, V. O.; Emelyanov, O. A.

    2016-01-01

    A significant increase in the efficiency of modern metallized film capacitors has been achieved by the application of special segmented nanometer-thick electrodes. The proper design of the electrode segmentation guarantees the best efficiency of the capacitor's self-healing (SH) ability. Meanwhile, the reported theoretical and experimental results have not led to the commonly accepted model of the SH process, since the experimental SH dissipated energy value is several times higher than the calculated one. In this paper, we show that the difference is caused by the heat outflow into polymer film. Based on this, a mathematical model of the metallized electrode destruction is developed. These insights in turn are leading to a better understanding of the SH development. The adequacy of the model is confirmed by both the experiments and the numerical calculations. A procedure of optimal segmented electrode design is offered.

  20. Design and construction of an experimental pervious paved parking area to harvest reusable rainwater.

    PubMed

    Gomez-Ullate, E; Novo, A V; Bayon, J R; Hernandez, Jorge R; Castro-Fresno, Daniel

    2011-01-01

    Pervious pavements are sustainable urban drainage systems already known as rainwater infiltration techniques which reduce runoff formation and diffuse pollution in cities. The present research is focused on the design and construction of an experimental parking area composed of 45 pervious pavement parking bays. Every pervious pavement was experimentally designed to store rainwater and to measure the level and quality of the stored water over time. Six different pervious surfaces are combined with four different geotextiles in order to test which materials best maintain the quality of the stored rainwater over time under the specific weather conditions of the north of Spain. The aim of this research was to obtain pervious pavements that simultaneously offer a useful urban service and harvest rainwater of sufficient quality for non-potable uses. PMID:22020491

  1. Computational/experimental analysis of three low sonic boom configurations with design modifications

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.

    1992-01-01

    The Euler code, designated AIRPLANE, which uses an unstructured tetrahedral mesh was used to compute near-field sonic boom pressure signatures on three modern low sonic boom configurations: the Mach 2, Mach 3, and Haglund models. The TEAM code which uses a multi-zoned structured grid was used to calculate pressure signatures for the Mach 2 model. The computational pressure signatures for the Mach 2 and Mach 3 models are compared with recent experimental data. The computed pressure signatures were extracted at distances less than one body length below the configuration and extrapolated to the experimental distance. The Mach 2 model was found to have larger overpressures off-ground-track than on-ground-track in both computational and experimental results. The correlations with the experiment were acceptable where the signatures were not contaminated by instrumentation and model-support hardware. AIRPLANE was used to study selected modifications to improve the overpressures of the Mach 2 model.

  2. Fermilab D-0 Experimental Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1987-10-31

    This report is developed as part of the Fermilab D-0 Experimental Facility Project Title II Design Documentation Update. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis.

  3. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Traditional factorial designs for evaluating interactions among chemicals in a mixture are prohibitive when the number of chemicals is large. However, recent advances in statistically-based experimental design have made it easier to evaluate interactions involving many chemicals...

  4. Experimental evaluation of the Battelle accelerated test design for the solar array at Mead, Nebraska

    NASA Technical Reports Server (NTRS)

    Frickland, P. O.; Repar, J.

    1982-01-01

    A previously developed test design for accelerated aging of photovoltaic modules was experimentally evaluated. The studies included a review of relevant field experience, environmental chamber cycling of full size modules, and electrical and physical evaluation of the effects of accelerated aging during and after the tests. The test results indicated that thermally induced fatigue of the interconnects was the primary mode of module failure as measured by normalized power output. No chemical change in the silicone encapsulant was detectable after 360 test cycles.

  5. Design and experimental characterization of a nonintrusive measurement system of rotating blade vibration

    SciTech Connect

    Nava, P.; Paone, N.; Rossi, G.L.; Tomasini, E.P.

    1994-07-01

    A measurement system for nonintrusive monitoring of rotating blade vibration in turbomachines based on fiber optic sensors is presented. The design of the whole system is discussed; the development of special purpose sensors, their interfacing to the data acquisition system, and the signal processing are outlined. The processing algorithms are tested by software simulation for several possible blade vibrations. Experimental tests performed on different bladed rotors are presented. Results are compared to simultaneous strain gage measurements.

  6. Experimental evaluation of the Battelle accelerated test design for the solar array at Mead, Nebraska

    NASA Astrophysics Data System (ADS)

    Frickland, P. O.; Repar, J.

    1982-04-01

    A previously developed test design for accelerated aging of photovoltaic modules was experimentally evaluated. The studies included a review of relevant field experience, environmental chamber cycling of full size modules, and electrical and physical evaluation of the effects of accelerated aging during and after the tests. The test results indicated that thermally induced fatigue of the interconnects was the primary mode of module failure as measured by normalized power output. No chemical change in the silicone encapsulant was detectable after 360 test cycles.

  7. Design and application of FBG strain experimental apparatus in high temperature

    NASA Astrophysics Data System (ADS)

    Xia, Zhongcheng; Liu, Yueming; Gao, Xiaoliang

    2014-09-01

    Fiber Bragg grating (FBG) sensing technology has many applications and is widely used for the detection of temperature, strain, and other quantities. At present, the application of FBG sensors is limited to temperatures below 200°C owing to the so-called high-temperature erasing phenomenon. Strain detection above 200°C is still an engineering challenge, since high temperature adversely affects the sensor, the testing equipment, and the test data; effective measurement apparatus is therefore needed to ensure measurement accuracy above 200°C, but no suitable high-temperature FBG strain experimental apparatus has been available to date. In this paper, a high-temperature FBG strain experimental apparatus is designed to detect strain at high temperature. To verify the working condition of the high-temperature FBG strain sensor, an FBG strain sensing experiment is presented. The high-temperature FBG strain sensor was installed in the apparatus, the internal temperature of the apparatus was accurately controlled from -20 to 300°C, strain loading was applied by counterweights, and the data were recorded with electrical-resistance strain measurement and an optical sensing interrogator. The experimental results show that the apparatus works properly above 200°C. The designed apparatus is demonstrated to be suitable for high-temperature strain gauges, FBG strain sensors, and similar devices operating at temperatures of -20 to 300°C, strains of -1500 to +1500 με, and a wavelength resolution of 1 pm.

  8. Experimental design and analysis for accelerated degradation tests with Li-ion cells.

    SciTech Connect

    Doughty, Daniel Harvey; Thomas, Edward Victor; Jungst, Rudolph George; Roth, Emanuel Peter

    2003-08-01

    This document describes a general protocol (involving both experimental and data analytic aspects) that is designed to be a roadmap for rapidly obtaining a useful assessment of the average lifetime (at some specified use conditions) that might be expected from cells of a particular design. The proposed experimental protocol involves a series of accelerated degradation experiments. Through the acquisition of degradation data over time specified by the experimental protocol, an unambiguous assessment of the effects of accelerating factors (e.g., temperature and state of charge) on various measures of the health of a cell (e.g., power fade and capacity fade) will result. In order to assess cell lifetime, it is necessary to develop a model that accurately predicts degradation over a range of the experimental factors. In general, it is difficult to specify an appropriate model form without some preliminary analysis of the data. Nevertheless, assuming that the aging phenomenon relates to a chemical reaction with simple first-order rate kinetics, a data analysis protocol is also provided to construct a useful model that relates performance degradation to the levels of the accelerating factors. This model can then be used to make an accurate assessment of the average cell lifetime. The proposed experimental and data analysis protocols are illustrated with a case study involving the effects of accelerated aging on the power output from Gen-2 cells. For this case study, inadequacies of the simple first-order kinetics model were observed. However, a more complex model allowing for the effects of two concurrent mechanisms provided an accurate representation of the experimental data.
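
    As a concrete illustration of the kind of analysis protocol described above, the sketch below fits a first-order kinetics degradation model with Arrhenius temperature dependence to hypothetical power-fade data and extrapolates to a use condition. The data, fitted rate parameters, and the 20% fade threshold are illustrative assumptions, not values from the report.

```python
# Hedged sketch (not the report's protocol code): fit a first-order kinetics
# degradation model with Arrhenius temperature dependence to hypothetical
# power-fade data, then extrapolate to the use condition.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J/(mol K)

def power_fade(X, log10_k0, Ea_kJ):
    """fade(t, T) = 1 - exp(-k(T) * t) with k(T) = k0 * exp(-Ea / (R * T))."""
    t, T = X                                    # t in weeks, T in kelvin
    k = 10.0**log10_k0 * np.exp(-Ea_kJ * 1e3 / (R * T))
    return 1.0 - np.exp(-k * t)

# Hypothetical accelerated-aging data: time (weeks), temperature (K), fractional power fade
t = np.array([4, 8, 16, 32, 4, 8, 16, 32], dtype=float)
T = np.array([298, 298, 298, 298, 333, 333, 333, 333], dtype=float)
fade = np.array([0.010, 0.020, 0.039, 0.077, 0.080, 0.150, 0.280, 0.480])

(log10_k0, Ea_kJ), _ = curve_fit(power_fade, (t, T), fade, p0=[6.0, 50.0])

# Time to 20% power fade at the 25 C use condition
k_use = 10.0**log10_k0 * np.exp(-Ea_kJ * 1e3 / (R * 298.0))
print(f"estimated time to 20% power fade at 25 C: {-np.log(0.8) / k_use:.0f} weeks")
```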

  9. Life on rock. Scaling down biological weathering in a new experimental design at Biosphere-2

    NASA Astrophysics Data System (ADS)

    Zaharescu, D. G.; Dontsova, K.; Burghelea, C. I.; Chorover, J.; Maier, R.; Perdrial, J. N.

    2012-12-01

    Biological colonization and weathering of bedrock on Earth is a major driver of landscape and ecosystem development, and its effects reach into other major systems such as climate and the geochemical cycles of elements. In order to understand how microbe-plant-mycorrhizae communities interact with bedrock in the first phases of mineral weathering, we developed a novel experimental design in the Desert Biome at Biosphere-2, University of Arizona (USA). This presentation focuses on the development of the experimental setup. Briefly, six enclosed modules were designed to hold 288 experimental columns that will accommodate 4 rock types and 6 biological treatments. Each module is developed on 3 levels. A lower volume, able to withstand the weight of both the rock material and the rest of the structure, accommodates the sampling elements. A middle volume houses the experimental columns in a dark chamber. A clear upper section forms the habitat exposed to sunlight. This volume is completely sealed from the exterior, allowing full control of its air and water parameters. All modules are connected in parallel to a double air-purification system that delivers a permanent air flow. This setup is expected to provide a model experiment able to test important processes in the rock-life interaction at grain-to-molecular scale.

  10. A Resampling Based Approach to Optimal Experimental Design for Computer Analysis of a Complex System

    SciTech Connect

    Rutherford, Brian

    1999-08-04

    The investigation of a complex system is often performed using computer generated response data supplemented by system and component test results where possible. Analysts rely on an efficient use of limited experimental resources to test the physical system, evaluate the models and to assure (to the extent possible) that the models accurately simulate the system under investigation. The general problem considered here is one where only a restricted number of system simulations (or physical tests) can be performed to provide additional data necessary to accomplish the project objectives. The levels of variables used for defining input scenarios, for setting system parameters and for initializing other experimental options must be selected in an efficient way. The use of computer algorithms to support experimental design in complex problems has been a topic of recent research in the areas of statistics and engineering. This paper describes a resampling-based approach to formulating this design. An example is provided illustrating in two dimensions how the algorithm works and indicating its potential on larger problems. The results show that the proposed approach has characteristics desirable in an algorithmic approach on the simple examples. Further experimentation is needed to evaluate its performance on larger problems.

  11. Advanced Laboratory at Texas State University: Error Analysis, Experimental Design, and Research Experience for Undergraduates

    NASA Astrophysics Data System (ADS)

    Ventrice, Carl

    2009-04-01

    Physics is an experimental science. In other words, all physical laws are based on experimentally observable phenomena. Therefore, it is important that all physics students have an understanding of the limitations of certain experimental techniques and the errors associated with a particular measurement. The students in the Advanced Laboratory class at Texas State perform three detailed laboratory experiments during the semester and give an oral presentation at the end of the semester on a scientific topic of their choosing. The laboratory reports are written in the format of a ``Physical Review'' journal article. The experiments are chosen to give the students a detailed background in error analysis and experimental design. For instance, the first experiment performed in the spring 2009 semester is entitled Measurement of the local acceleration due to gravity in the RFM Technology and Physics Building. The goal of this experiment is to design and construct an instrument that is to be used to measure the local gravitational field in the Physics Building to an accuracy of ±0.005 m/s^2. In addition, at least one of the experiments chosen each semester involves the use of the research facilities within the physics department (e.g., microfabrication clean room, surface science lab, thin films lab, etc.), which gives the students experience working in a research environment.

  12. JEAB Research Over Time: Species Used, Experimental Designs, Statistical Analyses, and Sex of Subjects.

    PubMed

    Zimmermann, Zachary J; Watkins, Erin E; Poling, Alan

    2015-10-01

    We examined the species used as subjects in every article published in the Journal of the Experimental Analysis of Behavior (JEAB) from 1958 through 2013. We also determined the sex of subjects in every article with human subjects (N = 524) and in an equal number of randomly selected articles with nonhuman subjects, as well as the general type of experimental designs used. Finally, the percentage of articles reporting an inferential statistic was determined at 5-year intervals. In all, 35,317 subjects were studied in 3,084 articles; pigeons ranked first and humans second in number used. Within-subject experimental designs were more popular than between-subjects designs regardless of whether human or nonhuman subjects were studied but were used in a higher percentage of articles with nonhumans (75.4 %) than in articles with humans (68.2 %). The percentage of articles reporting an inferential statistic has increased over time, and more than half of the articles published in 2005 and 2010 reported one. Researchers who publish in JEAB frequently depart from Skinner's preferred research strategy, but it is not clear whether such departures are harmful. Finally, the sex of subjects was not reported in a sizable percentage of articles with both human and nonhuman subjects. This is an unfortunate oversight. PMID:27606171

  13. Application of experimental design methodology in development and optimization of drug release method.

    PubMed

    Kincl, M; Turk, S; Vrecer, F

    2005-03-01

    The aim of our research was to apply experimental design methodology in the development and optimization of drug release methods. Diclofenac sodium (2-[(2,6-dichlorophenyl)amino]benzeneacetic acid monosodium salt) was selected as a model drug and Naklofen retard prolonged release tablets, containing 100 mg of diclofenac sodium, were chosen as a model prolonged release system. On the basis of previous results, a three-level three-factor Box-Behnken experimental design was used to characterize and optimize three physicochemical parameters, i.e. rotation speed of the stirring elements, pH, and ionic strength of the dissolution medium, affecting the release of diclofenac sodium from the tablets. The chosen dependent variables (responses) were the cumulative percentages of dissolved diclofenac sodium at 2, 6, 12 and 24 h. For estimation of the coefficients in the approximating polynomial function, the least-squares regression method was applied. Afterwards, the reliability of the model was verified using analysis of variance (ANOVA). The significance of the model factors was assessed by Student's t-test. For investigation of the shape of the predicted response surfaces and for model optimization, canonical analysis was applied. Our study proved that experimental design methodology can be applied efficiently for the characterization and optimization of analytical parameters affecting drug release and that it is an economical way of obtaining the maximum amount of information in a short period of time and with the fewest number of experiments. PMID:15707730
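
    To make the workflow concrete, the following sketch builds a three-factor Box-Behnken design in coded units and fits a quadratic response-surface model by ordinary least squares, in the spirit of the approach described above. The factor ordering and the response values are hypothetical, not the study's data.

```python
# Hedged sketch: a three-factor Box-Behnken design in coded units (-1, 0, +1)
# and a quadratic response-surface fit by ordinary least squares.
# Factor names and response values are hypothetical.
import itertools
import numpy as np

# Box-Behnken for 3 factors: all (+/-1, +/-1) pairs with the third factor at 0,
# plus replicated centre points.
runs = []
for i, j in itertools.combinations(range(3), 2):
    for a, b in itertools.product((-1, 1), repeat=2):
        run = [0, 0, 0]
        run[i], run[j] = a, b
        runs.append(run)
runs += [[0, 0, 0]] * 3                       # centre points
X = np.array(runs, dtype=float)               # columns: rotation speed, pH, ionic strength (coded)

# Hypothetical cumulative % dissolved at 12 h for each of the 15 runs
y = np.array([62, 68, 70, 75, 60, 71, 65, 77, 63, 69, 66, 74, 72, 71, 73], dtype=float)

# Quadratic model matrix: intercept, linear, two-factor interaction, and squared terms
cols = [np.ones(len(X))]
cols += [X[:, k] for k in range(3)]
cols += [X[:, a] * X[:, b] for a, b in itertools.combinations(range(3), 2)]
cols += [X[:, k] ** 2 for k in range(3)]
M = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(M, y, rcond=None)  # least-squares coefficient estimates
print("coefficients:", np.round(beta, 2))
```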

  14. Numerical and experimental hydrodynamic analysis of suction cup bio-logging tag designs for marine mammals

    NASA Astrophysics Data System (ADS)

    Murray, Mark; Shorter, Alex; Howle, Laurens; Johnson, Mark; Moore, Michael

    2012-11-01

    The improvement and miniaturization of sensing technologies has made bio-logging tags, utilized for the study of marine mammal behavior, more practical. These sophisticated sensing packages require a housing which protects the electronics from the environment and provides a means of attachment to the animal. The hydrodynamic forces on these housings can inadvertently remove the tag or adversely affect the behavior or energetics of the animal. A modification to the original design of a suction cup bio-logging tag housing was desired to minimize the adverse forces. In this work, hydrodynamic loading of two suction cup tag designs, the original and the modified design, was analyzed using computational fluid dynamics (CFD) models and validated experimentally. Overall, the simulation and experimental results demonstrated that a tag housing that minimized geometric disruptions to the flow reduced drag forces, and that a tag housing with a small frontal cross-sectional area close to the attachment surface reduced lift forces. Preliminary results from experimental work with a common dolphin cadaver indicate that the suction cups used to attach the tags to the animal provide sufficient attachment force to resist failure at the drag and lift forces predicted in 10 m/s flow.

  15. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information we get about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines the use of linearized inverse theory and a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration that motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on our target. In order to improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics or its potential for reservoir monitoring.
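
    A minimal sketch of the linearized design-quality measure is given below: candidate receiver subsets are scored with a D-type criterion, log det(JᵀJ), computed from a sensitivity (Jacobian) matrix. The random Jacobian, subset size, and exhaustive search are stand-ins for the forward modelling and the NSGA-II search used in the study.

```python
# Hedged sketch: scoring candidate survey designs with a linearized D-criterion,
# log det(J^T J), where J holds the sensitivities of the data to the model
# parameters. The Jacobian here is a random stand-in; in practice it would come
# from a 1-D forward/sensitivity code.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_candidate_receivers, n_params = 20, 4
J_full = rng.normal(size=(n_candidate_receivers, n_params))  # sensitivity of each receiver's datum

def d_criterion(rows):
    """log-determinant of J^T J for the receivers selected by `rows`."""
    J = J_full[list(rows)]
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return logdet if sign > 0 else -np.inf

# Exhaustively score all 6-receiver subsets (a GA such as NSGA-II would replace
# this loop for realistically large candidate sets or multiple objectives).
best = max(itertools.combinations(range(n_candidate_receivers), 6), key=d_criterion)
print("best 6-receiver layout:", best, "log det(J^T J) =", round(d_criterion(best), 2))
```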

  16. Beyond the bucket: testing the effect of experimental design on rate and sequence of decay

    NASA Astrophysics Data System (ADS)

    Gabbott, Sarah; Murdock, Duncan; Purnell, Mark

    2016-04-01

    Experimental decay has revealed the potential for profound biases in our interpretations of exceptionally preserved fossils, with non-random sequences of character loss distorting the position of fossil taxa in phylogenetic trees. By characterising these sequences we can rewind this distortion and make better-informed interpretations of the affinity of enigmatic fossil taxa. Equally, the rate of character loss is crucial for estimating the preservation potential of phylogenetically informative characters, and for revealing the mechanisms of preservation themselves. However, experimental decay has been criticised for poorly modelling 'real' conditions, and dismissed as unsophisticated 'bucket science'. Here we test the effect of differing experimental parameters on the rate and sequence of decay. By doing so, we can test the assumption that the results of decay experiments are applicable to informing interpretations of exceptionally preserved fossils from diverse preservational settings. The results of our experiments demonstrate the validity of using the sequence of character loss as a phylogenetic tool, and shed light on the extent to which environment must be considered before making decay-informed interpretations or reconstructing taphonomic pathways. With careful consideration of experimental design, driven by testable hypotheses, decay experiments are robust and informative; experimental taphonomy needn't kick the bucket just yet.

  17. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    SciTech Connect

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-31

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can itself be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment, and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment

  18. Experimental design and Bayesian networks for enhancement of delta-endotoxin production by Bacillus thuringiensis.

    PubMed

    Ennouri, Karim; Ayed, Rayda Ben; Hassen, Hanen Ben; Mazzarello, Maura; Ottaviani, Ennio

    2015-12-01

    Bacillus thuringiensis (Bt) is a Gram-positive bacterium. The entomopathogenic activity of Bt is related to the existence of the crystal consisting of protoxins, also called delta-endotoxins. In order to optimize and explain the production of delta-endotoxins of Bacillus thuringiensis kurstaki, we studied seven medium components: soybean meal, starch, KH₂PO₄, K₂HPO₄, FeSO₄, MnSO₄, and MgSO₄, and their relationships with the concentration of delta-endotoxins using an experimental design (Plackett-Burman design) and Bayesian network modelling. The effects of the ingredients of the culture medium on delta-endotoxin production were estimated. The developed model showed that different medium components are important for the Bacillus thuringiensis fermentation. The most important factors influencing the production of delta-endotoxins are FeSO₄, K₂HPO₄, starch and soybean meal. Indeed, it was found that soybean meal, K₂HPO₄, KH₂PO₄ and starch showed a positive effect on delta-endotoxin production, whereas FeSO₄ and MnSO₄ showed an opposite effect. The developed model, based on Bayesian techniques, can automatically learn emerging patterns in the data to serve in the prediction of delta-endotoxin concentrations. The model constructed in the present study implies that experimental design (Plackett-Burman design) combined with Bayesian network methods could be used to identify the variables affecting delta-endotoxin variation. PMID:26689874
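
    The sketch below shows one way to build a two-level screening design of the Plackett-Burman type for seven factors (here via a Sylvester Hadamard matrix, which coincides with the Plackett-Burman construction when the run count is a power of two) and to estimate main effects. The run count, factor assignment, and response values are assumptions for illustration, not the study's design or data.

```python
# Hedged sketch: an 8-run, 7-factor two-level screening design built from a
# Sylvester Hadamard matrix, plus main-effect estimates. The factor order and
# the delta-endotoxin responses below are hypothetical.
import numpy as np

H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)      # 8 x 8 Hadamard matrix
design = H8[:, 1:]                     # drop the all-ones column -> 8 runs x 7 factors (+1/-1)

factors = ["soybean meal", "starch", "KH2PO4", "K2HPO4", "FeSO4", "MnSO4", "MgSO4"]
y = np.array([410, 350, 470, 380, 520, 300, 445, 365], dtype=float)  # delta-endotoxin, hypothetical

# Main effect of each factor = mean response at +1 minus mean response at -1
effects = {f: y[design[:, k] == 1].mean() - y[design[:, k] == -1].mean()
           for k, f in enumerate(factors)}
for f, e in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{f:12s} effect = {e:+.1f}")
```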

  19. The Langley Research Center CSI phase-0 evolutionary model testbed-design and experimental results

    NASA Technical Reports Server (NTRS)

    Belvin, W. K.; Horta, Lucas G.; Elliott, K. B.

    1991-01-01

    A testbed for the development of Controls Structures Interaction (CSI) technology is described. The design philosophy, capabilities, and early experimental results are presented to introduce some of the ongoing CSI research at NASA-Langley. The testbed, referred to as the Phase 0 version of the CSI Evolutionary model (CEM), is the first stage of model complexity designed to show the benefits of CSI technology and to identify weaknesses in current capabilities. Early closed loop test results have shown non-model based controllers can provide an order of magnitude increase in damping in the first few flexible vibration modes. Model based controllers for higher performance will need to be robust to model uncertainty as verified by System ID tests. Data are presented that show finite element model predictions of frequency differ from those obtained from tests. Plans are also presented for evolution of the CEM to study integrated controller and structure design as well as multiple payload dynamics.

  20. Conceptual design of a fast-ion D-alpha diagnostic on experimental advanced superconducting tokamak.

    PubMed

    Huang, J; Heidbrink, W W; Wan, B; von Hellermann, M G; Zhu, Y; Gao, W; Wu, C; Li, Y; Fu, J; Lyu, B; Yu, Y; Shi, Y; Ye, M; Hu, L; Hu, C

    2014-11-01

    To investigate the fast ion behavior, a fast ion D-alpha (FIDA) diagnostic system has been planned and is presently under development on the Experimental Advanced Superconducting Tokamak. The greatest challenges for the design of a FIDA diagnostic are its extremely low signal intensity levels, which are usually significantly below the continuum radiation level and several orders of magnitude below the bulk-ion thermal charge-exchange feature. Moreover, an overlying Motional Stark Effect (MSE) feature in exactly the same wavelength range can interfere. A simulation-of-spectra code is used here to guide the design and evaluate the diagnostic performance. The details of the design parameters and hardware are presented. PMID:25430314

  1. A three-phase series-parallel resonant converter -- analysis, design, simulation, and experimental results

    SciTech Connect

    Bhat, A.K.S.; Zheng, R.L.

    1996-07-01

    A three-phase dc-to-dc series-parallel resonant converter is proposed and its operating modes for a 180° wide gating pulse scheme are explained. A detailed analysis of the converter using a constant current model and the Fourier series approach is presented. Based on the analysis, design curves are obtained and a design example of a 1-kW converter is given. SPICE simulation results for the designed converter and experimental results for a 500-W converter are presented to verify the performance of the proposed converter under varying load conditions. The converter operates in lagging power factor (PF) mode for the entire load range and requires only a narrow variation in switching frequency to adequately regulate the output power.

  2. Randomized block experimental designs can increase the power and reproducibility of laboratory animal experiments.

    PubMed

    Festing, Michael F W

    2014-01-01

    Randomized block experimental designs have been widely used in agricultural and industrial research for many decades. Usually they are more powerful, have higher external validity, are less subject to bias, and produce more reproducible results than the completely randomized designs typically used in research involving laboratory animals. Reproducibility can be further increased by using time as a blocking factor. These benefits can be achieved at no extra cost. A small experiment investigating the effect of an antioxidant on the activity of a liver enzyme in four inbred mouse strains, which had two replications (blocks) separated by a period of two months, illustrates this approach. The widespread failure to use these designs more widely in research involving laboratory animals has probably led to a substantial waste of animals, money, and scientific resources and slowed down the development of new treatments for human and animal diseases. PMID:25541548
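
    A minimal sketch of the analysis behind such a design is given below: a randomized complete block layout with time as the blocking factor, analysed with a two-way ANOVA that separates the block variation from the treatment comparison. The treatment labels and enzyme-activity values are hypothetical.

```python
# Hedged sketch: two-way ANOVA for a small randomized complete block design,
# with time (two replications two months apart) as the blocking factor, in the
# spirit of the mouse-strain example above. The numbers are hypothetical.
import numpy as np
from scipy.stats import f as f_dist

# rows = treatments (e.g. four inbred strains), columns = blocks (replication 1, replication 2)
y = np.array([[12.1, 13.0],
              [ 9.8, 10.9],
              [14.2, 15.1],
              [11.0, 12.2]])
t, b = y.shape
grand = y.mean()

ss_treat = b * ((y.mean(axis=1) - grand) ** 2).sum()   # between-treatment sum of squares
ss_block = t * ((y.mean(axis=0) - grand) ** 2).sum()   # between-block sum of squares
ss_total = ((y - grand) ** 2).sum()
ss_error = ss_total - ss_treat - ss_block

df_treat, df_block, df_error = t - 1, b - 1, (t - 1) * (b - 1)
F = (ss_treat / df_treat) / (ss_error / df_error)
p = f_dist.sf(F, df_treat, df_error)
print(f"treatment F({df_treat},{df_error}) = {F:.2f}, p = {p:.4f}")
```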

  3. Active vibration absorber for CSI evolutionary model: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Bruner, Anne M.; Belvin, W. Keith; Horta, Lucas G.; Juang, Jer-Nan

    1991-01-01

    The development of control technology for large flexible structures must include practical demonstrations to aid in the understanding and characterization of controlled structures in space. To support this effort, a testbed facility was developed to study practical implementation of new control technologies under realistic conditions. The design is discussed of a second-order, acceleration feedback controller which acts as an active vibration absorber. This controller provides guaranteed stability margins for collocated sensor/actuator pairs in the absence of sensor/actuator dynamics and computational time delay. The primary performance objective considered is damping augmentation of the first nine structural modes. Comparison of experimental and predicted closed-loop damping is presented, including test and simulation time histories for open- and closed-loop cases. Although the simulation and test results are not in full agreement, the robustness of this design under model uncertainty is demonstrated. The basic advantage of this second-order controller design is that the stability of the controller is model independent.
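
    The sketch below illustrates the flavour of this controller on a single flexible mode: a second-order compensator driven by the collocated acceleration feeds a force back to the structure, the closed loop remains stable for any positive gain, and the damping of the structural pole increases. The mode frequency, filter tuning, and gain are assumptions for illustration, not the CEM values.

```python
# Hedged sketch: one flexible mode with a collocated second-order
# acceleration-feedback compensator (an "active vibration absorber").
# Compensator: eta'' + 2*zf*wf*eta' + wf^2*eta = x''  (driven by acceleration)
# Feedback force per unit mass: u = -g * wf^2 * eta
# Closed-loop characteristic polynomial:
#   (s^2 + 2*zf*wf*s + wf^2)(s^2 + wn^2) + g*wf^2*s^2 = 0,
# which is stable for any gain g > 0 (Routh-Hurwitz), matching the
# guaranteed-stability property described above.
import numpy as np

wn = 2 * np.pi * 1.5        # structural mode at 1.5 Hz, undamped (hypothetical)
wf, zf = wn, 0.3            # compensator tuned to the mode, 30% filter damping

def min_damping_ratio(g):
    """Smallest damping ratio among the oscillatory closed-loop poles."""
    poly = np.polymul([1.0, 2 * zf * wf, wf**2], [1.0, 0.0, wn**2])
    poly = np.polyadd(poly, [g * wf**2, 0.0, 0.0])   # adds g*wf^2*s^2
    roots = np.roots(poly)
    osc = roots[roots.imag > 1e-9]                   # one pole per complex pair
    return min(-r.real / abs(r) for r in osc)

print(f"open loop   (g = 0.0): min damping ratio = {min_damping_ratio(0.0):.3f}")
print(f"closed loop (g = 0.5): min damping ratio = {min_damping_ratio(0.5):.3f}")
```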

  4. Taguchi approach for co-gasification optimization of torrefied biomass and coal.

    PubMed

    Chen, Wei-Hsin; Chen, Chih-Jung; Hung, Chen-I

    2013-09-01

    This study employs the Taguchi method to approach the optimum co-gasification operation of torrefied biomass (eucalyptus) and coal in an entrained flow gasifier. The cold gas efficiency is adopted as the performance index of co-gasification. The influences of six parameters, namely, the biomass blending ratio, oxygen-to-fuel mass ratio (O/F ratio), biomass torrefaction temperature, gasification pressure, steam-to-fuel mass ratio (S/F ratio), and inlet temperature of the carrier gas, on the performance of co-gasification are considered. The analysis of the signal-to-noise ratio suggests that the O/F ratio is the most important factor in determining the performance and the appropriate O/F ratio is 0.7. The performance is also significantly affected by biomass along with torrefaction, where a torrefaction temperature of 300°C is sufficient to upgrade eucalyptus. According to the recommended operating conditions, the values of cold gas efficiency and carbon conversion at the optimum co-gasification are 80.99% and 94.51%, respectively. PMID:23907063
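
    As an illustration of the signal-to-noise analysis referred to above, the sketch below computes larger-the-better S/N ratios for cold gas efficiency over a standard L9 orthogonal array and ranks factor effects by the range of their level means. Only four factors are shown and the efficiencies are hypothetical; the paper's actual array and data are not reproduced.

```python
# Hedged sketch: Taguchi-style analysis of a larger-the-better response (cold
# gas efficiency) on a standard L9(3^4) orthogonal array. Factor order and
# efficiency values are hypothetical.
import numpy as np

# L9 orthogonal array, levels coded 1..3; columns (illustrative order):
# O/F ratio, blending ratio, torrefaction temperature, gasification pressure
L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])
eta_cg = np.array([68.0, 71.5, 73.0, 76.5, 79.0, 77.5, 80.5, 79.5, 78.0])  # cold gas efficiency, %

# Larger-the-better signal-to-noise ratio (one replicate per run here)
sn = -10 * np.log10(1.0 / eta_cg**2)

factors = ["O/F ratio", "blending ratio", "torrefaction T", "pressure"]
for k, name in enumerate(factors):
    level_means = [sn[L9[:, k] == lvl].mean() for lvl in (1, 2, 3)]
    delta = max(level_means) - min(level_means)   # larger delta -> more influential factor
    print(f"{name:15s} S/N by level: {np.round(level_means, 2)}  delta = {delta:.2f}")
```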

  5. Optimization of process parameters for drilled hole quality characteristics during cortical bone drilling using Taguchi method.

    PubMed

    Singh, Gurmeet; Jain, Vivek; Gupta, Dheeraj; Ghai, Aman

    2016-09-01

    Orthopaedic surgery involves drilling of bones so that they can be fixed in their original position. The drilling process used in orthopaedic surgery closely resembles conventional mechanical drilling, and it may well harm the already damaged bone, the surrounding bone tissue and nerves; the peril is not limited to that. There is a real concern that recovery of the drilled region may be impeded to the point that it cannot be sustained over a lifetime. To achieve sustainable orthopaedic surgery, a surgeon must try to control the drilling damage at the time of bone drilling. The area around a hole determines the life of the bone joint, so the region contiguous to the drilled hole must remain intact and retain its properties even after drilling. This study focuses on optimization of drilling parameters, namely rotational speed, feed rate and type of tool, each at three levels, using the Taguchi method with surface roughness and material removal rate as responses. Confirmation experiments were also carried out, and the results were found to lie within the confidence interval. Scanning electron microscopy (SEM) images assisted in obtaining micro-level information on bone damage. PMID:27254280

  6. Using Taguchi method to optimize differential evolution algorithm parameters to minimize workload smoothness index in SALBP

    NASA Astrophysics Data System (ADS)

    Mozdgir, A.; Mahdavi, Iraj; Seyyedi, I.; Shiraqei, M. E.

    2011-06-01

    An assembly line is a flow-oriented production system in which the productive units performing the operations, referred to as stations, are aligned in a serial manner. The assembly line balancing problem arises and has to be solved when an assembly line has to be configured or redesigned. The so-called simple assembly line balancing problem (SALBP), a basic version of the general problem, has attracted the attention of researchers and practitioners of operations research for almost half a century. Four types of objective functions are considered for this kind of problem, and the versions of SALBP may be complemented by a secondary objective that consists of smoothing station loads. Because of its computational complexity and the difficulty of identifying an optimal solution, many heuristics have been proposed for the assembly line balancing problem. In this paper a differential evolution algorithm is developed to minimize the workload smoothness index in SALBP-2, and the algorithm parameters are optimized using the Taguchi method.
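
    A minimal sketch of the parameter-screening idea is shown below using SciPy's differential evolution on a toy continuous objective, with a few Taguchi-style combinations of the mutation factor, crossover rate, and population size. The objective function and the parameter combinations are assumptions; the paper's SALBP-2 encoding and orthogonal array are not reproduced.

```python
# Hedged sketch: screening differential-evolution control parameters over a
# small set of Taguchi-style combinations. A toy continuous objective (the
# Rastrigin function) stands in for the workload smoothness index.
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 5
# columns: mutation factor F, crossover rate CR, population size multiplier
settings = [(0.5, 0.5, 10), (0.5, 0.9, 20), (0.9, 0.5, 20), (0.9, 0.9, 10)]

for F, CR, pop in settings:
    res = differential_evolution(rastrigin, bounds, mutation=F, recombination=CR,
                                 popsize=pop, maxiter=200, seed=1, tol=1e-8)
    print(f"F={F}, CR={CR}, popsize={pop}: best objective = {res.fun:.4f}")
```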

  7. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and non-linear approach (S-criterion). Both approaches generate almost the same input mixture, however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further enhanced when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position one labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $ 120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi

  8. Impact design methods for ceramic components in gas turbine engines

    SciTech Connect

    Song, J.; Cuccio, J.; Kington, H.

    1993-01-01

    Garrett Auxiliary Power Division of Allied-Signal Aerospace Company is developing methods to design ceramic turbine components with improved impact resistance. In an ongoing research effort under the DOE/NASA-funded Advanced Turbine Technology Applications Project (ATTAP), two different modes of impact damage have been identified and characterized: local damage and structural damage. Local impact damage to Si₃N₄ impacted by spherical projectiles usually takes the form of ring and/or radial cracks in the vicinity of the impact point. Baseline data from Si₃N₄ test bars impacted by 1.588-mm (0.0625-in.) diameter NC-132 projectiles indicate that the critical velocity at which the probability of detecting surface cracks is 50 percent equaled 130 m/s (426 ft/sec). A microphysics-based model that assumes damage to be in the form of microcracks has been developed to predict local impact damage. Local stress and strain determine microcrack nucleation and propagation, which in turn alter local stress and strain through modulus degradation. Material damage is quantified by a damage parameter related to the volume fraction of microcracks. The entire computation has been incorporated into the EPIC computer code. Model capability is being demonstrated by simulating instrumented plate impact and particle impact tests. Structural impact damage usually occurs in the form of fast fracture caused by bending stresses that exceed the material strength. The EPIC code has been successfully used to predict radial and axial blade failures from impacts by various size particles. This method is also being used in conjunction with Taguchi experimental methods to investigate the effects of design parameters on turbine blade impact resistance. It has been shown that significant improvement in impact resistance can be achieved by using the configuration recommended by Taguchi methods.

  9. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis

    PubMed Central

    Holgado-Tello, Fco. P.; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A.

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity. PMID:27378991

  10. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis.

    PubMed

    Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity. PMID:27378991

  11. Facility for Advanced Accelerator Experimental Tests at SLAC (FACET) Conceptual Design Report

    SciTech Connect

    Amann, J.; Bane, K.; /SLAC

    2009-10-30

    This Conceptual Design Report (CDR) describes the design of FACET. It will be updated to stay current with the developing design of the facility. This CDR begins as the baseline conceptual design and will evolve into an 'as-built' manual for the completed facility. The Executive Summary, Chapter 1, gives an introduction to the FACET project and describes the salient features of its design. Chapter 2 gives an overview of FACET. It describes the general parameters of the machine and the basic approaches to implementation. The FACET project does not include the implementation of specific scientific experiments, either for plasma wakefield acceleration or for other applications. Nonetheless, enough work has been done to define potential experiments to assure that the facility can meet the requirements of the experimental community. Chapter 3, Scientific Case, describes the planned plasma wakefield and other experiments. Chapter 4, Technical Description of FACET, describes the parameters and design of all technical systems of FACET. FACET uses the first two thirds of the existing SLAC linac to accelerate the beam to about 20 GeV, and compress it with the aid of two chicanes, located in Sector 10 and Sector 20. The Sector 20 area will include a focusing system, the generic experimental area and the beam dump. Chapter 5, Management of Scientific Program, describes the management of the scientific program at FACET. Chapter 6, Environment, Safety and Health and Quality Assurance, describes the existing programs at SLAC and their application to the FACET project. It includes a preliminary analysis of safety hazards and the planned mitigation. Chapter 7, Work Breakdown Structure, describes the structure used for developing the cost estimates, which will also be used to manage the project. The chapter defines the scope of work of each element down to level 3.

  12. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
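
    The sketch below illustrates the maximal-information criterion in its simplest form: candidate observation-well subsets are scored by the sum of squared sensitivities of the observations to the unknown pumping rates, subject to a toy spacing constraint. The random sensitivity matrix, subset size, constraint, and exhaustive search are stand-ins for the POD-reduced groundwater model and the genetic algorithm used in the study.

```python
# Hedged sketch of the maximal-information criterion described above: choose
# the set of observation wells that maximises the sum of squared sensitivities
# of the observations to the unknown pumping rates. The sensitivity matrix is
# a random stand-in for one computed from a (POD-reduced) groundwater model,
# and the small exhaustive search plays the role of the GA.
import itertools
import numpy as np

rng = np.random.default_rng(42)
n_candidate_wells, n_pumping_params = 15, 3
S = rng.normal(size=(n_candidate_wells, n_pumping_params))  # d(head at well i) / d(pumping j)

def information(wells):
    return np.sum(S[list(wells)] ** 2)    # sum of squared sensitivities

def feasible(wells):
    # toy design constraint: selected wells at least 3 index positions apart
    return all(b - a >= 3 for a, b in zip(wells, wells[1:]))

candidates = (w for w in itertools.combinations(range(n_candidate_wells), 4) if feasible(w))
best = max(candidates, key=information)
print("selected wells:", best, "criterion =", round(information(best), 2))
```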

  13. Design Considerations and Experimental Verification of a Rail Brake Armature Based on Linear Induction Motor Technology

    NASA Astrophysics Data System (ADS)

    Sakamoto, Yasuaki; Kashiwagi, Takayuki; Hasegawa, Hitoshi; Sasakawa, Takashi; Fujii, Nobuo

    This paper describes the design considerations and experimental verification of an LIM rail brake armature. In order to generate power and maximize the braking force density despite the limited area between the armature and the rail and the limited space available for installation, we studied a design method suitable for an LIM rail brake armature and considered adoption of a ring winding structure. To examine the validity of the proposed design method, we developed a prototype ring winding armature for the rail brakes and examined its electromagnetic characteristics in a dynamic test system with roller rigs. By repeating various tests, we confirmed that unnecessary magnetic field components, which were expected to be present under high-speed running conditions or when a ring winding armature was used, were not present. Further, the necessary magnetic field component and braking force attained the desired values. These studies have helped us to develop a basic design method suitable for LIM rail brake armatures.

  14. A design and experimental verification methodology for an energy harvester skin structure

    NASA Astrophysics Data System (ADS)

    Lee, Soobum; Youn, Byeng D.

    2011-05-01

    This paper presents a design and experimental verification methodology for energy harvesting (EH) skin, which opens up a practical and compact piezoelectric energy harvesting concept. In the past, EH research has primarily focused on the design improvement of a cantilever-type EH device. However, such EH devices require additional space for proof mass and fixture and sometimes result in significant energy loss as the clamping condition becomes loose. Unlike the cantilever-type device, the proposed design is simply implemented by laminating a thin piezoelectric patch onto a vibrating structure. The design methodology proposed, which determines a highly efficient piezoelectric material distribution, is composed of two tasks: (i) topology optimization and (ii) shape optimization of the EH material. An outdoor condensing unit is chosen as a case study among many engineered systems with harmonic vibrating configuration. The proposed design methodology determined an optimal PZT material configuration on the outdoor unit skin structure. The designed EH skin was carefully prototyped to demonstrate that it can generate power up to 3.7 mW, which is sustainable for operating wireless sensor units for structural health monitoring and/or building automation.

  15. Use of experimental data in testing methods for design against uncertainty

    NASA Astrophysics Data System (ADS)

    Rosca, Raluca Ioana

    Modern methods of design take into consideration the fact that uncertainty is present in everyday life, whether in the form of variable loads (the strongest wind that will affect a building), the material properties of an alloy, or the future demand for a product or cost of labor. Moreover, the Japanese example showed that it may be more cost-effective to design taking the existence of uncertainty into account rather than to plan to eliminate or greatly reduce it. The dissertation starts by comparing the theoretical bases of two methods for design against uncertainty, namely probability theory and possibility theory. A two-variable design problem is then used to show the differences. It is concluded that for design problems with two or more failure modes of very different magnitude (such as a car stopping for lack of gas versus stopping because of motor failure), probability theory divides existing resources in a more intuitive way than possibility theory. The dissertation continues with the description of simple experiments (building towers of dominoes) and then presents a methodology to increase the amount of information that can be drawn from a given data set. The methodology is demonstrated on the Bidder-Challenger problem, a simulation of the problem faced by a microchip company in setting a target speed for its next microchip. The simulations use the domino experimental data. It is demonstrated that important insights into methods of probability- and possibility-based design can be gained from experiments.

  16. Development of a Model for Measuring Scientific Processing Skills Based on Brain-Imaging Technology: Focused on the Experimental Design Process

    ERIC Educational Resources Information Center

    Lee, Il-Sun; Byeon, Jung-Ho; Kim, Young-shin; Kwon, Yong-Ju

    2014-01-01

    The purpose of this study was to develop a model for measuring experimental design ability based on functional magnetic resonance imaging (fMRI) during biological inquiry. More specifically, the researchers developed an experimental design task that measures experimental design ability. Using the developed experimental design task, they measured…

  17. Introducing Third-Year Chemistry Students to the Planning and Design of an Experimental Program

    NASA Astrophysics Data System (ADS)

    Dunn, Jeffrey G.; Phillips, David Norman; van Bronswijk, Wilhelm

    1997-10-01

    The design and planning of an experimental program is often an important part of the job of recent graduate employees in the chemical industry, and time should therefore be devoted to this activity in an undergraduate course. This paper describes a pencil-and-paper activity in which students design and plan an experimental programme that may lead to the solution of a problem; these skills are an essential prerequisite to any experimental activity. We provide the students with a list of problems similar to those that a new graduate could encounter on commencing employment in the chemical industry. They are real problems, which the Inorganic Chemistry staff of the School have previously been asked to solve for local industry. A staff member acts as the "client", and the students act as the "consultant". The aim is that, through a series of interviews between the client and the consultant, the students can refine a vague problem statement into a quantitative statement, and then from this develop a proposal to investigate the problem in order to confirm the cause. This proposal is submitted to the client for assessment. The students are expected to arrange one meeting with the supervisor each week. This activity is highly commended by the School of Applied Chemistry's Advisory Board, which is composed primarily of industrial chemists.

  18. Design and Experimental Demonstration of Cherenkov Radiation Source Based on Metallic Photonic Crystal Slow Wave Structure

    NASA Astrophysics Data System (ADS)

    Fu, Tao; Yang, Zi-Qiang; Ouyang, Zheng-Biao

    2016-06-01

    This paper presents a Cherenkov radiation source based on a metallic photonic crystal (MPC) slow-wave structure (SWS) cavity. The Cherenkov source designed by linear theory works at 34.7 GHz when the cathode voltage is 550 kV. The three-dimensional particle-in-cell (PIC) simulation of the SWS shows that the operating frequency of 35.56 GHz with a single TM01 mode is basically consistent with the theoretical one under the same parameters. An experiment was carried out to verify the results of the theory and the PIC simulation. The experimental system includes a cathode emitting unit, the SWS, a magnetic system, an output antenna, and detectors. The experimental results show that the operating frequency, determined by detecting the retarded time of wave propagation in the waveguides, is around 35.5 GHz with a single TM01 mode and an output power reaching 54 MW. This indicates that the MPC structure can reduce mode competition. The purpose of the paper is to show, in theory and in a preliminary experiment, that an SWS with a photonic band gap (PBG) can produce microwaves in the TM01 mode, and it provides a good experimental and theoretical foundation for designing high-power microwave devices.

  19. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    SciTech Connect

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination” (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling
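
    The clearance statements quoted above can be illustrated with a toy Bayesian calculation. The sketch below is not the VSPWG's actual combined judgmental-random (CJR) model; it assumes a simple Beta prior on the fraction of the area containing detectable contamination, perfect detection, and independent all-negative samples, and then reports the confidence that at least Y% of the area is clean.

```python
"""Toy X%/Y% clearance calculation.

A minimal beta-binomial sketch, NOT the VSPWG's actual CJR methodology:
assumes a Beta(a, b) prior on the contaminated fraction p of the area,
perfect detection, and n independent samples that all came back negative.
"""
from scipy.stats import beta

def clearance_confidence(n_negative, y_percent, a=1.0, b=1.0):
    """P(at least y_percent of the area is free of detectable contamination)."""
    # Posterior on the contaminated fraction after n all-negative samples.
    posterior = beta(a, b + n_negative)
    # "At least Y% clean" is the event "contaminated fraction <= 1 - Y/100".
    return posterior.cdf(1.0 - y_percent / 100.0)

if __name__ == "__main__":
    x = clearance_confidence(n_negative=60, y_percent=95)
    print(f"{x:.1%} confidence that at least 95% of the area is clean")
```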

  20. Issues and recent advances in optimal experimental design for site investigation (Invited)

    NASA Astrophysics Data System (ADS)

    Nowak, W.

    2013-12-01

    This presentation provides an overview of issues and recent advances in model-based experimental design for site exploration. The addressed issues and advances are (1) how to provide an adequate envelope to prior uncertainty, (2) how to define the information needs in a task-oriented manner, (3) how to measure the expected impact of a data set that is not yet available but only planned to be collected, and (4) how best to perform the optimization of the data collection plan. Among other shortcomings of the state-of-the-art, it is identified that there is a lack of demonstrator studies where exploration schemes based on expert judgment are compared to exploration schemes obtained by optimal experimental design. Such studies will be necessary to address the often-voiced concern that experimental design is an academic exercise with little improvement potential over the well-trained gut feeling of field experts. When addressing this concern, a specific focus has to be given to uncertainty in model structure, parameterizations and parameter values, and to related surprises that data often bring about in field studies, but never in synthetic-data based studies. The background of this concern is that, initially, conceptual uncertainty may be so large that surprises are the rule rather than the exception. In such situations, field experts have a large body of experience in handling the surprises, and expert judgment may be good enough compared to meticulous optimization based on a model that is about to be falsified by the incoming data. In order to meet surprises accordingly and adapt to them, there needs to be a sufficient representation of conceptual uncertainty within the models used. Also, it is useless to optimize an entire design under this initial range of uncertainty. Thus, the goal setting of the optimization should include the objective to reduce conceptual uncertainty. A possible way out is to upgrade experimental design theory towards real-time interaction

  1. Computational simulations of frictional losses in pipe networks confirmed in experimental apparatuses designed by honors students

    NASA Astrophysics Data System (ADS)

    Pohlman, Nicholas A.; Hynes, Eric; Kutz, April

    2015-11-01

    Lectures in introductory fluid mechanics at NIU enroll a combination of students with standard enrollment and students seeking honors credit for an enriching experience. Most honors students dread the additional homework problems or an extra paper assigned by the instructor. During the past three years, honors students in my class have instead collaborated to design wet-lab experiments for their peers to predict variable volume flow rates of open reservoirs driven by gravity. Rather than doing extra work, the honors students learn the Bernoulli head-loss equation earlier in order to design appropriate systems for an experimental wet lab. Prior designs incorporated minor-loss features such as a sudden contraction or multiple unions and valves. The honors students from Spring 2015 expanded the repertoire of available options by developing large-scale set-ups with multiple pipe networks that could be combined to test the flexibility of the student teams' computational programs. Bridging theory with practice engaged all of the students, and multiple teams were able to predict performance within 4% accuracy. The challenges, schedules, and cost estimates of incorporating the experimental lab into an introductory fluid mechanics course will be reported.
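
    The head-loss prediction the student teams perform can be sketched as follows: apply the energy equation from the reservoir surface to the pipe exit with Darcy-Weisbach friction and minor losses, and iterate on the friction factor. The tank head, pipe dimensions, roughness, and loss coefficients below are illustrative assumptions, not values from the NIU course.

```python
"""Gravity-driven drain flow through a pipe with friction and minor losses.

A sketch of the kind of head-loss prediction described above; all geometry
and loss coefficients are assumed values, not data from the course.
"""
import math

g = 9.81          # m/s^2
nu = 1.0e-6       # kinematic viscosity of water, m^2/s
h = 0.60          # reservoir head above the outlet, m (assumed)
L, D = 2.0, 0.01  # pipe length and diameter, m (assumed)
eps = 1.5e-6      # pipe roughness, m (assumed, drawn tubing)
K_minor = 0.5 + 2 * 0.08   # entrance plus two unions (assumed coefficients)

def swamee_jain(Re):
    """Explicit approximation to the Colebrook friction factor (turbulent flow)."""
    return 0.25 / math.log10(eps / (3.7 * D) + 5.74 / Re**0.9) ** 2

f = 0.02                      # initial guess
for _ in range(50):
    # Energy equation from the free surface to the pipe exit:
    #   g*h = (V^2 / 2) * (1 + f*L/D + sum(K))
    V = math.sqrt(2 * g * h / (1 + f * L / D + K_minor))
    Re = V * D / nu
    f_new = swamee_jain(Re) if Re > 4000 else 64.0 / Re
    if abs(f_new - f) < 1e-6:
        break
    f = f_new

Q = V * math.pi * D**2 / 4
print(f"V = {V:.2f} m/s, Re = {Re:.0f}, f = {f:.4f}, Q = {Q*1000:.3f} L/s")
```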

  2. Pressure-Flow Experimental Performance of New Intravascular Blood Pump Designs for Fontan Patients.

    PubMed

    Chopski, Steven G; Fox, Carson S; Riddle, Michelle L; McKenna, Kelli L; Patel, Jay P; Rozolis, John T; Throckmorton, Amy L

    2016-03-01

    An intravascular axial flow pump is being developed as a mechanical cavopulmonary assist device for adolescent and adult patients with dysfunctional Fontan physiology. Coupling computational modeling with experimental evaluation of prototypic designs, this study examined the hydraulic performance of 11 impeller prototypes with blade stagger or twist angles varying from 100 to 600 degrees. A refined range of twisted blade angles between 300 and 400 degrees with 20-degree increments was then selected, and four additional geometries were constructed and hydraulically evaluated. The prototypes met performance expectations and produced pressure rises of 3-31 mm Hg for flow rates of 1-5 L/min at 6000-8000 rpm. A regression analysis was completed with all characteristic coefficients contributing significantly (P < 0.0001). This analysis revealed that the impeller with 400 degrees of blade twist outperformed the other designs. The findings of the numerical model for the 300-degree twisted case and the experimental results deviated by less than approximately 20%. In an effort to simplify the impeller geometry, this work advanced the design of this intravascular cavopulmonary assist device closer to preclinical animal testing. PMID:26333131
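
    The pressure-flow regression reported above can be illustrated in miniature. The sketch below assumes a characteristic of the form dP = a*N^2 + b*N*Q + c*Q^2 (an affinity-law-consistent choice) and fits it to synthetic data by least squares; neither the functional form nor the numbers are the authors' actual correlation.

```python
"""Least-squares fit of a pump pressure-flow-speed characteristic.

A sketch of the kind of regression described above; the model form and the
synthetic data are assumptions, not the authors' correlation.
"""
import numpy as np

rng = np.random.default_rng(5)
N = rng.uniform(6000, 8000, 80)          # rotational speed, rpm
Q = rng.uniform(1.0, 5.0, 80)            # flow rate, L/min
a_true, b_true, c_true = 4.0e-7, 2.0e-5, -0.5
dP = a_true * N**2 + b_true * N * Q + c_true * Q**2 + rng.normal(0, 0.3, 80)  # mm Hg

A = np.column_stack([N**2, N * Q, Q**2])
coeffs, *_ = np.linalg.lstsq(A, dP, rcond=None)
pred = A @ coeffs
r2 = 1 - np.sum((dP - pred) ** 2) / np.sum((dP - dP.mean()) ** 2)
print("fitted coefficients (a, b, c):", coeffs)
print(f"R^2 = {r2:.4f}")
```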

  3. Design and experimental characterization of a NiTi-based, high-frequency, centripetal peristaltic actuator

    NASA Astrophysics Data System (ADS)

    Borlandelli, E.; Scarselli, D.; Nespoli, A.; Rigamonti, D.; Bettini, P.; Morandini, M.; Villa, E.; Sala, G.; Quadrio, M.

    2015-03-01

    Development and experimental testing of a peristaltic device actuated by a single shape-memory NiTi wire are described. The actuator is designed to radially shrink a compliant silicone pipe, and must work on a sustained basis at an actuation frequency that is higher than those typical of NiTi actuators. Four rigid, aluminum-made circular sectors are sitting along the pipe circumference and provide the required NiTi wire housing. The aluminum assembly acts as geometrical amplifier of the wire contraction and as heat sink required to dissipate the thermal energy of the wire during the cooling phase. We present and discuss the full experimental investigation of the actuator performance, measured in terms of its ability to reduce the pipe diameter, at a sustained frequency of 1.5 Hz. Moreover, we investigate how the diameter contraction is affected by various design parameters as well as actuation frequencies up to 4 Hz. We manage to make the NiTi wire work at 3% in strain, cyclically providing the designed pipe wall displacement. The actuator performance is found to decay approximately linearly with actuation frequencies up to 4 Hz. Also, the interface between the wire and the aluminum parts is found to be essential in defining the functional performance of the actuator.

  4. Experimental Investigation of a Point Design Optimized Arrow Wing HSCT Configuration

    NASA Technical Reports Server (NTRS)

    Narducci, Robert P.; Sundaram, P.; Agrawal, Shreekant; Cheung, S.; Arslan, A. E.; Martin, G. L.

    1999-01-01

    The M2.4-7A Arrow Wing HSCT configuration was optimized for straight and level cruise at a Mach number of 2.4 and a lift coefficient of 0.10. A quasi-Newton optimization scheme maximized the lift-to-drag ratio (by minimizing drag-to-lift) using Euler solutions from FLO67 to estimate the lift and drag forces. A 1.675% wind-tunnel model of the Opt5 HSCT configuration was built to validate the design methodology. Experimental data gathered at the NASA Langley Unitary Plan Wind Tunnel (UPWT) section #2 facility verified CFL3D Euler and Navier-Stokes predictions of the Opt5 performance at the design point. In turn, CFL3D confirmed the improvement in the lift-to-drag ratio obtained during the optimization, thus validating the design procedure. A data base at off-design conditions was obtained during three wind-tunnel tests. The entry into NASA Langley UPWT section #2 obtained data at a free stream Mach number, M(sub infinity), of 2.55 as well as the design Mach number, M(sub infinity)=2.4. Data over a Mach number range of 1.8 to 2.4 were taken at UPWT section #1. Data at transonic and low supersonic Mach numbers, M(sub infinity)=0.6 to 1.2, were gathered at the NASA Langley 16 ft. Transonic Wind Tunnel (TWT). In addition to good agreement between CFD and experimental data, highlights from the wind-tunnel tests include a trip dot study suggesting a linear relationship between trip dot drag and Mach number, an aeroelastic study that measured the outboard wing deflection and twist, and a flap scheduling study that identifies the possibility of only one leading-edge and trailing-edge flap setting for transonic cruise and another for low supersonic acceleration.
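
    The quasi-Newton step of the optimization can be illustrated with a toy drag polar in place of the Euler solutions. The sketch below minimizes drag-to-lift for an assumed parabolic polar CD = CD0 + k*CL^2, whose analytic optimum CL* = sqrt(CD0/k) provides a check on the numerical answer; the coefficient values are invented, not the M2.4-7A data.

```python
"""Minimizing drag-to-lift with a quasi-Newton method on a toy drag polar.

A miniature stand-in for the optimization described above, assuming a
parabolic polar; not the actual HSCT design procedure.
"""
import math
from scipy.optimize import minimize

CD0, k = 0.0065, 0.45   # assumed zero-lift drag coefficient and induced-drag factor

def drag_to_lift(x):
    cl = x[0]
    return (CD0 + k * cl**2) / cl

res = minimize(drag_to_lift, x0=[0.05], method="L-BFGS-B", bounds=[(0.01, 1.0)])
cl_opt = res.x[0]
print(f"numerical CL* = {cl_opt:.4f}, (L/D)max = {1.0 / res.fun:.1f}")
print(f"analytic  CL* = {math.sqrt(CD0 / k):.4f}")
```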

  5. Clinical outcome research in complementary and alternative medicine: an overview of experimental design and analysis.

    PubMed

    Gatchel, R J; Maddrey, A M

    1998-09-01

    This article serves as a primer for those beginning clinical research in complementary and alternative medicine. The authors provide a basic overview of important experimental design and statistical issues, of which clinical researchers in the area of complementary and alternative medicine must be aware when attempting to demonstrate the effectiveness of particular treatment modalities. As the article suggests, science is an inferential process, and experimental investigations can vary greatly in methodological integrity. Key concepts in clinical outcome research such as internal validity, statistical conclusion validity, and the appropriate measurement and operational definitions of outcomes are discussed. New scientific approaches that are evolving because of paradigm shifts in science (e.g., chaos theory) are also reviewed. Suggestions are provided to further develop an understanding of clinical outcome research methodology. PMID:9737030

  6. Design, Simulation and Experimental Characteristics of Hydrogel-based Piezoresistive pH Sensors

    NASA Astrophysics Data System (ADS)

    Trinh, Thong Quang; Sorber, Jorge; Gerlach, Gerald

    This paper presents investigations of a novel type of piezoresistive pH sensor exploiting the chemo-mechanical energy conversion due to hydrogel swelling. A pH-sensitive poly(vinyl alcohol)-poly(acrylic acid) (PVA-PAA) hydrogel is used for this purpose. The pH sensor design includes a commercial piezoresistive pressure sensor chip, a hydrogel layer, and a rigid grid. The behaviour of the pH sensor under swelling of the polymer hydrogel has been simulated using the finite element method (ANSYS). The sensor simulations were performed using experimental material parameters of the PVA-PAA hydrogel. The sensor characteristics, including the silicon diaphragm deflection and output voltage, have been measured. There was good relative agreement between the simulations and the experimental results.

  7. The balloon experimental twin telescope for infrared interferometry (BETTII): optical design

    NASA Astrophysics Data System (ADS)

    Veach, Todd J.; Rinehart, Stephen A.; Mentzell, John E.; Silverberg, Robert F.; Fixsen, Dale J.; Rizzo, Maxime J.; Dhabal, Arnab; Gibbons, Caitlin E.; Benford, Dominic J.

    2014-07-01

    Here we present the optical and limited cryogenic design for The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII), an 8-meter far-infrared interferometer designed to fly on a high-altitude scientific balloon. The optical design is separated into warm and cold optics with the cold optics further separated into the far-infrared (FIR) (30-90 microns) and near-infrared (NIR) (1-3 microns). The warm optics are comprised of the twin siderostats, twin telescopes, K-mirror, and warm delay line. The cold optics are comprised of the cold delay line and the transfer optics to the FIR science detector array and the NIR steering array. The field of view of the interferometer is 2', with a wavelength range of 30-90 microns, 0.5" angular resolution at 40 microns, R~200 spectral resolution, and 1.5" pointing stability. We also present the design of the cryogenic system necessary for operation of the NIR and FIR detectors. The cryogenic system consists of a `Buffered He-7' type cryogenic cooler providing a cold stage base temperature of < 280mK and 10 micro-Watts of heat lift and a custom in-house designed dewar that nominally provides sufficient hold time for the duration of the BETTII flight (24 hours).

  8. Experimental design in caecilian systematics: phylogenetic information of mitochondrial genomes and nuclear rag1.

    PubMed

    San Mauro, Diego; Gower, David J; Massingham, Tim; Wilkinson, Mark; Zardoya, Rafael; Cotton, James A

    2009-08-01

    In molecular phylogenetic studies, a major aspect of experimental design concerns the choice of markers and taxa. Although previous studies have investigated the phylogenetic performance of different genes and the effectiveness of increasing taxon sampling, their conclusions are partly contradictory, probably because they are highly context specific and dependent on the group of organisms used in each study. Goldman introduced a method for experimental design in phylogenetics based on the expected information to be gained that has barely been used in practice. Here we use this method to explore the phylogenetic utility of mitochondrial (mt) genes, mt genomes, and nuclear rag1 for studies of the systematics of caecilian amphibians, as well as the effect of taxon addition on the stabilization of a controversial branch of the tree. Overall phylogenetic information estimates per gene, specific estimates per branch of the tree, estimates for combined (mitogenomic) data sets, and estimates as a hypothetical new taxon is added to different parts of the caecilian tree are calculated and compared. In general, the most informative data sets are those for mt transfer and ribosomal RNA genes. Our results also show at which positions in the caecilian tree the addition of taxa have the greatest potential to increase phylogenetic information with respect to the controversial relationships of Scolecomorphus, Boulengerula, and all other teresomatan caecilians. These positions are, as intuitively expected, mostly (but not all) adjacent to the controversial branch. Generating whole mitogenomic and rag1 data for additional taxa joining the Scolecomorphus branch may be a more efficient strategy than sequencing a similar amount of additional nucleotides spread across the current caecilian taxon sampling. The methodology employed in this study allows an a priori evaluation and testable predictions of the appropriateness of particular experimental designs to solve specific questions at

  9. Design considerations for ITER (International Thermonuclear Experimental Reactor) magnet systems: Revision 1

    SciTech Connect

    Henning, C.D.; Miller, J.R.

    1988-10-09

    The International Thermonuclear Experimental Reactor (ITER) is now completing a definition phase as a beginning of a three-year design effort. Preliminary parameters for the superconducting magnet system have been established to guide further and more detailed design work. Radiation tolerance of the superconductors and insulators has been of prime importance, since it sets requirements for the neutron-shield dimension and sensitively influences reactor size. The major levels of mechanical stress in the structure appear in the cases of the inboard legs of the toroidal-field (TF) coils. The cases of the poloidal-field (PF) coils must be made thin or segmented to minimize eddy current heating during inductive plasma operation. As a result, the winding packs of both the TF and PF coils include significant fractions of steel. The TF winding pack provides support against in-plane separating loads but offers little support against out-of-plane loads, unless shear-bonding of the conductors can be maintained. The removal of heat due to nuclear and ac loads has not been a fundamental limit to design, but certainly has non-negligible economic consequences. We present here preliminary ITER magnet systems design parameters taken from trade studies, designs, and analyses performed by the Home Teams of the four ITER participants, by the ITER Magnet Design Unit in Garching, and by other participants at workshops organized by the Magnet Design Unit. The work presented here reflects the efforts of many, but the responsibility for the opinions expressed is the authors'. 4 refs., 3 figs., 4 tabs.

  10. Optimization of Experimental Design for Estimating Groundwater Pumping Using Model Reduction

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Cheng, W.; Yeh, W. W.

    2012-12-01

    An optimal experimental design algorithm is developed to choose locations for a network of observation wells for estimating unknown groundwater pumping rates in a confined aquifer. The design problem can be expressed as an optimization problem which employs a maximal information criterion to choose among competing designs subject to the specified design constraints. Because of the combinatorial search required in this optimization problem, given a realistic, large-scale groundwater model, the dimensionality of the optimal design problem becomes very large and can be difficult if not impossible to solve using mathematical programming techniques such as integer programming or the Simplex with relaxation. Global search techniques, such as Genetic Algorithms (GAs), can be used to solve this type of combinatorial optimization problem; however, because a GA requires an inordinately large number of calls of a groundwater model, this approach may still be infeasible to use to find the optimal design in a realistic groundwater model. Proper Orthogonal Decomposition (POD) is therefore applied to the groundwater model to reduce the model space and thereby reduce the computational burden of solving the optimization problem. Results for a one-dimensional test case show identical results among using GA, integer programming, and an exhaustive search demonstrating that GA is a valid method for use in a global optimum search and has potential for solving large-scale optimal design problems. Additionally, other results show that the algorithm using GA with POD model reduction is several orders of magnitude faster than an algorithm that employs GA without POD model reduction in terms of time required to find the optimal solution. Application of the proposed methodology is being made to a large-scale, real-world groundwater problem.
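
    A minimal sketch of the combinatorial search described above is given below, assuming a synthetic sensitivity (Jacobian) matrix in place of the reduced groundwater model and a D-optimality (log-determinant of the Fisher information) criterion; it is not the authors' algorithm and omits the POD step entirely.

```python
"""A compact genetic algorithm for choosing observation-well locations.

A sketch of a GA-based design search under assumed inputs: a random
sensitivity matrix stands in for the (reduced) groundwater model, and the
design criterion is D-optimality. The GA operators are deliberately minimal.
"""
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_params, k = 60, 4, 8           # candidate wells, unknown pumping rates, wells to pick
J = rng.normal(size=(n_candidates, n_params))  # assumed head sensitivities dh_i/dq_j

def d_optimality(design):
    """Log-determinant of the Fisher information for the selected observation rows."""
    Js = J[list(design)]
    sign, logdet = np.linalg.slogdet(Js.T @ Js)
    return logdet if sign > 0 else -np.inf

def random_design():
    return tuple(sorted(rng.choice(n_candidates, size=k, replace=False)))

def crossover_and_mutate(a, b):
    """Child inherits k wells from the parents' union, with an occasional swap."""
    pool = list(set(a) | set(b))
    child = set(rng.choice(pool, size=k, replace=False).tolist())
    if rng.random() < 0.3:
        child.discard(rng.choice(sorted(child)))
        while len(child) < k:
            child.add(int(rng.integers(n_candidates)))
    return tuple(sorted(child))

population = [random_design() for _ in range(40)]
for _ in range(100):
    population.sort(key=d_optimality, reverse=True)
    parents = population[:20]                  # truncation selection
    children = []
    while len(children) < 20:
        i, j = rng.choice(len(parents), size=2, replace=False)
        children.append(crossover_and_mutate(parents[i], parents[j]))
    population = parents + children

best = max(population, key=d_optimality)
print("best observation wells:", best)
print("log det of Fisher information:", round(d_optimality(best), 3))
```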

  11. Design and experimental investigations on a small scale traveling wave thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Chen, M.; Ju, Y. L.

    2013-02-01

    A small scale traveling wave or Stirling thermoacoustic engine with a resonator of only 1 m length was designed, constructed and tested by using nitrogen as working gas. The small heat engine achieved a steady working frequency of 45 Hz. The pressure ratio reached 1.189, with an average charge pressure of 0.53 MPa and a heating power of 1.14 kW. The temperature and the pressure characteristics during the onset and damping processes were also observed and discussed. The experimental results demonstrated that the small engine possessed the potential to drive a Stirling-type pulse tube cryocooler.

  12. Design and experimental verification of a cascade traveling-wave thermoacoustic amplifier

    NASA Astrophysics Data System (ADS)

    Senga, Mariko; Hasegawa, Shinya

    2016-05-01

    In this study, we designed and built a prototype of a cascade thermoacoustic amplifier that connects multiple regenerators while satisfying high acoustic impedance at all regenerator positions. To connect the regenerators, the transfer matrix for each component unit, consisting of a regenerator and the adjacent resonators, was analyzed and the eigenvector and the eigenvalue were numerically determined. The experimental results showed that an acoustic power amplification higher than 100 was achieved by the cascade connection of eight regenerators. Furthermore, an acoustic impedance approximately six times higher than that of a free-traveling plane wave was realized at all regenerator positions.

  13. Experimental Design for CMIP6: Aerosol, Land Use, and Future Scenarios Final Report

    SciTech Connect

    Arnott, James

    2015-10-30

    The Aspen Global Change Institute hosted a technical science workshop entitled, “Experimental design for CMIP6: Aerosol, Land Use, and Future Scenarios,” on August 3-8, 2014 in Aspen, CO. Claudia Tebaldi (NCAR) and Brian O’Neill (NCAR) served as co-chairs for the workshop. The Organizing committee also included Dave Lawrence (NCAR), Jean-Francois Lamarque (NCAR), George Hurtt (University of Maryland), & Detlef van Vuuren (PBL Netherlands Environmental Change). The meeting included the participation of 22 scientists representing many of the major climate modeling centers for a total of 110 participant days.

  14. Aerodynamic Design of Axial-flow Compressors. VI - Experimental Flow in Two-Dimensional Cascades

    NASA Technical Reports Server (NTRS)

    Lieblein, Seymour

    1955-01-01

    Available experimental two-dimensional cascade data for conventional compressor blade sections are correlated at a reference incidence angle in the region of minimum loss. Variations of reference incidence angle, total-pressure loss, and deviation angle with cascade geometry, inlet Mach number, and Reynolds number are investigated. From the analysis and the correlations of the available data, rules and relations are evolved for the prediction of blade-profile performance. These relations are developed in simplified forms readily applicable to compressor design procedures.

  15. Design and experimental investigation of an ejector in an air-conditioning and refrigeration system

    SciTech Connect

    AL-Khalidy, N.; Zayonia, A.

    1995-12-31

    This paper discusses the conservation of energy in a refrigerant ejector refrigerating machine driven by heat from concentrator collectors. The working refrigerant was R-113. The design of an ejector operating in an air-conditioning and refrigerating system with a low-temperature thermal source (70 C to 100 C) is presented. The influence of three major parameters--boiler, condenser, and evaporator temperature--on ejector efficiency is discussed. Experimental results show that the condenser temperature is the major influence at a low evaporator temperature. The maximum ejector efficiency was 31%.

  16. Optimization of polyvinylidene fluoride (PVDF) membrane fabrication for protein binding using statistical experimental design.

    PubMed

    Ahmad, A L; Ideris, N; Ooi, B S; Low, S C; Ismail, A

    2016-01-01

    Statistical experimental design was employed to optimize the preparation conditions of polyvinylidene fluoride (PVDF) membranes. The three variables considered were polymer concentration, dissolving temperature, and casting thickness, and the response variable was membrane-protein binding. The optimum preparation for the PVDF membrane was a polymer concentration of 16.55 wt%, a dissolving temperature of 27.5°C, and a casting thickness of 450 µm. The statistical model exhibits a deviation between the predicted and actual responses of less than 5%. Further characterization of the formed PVDF membrane showed that the morphology of the membrane was in line with the membrane-protein binding performance. PMID:27088961
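
    The response-surface step of such an optimization can be sketched as follows: fit a full quadratic model of protein binding in the three factors and locate the optimum with a bounded optimizer. The data below are synthetic placeholders, and the abstract does not state which design (e.g., Box-Behnken or central composite) was actually used.

```python
"""Quadratic response-surface fit and optimum search for membrane preparation.

A sketch under assumed, synthetic data; only the factor names and ranges are
taken from the abstract above.
"""
import numpy as np
from scipy.optimize import minimize

def quad_features(X):
    """Full quadratic terms [1, x1, x2, x3, x1^2, x2^2, x3^2, x1x2, x1x3, x2x3]."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2, x1 * x2, x1 * x3, x2 * x3])

rng = np.random.default_rng(1)
# Synthetic runs: concentration (wt%), dissolving temperature (deg C), thickness (um).
X = np.column_stack([rng.uniform(12, 20, 30),
                     rng.uniform(20, 40, 30),
                     rng.uniform(300, 600, 30)])
true_beta = np.array([-50, 8.0, 1.2, 0.08, -0.25, -0.02, -1e-4, 0.01, 1e-3, 5e-4])
y = quad_features(X) @ true_beta + rng.normal(0, 0.5, 30)   # protein binding (a.u.)

beta_hat, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

def predicted_binding(x):
    return (quad_features(np.atleast_2d(x)) @ beta_hat).item()

res = minimize(lambda x: -predicted_binding(x), x0=[16.0, 30.0, 450.0],
               bounds=[(12, 20), (20, 40), (300, 600)], method="L-BFGS-B")
print("fitted optimum (concentration, temperature, thickness):", np.round(res.x, 2))
print("predicted binding at the optimum:", round(predicted_binding(res.x), 2))
```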

  17. Conceptual design of experimental equipment for large-diameter NTD-Si.

    PubMed

    Yagi, M; Watanabe, M; Ohyama, K; Yamamoto, K; Komeda, M; Kashima, Y; Yamashita, K

    2009-01-01

    Irradiation experiment equipment for 12-in. neutron transmutation doping of silicon (NTD-Si) was conceptually designed using MCNP5 in order to improve the radial neutron flux distribution. As a result of the calculations, the neutron absorption reaction ratio of the circumference to the center could be kept within 1.09 by using a thermal neutron filter that covers the surface of the silicon ingot. The uniformity of the (30)Si neutron absorption was less than 5.3%. PMID:19299158

  18. Quiet Clean Short-Haul Experimental Engine (QCSEE): Acoustic treatment development and design

    NASA Technical Reports Server (NTRS)

    Clemons, A.

    1979-01-01

    Acoustic treatment designs for the quiet clean short-haul experimental engines are defined. The procedures used in the development of each noise-source suppressor device are presented and discussed in detail. A complete description of all treatment concepts considered and the test facilities utilized in obtaining background data used in treatment development are also described. Additional supporting investigations that are complementary to the treatment development work are presented. The expected suppression results for each treatment configuration are given in terms of delta SPL versus frequency and in terms of delta PNdB.

  19. Selective transcriptional regulation by Myc: Experimental design and computational analysis of high-throughput sequencing data

    PubMed Central

    Pelizzola, Mattia; Morelli, Marco J.; Sabò, Arianna; Kress, Theresia R.; de Pretis, Stefano; Amati, Bruno

    2015-01-01

    The gene expression programs regulated by the Myc transcription factor were evaluated by integrated genome-wide profiling of Myc binding sites, chromatin marks and RNA expression in several biological models. Our results indicate that Myc directly drives selective transcriptional regulation, which in certain physiological conditions may indirectly lead to RNA amplification. Here, we illustrate in detail the experimental design concerning the high-throughput sequencing data associated with our study (Sabò et al., Nature. (2014) 511:488–492) and the R scripts used for their computational analysis. PMID:26217715

  20. Inlet Flow Test Calibration for a Small Axial Compressor Facility. Part 1: Design and Experimental Results

    NASA Technical Reports Server (NTRS)

    Miller, D. P.; Prahst, P. S.

    1994-01-01

    An axial compressor test rig has been designed for the operation of small turbomachines. The inlet region consisted of a long flowpath region with two series of support struts and a flapped inlet guide vane. A flow test was run to calibrate and determine the source and magnitudes of the loss mechanisms in the inlet for a highly loaded two-stage axial compressor test. Several flow conditions and IGV angle settings were established in which detailed surveys were completed. Boundary layer bleed was also provided along the casing of the inlet behind the support struts and ahead of the IGV. A detailed discussion of the flowpath design along with a summary of the experimental results are provided in Part 1.

  1. Design and Experimental Performance of a Two Stage Partial Admission Turbine, Task B.1/B.4

    NASA Technical Reports Server (NTRS)

    Sutton, R. F.; Boynton, J. L.; Akian, R. A.; Shea, Dan; Roschak, Edmund; Rojas, Lou; Orr, Linsey; Davis, Linda; King, Brad; Bubel, Bill

    1992-01-01

    A three-inch mean diameter, two-stage turbine with partial admission in each stage was experimentally investigated over a range of admissions and angular orientations of admission arcs. Three configurations were tested in which first stage admission varied from 37.4 percent (10 of 29 passages open, 5 per side) to 6.9 percent (2 open, 1 per side). Corresponding second stage admissions were 45.2 percent (14 of 31 passages open, 7 per side) and 12.9 percent (4 open, 2 per side). Angular positions of the second stage admission arcs with respect to the first stage varied over a range of 70 degrees. Design and off-design efficiency and flow characteristics for the three configurations are presented. The results indicated that peak efficiency and the corresponding isentropic velocity ratio decreased as the arcs of admission were decreased. Both efficiency and flow characteristics were sensitive to the second stage nozzle orientation angles.

  2. Experimental measurement of human head motion for high-resolution computed tomography system design

    NASA Astrophysics Data System (ADS)

    Li, Liang; Chen, Zhiqiang; Jin, Xin; Yu, Hengyong; Wang, Ge

    2010-06-01

    Human head motion has been experimentally measured for high-resolution computed tomography (CT) design using a Canon digital camera. Our goal is to identify the minimal movements of the human head under ideal conditions without rigid fixation. In our experiments, all the 19 healthy volunteers were lying down with strict self-control. All of them were asked to be calm without pressures. Our results showed that the mean absolute value of the measured translation excursion was about 0.35 mm, which was much less than the measurements on real patients. Furthermore, the head motions in different directions were correlated. These results are useful for the design of the new instant CT system for in vivo high-resolution imaging (about 40 μm).

  3. Design of charge exchange recombination spectroscopy for the joint Texas experimental tokamak.

    PubMed

    Chi, Y; Zhuang, G; Cheng, Z F; Hou, S Y; Cheng, C; Li, Z; Wang, J R; Wang, Z J

    2014-11-01

    The old diagnostic neutral beam injector, first operated at the University of Texas at Austin, is ready to rejoin the joint Texas experimental tokamak (J-TEXT). A new set of high voltage power supplies has been installed, so there is no longer any limitation on beam modulation or beam pulse duration. Based on the spectra of fully stripped impurity ions induced by the diagnostic beam, the design work for the toroidal charge exchange recombination spectroscopy (CXRS) system is presented. The 529 nm carbon VI (n = 8 - 7 transition) line seems to be the best choice for ion temperature and plasma rotation measurements and the considered hardware is listed. The design work of the toroidal CXRS system is guided by essential simulation of expected spectral results under the J-TEXT tokamak operation conditions. PMID:25430328

  4. Conceptual design study of Fusion Experimental Reactor (FY86 FER): Safety

    NASA Astrophysics Data System (ADS)

    Seki, Yasushi; Iida, Hiromasa; Honda, Tsutomu

    1987-08-01

    This report describes the safety study for FER (Fusion Experimental Reactor), which has been designed as a next-step machine to the JT-60. Though the final purpose of this study is to characterize the design-basis accident and the maximum credible accident and to assess their risk and probability for the FER plant system as a whole, the emphasis of this year's study is placed on the fuel-gas circulation system, where the tritium inventory is largest. The report consists of two chapters. The first chapter summarizes the FER system and describes the FMEA (Failure Mode and Effect Analysis) and related accident progression sequences for the FER plant system as a whole. The second chapter focuses on the fuel-gas circulation system, including purification, isotope separation, and storage. The probability of risk is assessed by a probabilistic risk analysis (PRA) procedure based on FMEA, ETA, and FTA.

  5. Mechanical design of experimental apparatus for FIREX cryo-target cooling

    NASA Astrophysics Data System (ADS)

    Iwamoto, A.; Norimatsu, T.; Nakai, M.; Sakagami, H.; Fujioka, S.; Shiraga, H.; Azechi, H.

    2016-05-01

    The mechanical design of an experimental apparatus for FIREX cryo-target cooling is described. A gaseous helium (GHe) sealing system operating in a cryogenic environment is an important issue for laser fusion experiments. A dedicated loading system was designed for a metal gasket; we take U-TIGHTSEAL® (Usui Kokusai Sangyo Kaisha, Ltd.) with an indium-plated copper jacket as an example. According to its specification, a linear load of 110 N/m along its circumference is the optimum compression; however, a lower load can still keep helium (He) leakage below the required level. Its sealing performance was investigated systematically. Our system required a load of 27 N/mm to maintain He leak tightness in a cryogenic environment; once leak tightness was obtained, the load could be reduced to 9.5 N/mm.

  6. Design of charge exchange recombination spectroscopy for the joint Texas experimental tokamak

    SciTech Connect

    Chi, Y.; Zhuang, G. Cheng, Z. F.; Hou, S. Y.; Cheng, C.; Li, Z.; Wang, J. R.; Wang, Z. J.

    2014-11-15

    The old diagnostic neutral beam injector, first operated at the University of Texas at Austin, is ready to rejoin the joint Texas experimental tokamak (J-TEXT). A new set of high voltage power supplies has been installed, so there is no longer any limitation on beam modulation or beam pulse duration. Based on the spectra of fully stripped impurity ions induced by the diagnostic beam, the design work for the toroidal charge exchange recombination spectroscopy (CXRS) system is presented. The 529 nm carbon VI (n = 8 − 7 transition) line seems to be the best choice for ion temperature and plasma rotation measurements and the considered hardware is listed. The design work of the toroidal CXRS system is guided by essential simulation of expected spectral results under the J-TEXT tokamak operation conditions.

  7. Development and design of a multi-column experimental setup for Kr/Xe separation

    SciTech Connect

    Garn, Troy G.; Greenhalgh, Mitchell; Watson, Tony

    2014-12-01

    As a precursor to FY-15 Kr/Xe separation testing, design modifications to an existing experimental setup are warranted. The modifications would allow for multi-column testing to facilitate a Xe separation followed by a Kr separation using engineered form sorbents prepared using an INL patented process. A new cooling apparatus capable of achieving test temperatures to -40° C and able to house a newly designed Xe column was acquired. Modifications to the existing setup are being installed to allow for multi-column testing and gas constituent analyses using evacuated sample bombs. The new modifications will allow for independent temperature control for each column enabling a plethora of test conditions to be implemented. Sample analyses will be used to evaluate the Xe/Kr selectivity of the AgZ-PAN sorbent and determine the Kr purity of the effluent stream following Kr capture using the HZ-PAN sorbent.

  8. Experimental Validation of the Optimum Design of an Automotive Air-to-Air Thermoelectric Air Conditioner (TEAC)

    NASA Astrophysics Data System (ADS)

    Attar, Alaa; Lee, HoSung; Weera, Sean

    2015-06-01

    The optimization of thermoelectric air conditioners (TEAC) has been a challenging topic due to the multitude of variables that must be considered. The present work discusses an experimental validation of the optimum design for an automotive air-to-air TEAC. The TEAC optimum design was obtained by using a new optimal design method with dimensional analysis that has been recently developed. The design constraints were obtained through a previous analytical study on the same topic. To simplify the problem, a unit cell representing the entire TEAC system was analytically simulated and experimentally tested. Moreover, commercial TEC modules and heat sinks were selected and tested based on the analytical optimum design results.

  9. Flowing lead spallation target design for use in an ADTT experimental facility located at LAMPF

    SciTech Connect

    Beard, C.A.; Bracht, R.R.; Buksa, J.J.

    1994-08-01

    A conceptual design has been initiated for a flowing lead spallation target for use in an ADTT experimental facility located at LAMPF. The lead is contained using Nb-1Zr as the structural material. This material was selected based on its favorable material properties as well as its compatibility with the flowing lead. Heat deposited in the lead and the Nb-1Zr container by the 800-MeV, 1-mA beam is removed by the flowing lead and transferred to helium via a conventional heat exchanger. The neutronic, thermal hydraulic, and stress characteristics of the system have been determined. In addition, a module to control the thaw and freeze of the lead has been developed and incorporated into the target system design. The entire primary target system (spallation target, thaw/freeze system, and intermediate heat exchanger) has been designed to be built as a contained module to allow easy insertion into an experimental ADTT blanket assembly and to provide multiple levels of containment for the lead. For the 800-MeV LAMPF beam, the target delivers a source of approximately 18 neutrons/proton. A total of 540 kW are deposited in the target. The lead temperature ranges from 400 to 500 C. The peak structural heating occurs at the beam interface, and the target is designed to maximize cooling at this point. An innovative thin-window structure has been incorporated that allows direct, convective cooling of the window by the inlet flowing lead. Safe, and reliable operation of the target has been maximized through simple, robust engineering

  10. Experimental Testing of Rockfall Barriers Designed for the Low Range of Impact Energy

    NASA Astrophysics Data System (ADS)

    Buzzi, O.; Spadari, M.; Giacomini, A.; Fityus, S.; Sloan, S. W.

    2013-07-01

    Most of the recent research on rockfall and the development of protective systems, such as flexible rockfall barriers, have been focused on medium to high levels of impacting energy. However, in many regions of the world, the rockfall hazard involves low levels of energy. This is particularly the case in New South Wales, Australia, because of the nature of the geological environments. The state Road and Traffic Authority (RTA) has designed various types of rockfall barriers, including some of low capacity, i.e. 35 kJ. The latter were tested indoors using a pendulum equipped with an automatic block release mechanism triggered by an optical beam. Another three systems were also tested, including two products designed by rockfall specialised companies and one modification of the initial design of the RTA. The research focused on the influence of the system's stiffness on the transmission of load to components of the barrier such as posts and cables. Not surprisingly, the more compliant the system, the less loaded the cables and posts. It was also found that removing the intermediate cables and placing the mesh downslope could reduce the stiffness of the system designed by the RTA. The paper concludes with some multi-scale considerations on the capacity of a barrier to absorb the energy based on experimental evidence.

  11. Design and Development of a Composite Dome for Experimental Characterization of Material Permeability

    NASA Technical Reports Server (NTRS)

    Estrada, Hector; Smeltzer, Stanley S., III

    1999-01-01

    This paper presents the design and development of a carbon fiber reinforced plastic dome, including a description of the dome fabrication, method for sealing penetrations in the dome, and a summary of the planned test series. This dome will be used for the experimental permeability characterization and leakage validation of composite vessels pressurized using liquid hydrogen and liquid nitrogen at the Cryostat Test Facility at the NASA Marshall Space Flight Center (MSFC). The preliminary design of the dome was completed using membrane shell analysis. Due to the configuration of the test setup, the dome will experience some flexural stresses and stress concentrations in addition to membrane stresses. Also, a potential buckling condition exists for the dome due to external pressure during the leak testing of the cryostat facility lines. Thus, a finite element analysis was conducted to assess the overall strength and stability of the dome for each required test condition. Based on these results, additional plies of composite reinforcement material were applied to local regions on the dome to alleviate stress concentrations and limit deflections. The dome design includes a circular opening in the center for the installation of a polar boss, which introduces a geometric discontinuity that causes high stresses in the region near the hole. To attenuate these high stresses, a reinforcement system was designed using analytical and finite element analyses. The development of a low leakage polar boss system is also investigated.

  12. On the Estimation of Process Parameters in the Taguchi's Approach to the On-line Control Procedure for Attributes

    NASA Astrophysics Data System (ADS)

    Borges, Wagner S.; Esteves, Luis Gustavo; Wechsler, Sergio

    2008-11-01

    Under the model proposed by Nayebpour and Woodall [5] for Taguchi's on-line control procedure for attributes, estimators for the process parameter vector are derived from both the Classical (maximum likelihood) and the Bayesian standpoints. The likelihood function is generated by the detection time of the first defective item under the control procedure. Under the Classical standpoint, a case of nonidentifiability is disclosed. Under the Bayesian standpoint, posterior probability distributions for the process parameters are determined by assuming independent beta prior distributions.
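
    A toy stand-in for this estimation problem is sketched below. It is not the Nayebpour-Woodall model itself: it simply treats each detection time of a first defective item as geometric in a single unknown probability q and compares the maximum-likelihood estimator with a conjugate Beta-prior posterior.

```python
"""Classical and Bayesian estimation from detection times of first defectives.

A toy geometric detection-time model in one parameter, used only to
illustrate the two estimation standpoints described above.
"""
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(7)
q_true = 0.05
T = rng.geometric(q_true, size=25)        # detection times from 25 production runs

# Maximum likelihood: L(q) = prod q*(1-q)^(T_i - 1)  =>  q_hat = n / sum(T_i)
q_mle = len(T) / T.sum()

# Bayesian with a Beta(a, b) prior on q (conjugate for the geometric likelihood).
a, b = 1.0, 1.0
posterior = beta(a + len(T), b + T.sum() - len(T))
lo, hi = posterior.ppf([0.025, 0.975])

print(f"true q = {q_true}, MLE = {q_mle:.4f}")
print(f"posterior mean = {posterior.mean():.4f}, 95% credible interval = ({lo:.4f}, {hi:.4f})")
```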

  13. Design and Computational/Experimental Analysis of Low Sonic Boom Configurations

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.

    1999-01-01

    Recent studies have shown that inviscid CFD codes combined with a planar extrapolation method give accurate sonic boom pressure signatures at distances greater than one body length from supersonic configurations if either adapted grids swept at the approximate Mach angle or very dense non-adapted grids are used. The validation of CFD for computing sonic boom pressure signatures provided the confidence needed to undertake the design of new supersonic transport configurations with low sonic boom characteristics. An aircraft synthesis code in combination with CFD and an extrapolation method were used to close the design. The principal configuration of this study is designated LBWT (Low Boom Wing Tail) and has a highly swept cranked arrow wing with conventional tails, and was designed to accommodate either 3 or 4 engines. The complete configuration including nacelles and boundary layer diverters was evaluated using the AIRPLANE code. This computer program solves the Euler equations on an unstructured tetrahedral mesh. Computations and wind tunnel data for the LBWT and two other low boom configurations designed at NASA Ames Research Center are presented. The two additional configurations are included to provide a basis for comparing the performance and sonic boom level of the LBWT with contemporary low boom designs and to give a broader experiment/CFD correlation study. The computational pressure signatures for the three configurations are contrasted with on-ground-track near-field experimental data from the NASA Ames 9x7 Foot Supersonic Wind Tunnel. Computed pressure signatures for the LBWT are also compared with experiment at approximately 15 degrees off ground track.

  14. Supersonic Retro-Propulsion Experimental Design for Computational Fluid Dynamics Model Validation

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Laws, Christopher T.; Kleb, W. L.; Rhode, Matthew N.; Spells, Courtney; McCrea, Andrew C.; Truble, Kerry A.; Schauerhamer, Daniel G.; Oberkampf, William L.

    2011-01-01

    The development of supersonic retro-propulsion, an enabling technology for heavy payload exploration missions to Mars, is the primary focus for the present paper. A new experimental model, intended to provide computational fluid dynamics model validation data, was recently designed for the Langley Research Center Unitary Plan Wind Tunnel Test Section 2. Pre-test computations were instrumental for sizing and refining the model, over the Mach number range of 2.4 to 4.6, such that tunnel blockage and internal flow separation issues would be minimized. A 5-in diameter 70-deg sphere-cone forebody, which accommodates up to four 4:1 area ratio nozzles, followed by a 10-in long cylindrical aftbody was developed for this study based on the computational results. The model was designed to allow for a large number of surface pressure measurements on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow field or model misalignments. Some preliminary results and observations from the test are presented, although detailed analyses of the data and uncertainties are still on going.

  15. Biocompatible Nanoemulsions for Improved Aceclofenac Skin Delivery: Formulation Approach Using Combined Mixture-Process Experimental Design.

    PubMed

    Isailović, Tanja; Ðorđević, Sanela; Marković, Bojan; Ranđelović, Danijela; Cekić, Nebojša; Lukić, Milica; Pantelić, Ivana; Daniels, Rolf; Savić, Snežana

    2016-01-01

    We aimed to develop lecithin-based nanoemulsions intended for effective aceclofenac (ACF) skin delivery utilizing sucrose esters [sucrose palmitate (SP) and sucrose stearate (SS)] as additional stabilizers and penetration enhancers. To find the suitable surfactant mixtures and levels of process variables (homogenization pressure and number of cycles - high pressure homogenization manufacturing method) that result in drug-loaded nanoemulsions with minimal droplet size and narrow size distribution, a combined mixture-process experimental design was employed. Based on optimization data, selected nanoemulsions were evaluated regarding morphology, surface charge, drug-excipient interactions, physical stability, and in vivo skin performances (skin penetration and irritation potential). The predicted physicochemical properties and storage stability were proved satisfying for ACF-loaded nanoemulsions containing 2% of SP in the blend with 0%-1% of SS and 1%-2% of egg lecithin (produced at 50°C/20 cycles/800 bar). Additionally, the in vivo tape stripping demonstrated superior ACF skin absorption from these nanoemulsions, particularly from those containing 2% of SP, 0.5% of SS, and 1.5% of egg lecithin, when comparing with the sample costabilized by conventional surfactant - polysorbate 80. In summary, the combined mixture-process experimental design was shown as a feasible tool for formulation development of multisurfactant-based nanosized delivery systems with potentially improved overall product performances. PMID:26539935
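
    The crossed structure of a combined mixture-process design can be sketched directly. The example below crosses a simplex-centroid design in the three surfactant proportions (SP, SS, egg lecithin) with two process factors (homogenization pressure and number of cycles) at two levels each; the proportions and levels are illustrative assumptions, not the authors' design points.

```python
"""Building a crossed mixture-process candidate design.

A sketch of the combined-design idea described above; the mixture points and
process levels are assumed, not the published design.
"""
from itertools import product

# Simplex-centroid mixture design in proportions of total surfactant.
mixture_points = [
    (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0),      # pure components
    (0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5),      # binary blends
    (1/3, 1/3, 1/3),                                         # overall centroid
]
pressures = [500, 800]     # bar (assumed levels)
cycles = [10, 20]          # homogenization cycles (assumed levels)

design = [
    {"SP": sp, "SS": ss, "lecithin": lec, "pressure_bar": p, "cycles": c}
    for (sp, ss, lec), p, c in product(mixture_points, pressures, cycles)
]

print(f"{len(design)} runs in the crossed design; first three:")
for run in design[:3]:
    print(run)
```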

  16. Sonophotolytic degradation of synthetic pharmaceutical wastewater: statistical experimental design and modeling.

    PubMed

    Ghafoori, Samira; Mowla, Amir; Jahani, Ramtin; Mehrvar, Mehrab; Chan, Philip K

    2015-03-01

    The merits of the sonophotolysis as a combination of sonolysis (US) and photolysis (UV/H2O2) are investigated in a pilot-scale external loop airlift sonophotoreactor for the treatment of a synthetic pharmaceutical wastewater (SPWW). In the first part of this study, the multivariate experimental design is carried out using Box-Behnken design (BBD). The effluent is characterized by the total organic carbon (TOC) percent removal as a surrogate parameter. The results indicate that the response of the TOC percent removal is significantly affected by the synergistic effects of the linear term of H2O2 dosage and ultrasound power with the antagonistic effect of quadratic term of H2O2 dosage. The statistical analysis of the results indicates a satisfactory prediction of the system behavior by the developed model. In the second part of this study, a novel rigorous mathematical model for the sonophotolytic process is developed to predict the TOC percent removal as a function of time. The mathematical model is based on extensively accepted sonophotochemical reactions and the rate constants in advanced oxidation processes. A good agreement between the model predictions and experimental data indicates that the proposed model could successfully describe the sonophotolysis of the pharmaceutical wastewater. PMID:25460426
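
    The Box-Behnken construction behind such a study can be sketched for three factors: twelve edge-midpoint runs plus centre points. The factor labels below use the two variables named in the abstract plus an assumed third factor, and the coded -1/0/+1 levels would still need to be mapped to real units.

```python
"""Generating a three-factor Box-Behnken design in coded units.

A sketch of the BBD construction; the third factor label is an assumption,
since the abstract names only H2O2 dosage and ultrasound power explicitly.
"""
from itertools import combinations

factors = ["H2O2_dosage", "ultrasound_power", "factor_C"]
n_center = 3

runs = []
# For each pair of factors, a 2^2 factorial at +/-1 with the third factor at 0.
for i, j in combinations(range(len(factors)), 2):
    for a in (-1, 1):
        for b in (-1, 1):
            run = [0, 0, 0]
            run[i], run[j] = a, b
            runs.append(run)
runs += [[0, 0, 0] for _ in range(n_center)]   # centre points for pure-error estimation

print(f"{len(runs)} runs")
for run in runs:
    print(dict(zip(factors, run)))
```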

  17. Model Development and Experimental Validation of the Fusible Heat Sink Design for Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Leimkuehler, Thomas; Sheth, Rubik; Le, Hung

    2013-01-01

    The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the modeling and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.
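
    The buffering behaviour described above can be illustrated with a lumped energy balance: a radiator rejects a fixed average load while a phase-change mass absorbs any excursion above it. The sketch below is a toy model, not the authors' validated thermal model; all parameter values (PCM mass, latent heat, loads, excursion) are assumptions.

```python
"""Lumped energy-balance sketch of a fusible (phase-change) heat sink.

The PCM state is tracked through its stored energy E, measured from the
fully-solid state at the melting point. All values are assumed.
"""
import numpy as np

m_pcm = 10.0        # kg of phase-change material (assumed)
L_fus = 220e3       # J/kg latent heat (assumed, wax-like)
cp = 2.0e3          # J/(kg K) specific heat (assumed, same value solid and liquid)
T_melt = 20.0       # deg C
Q_reject = 1500.0   # W, radiator sized to the average vehicle load

def temperature(E):
    """Node temperature from stored energy E (J); E = 0 is fully solid at T_melt."""
    if E < 0:                       # sub-cooled solid
        return T_melt + E / (m_pcm * cp)
    if E <= m_pcm * L_fus:          # melting plateau
        return T_melt
    return T_melt + (E - m_pcm * L_fus) / (m_pcm * cp)   # superheated liquid

dt = 1.0
t = np.arange(0, 4 * 3600, dt)
# Heat-load profile: nominal 1.5 kW with a half-hour 2.2 kW excursion (assumed).
Q_load = np.where((t > 3600) & (t < 5400), 2200.0, 1500.0)

E = -m_pcm * cp * 5.0               # start 5 K below the melting point
peak_T, peak_melt = temperature(E), 0.0
for q in Q_load:
    E += (q - Q_reject) * dt
    peak_T = max(peak_T, temperature(E))
    peak_melt = max(peak_melt, min(max(E / (m_pcm * L_fus), 0.0), 1.0))

print(f"peak PCM temperature {peak_T:.1f} C, peak melted fraction {peak_melt:.2f}")
```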

  18. Model Development and Experimental Validation of the Fusible Heat Sink Design for Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Leimkuehler, Thomas O.; Sheth, Rubik B.; Le, Hung

    2012-01-01

    The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the model development and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.

  19. Network Pharmacology Strategies Toward Multi-Target Anticancer Therapies: From Computational Models to Experimental Design Principles

    PubMed Central

    Tang, Jing; Aittokallio, Tero

    2014-01-01

    Polypharmacology has emerged as a novel means in drug discovery for improving treatment response in clinical use. However, to capitalize on the polypharmacological effects of drugs, there is a critical need to better model and understand how the complex interactions between drugs and their cellular targets contribute to drug efficacy and possible side effects. Network graphs provide a convenient modeling framework for the fact that most drugs act on cellular systems by targeting multiple proteins, through both on-target and off-target binding. Network pharmacology models address questions such as how and where in the disease network one should intervene to inhibit disease phenotypes, such as cancer growth, ideally leading to therapies that are less vulnerable to drug resistance and side effects because they attack the disease network at the systems level through synergistic and synthetic lethal interactions. Since the exponentially increasing number of potential drug-target combinations quickly makes a purely experimental approach infeasible, this review describes a number of computational models and algorithms that can effectively reduce the search space when determining the most promising combinations for experimental evaluation. Such computational-experimental strategies are geared toward realizing the full potential of multi-target treatments in different disease phenotypes. Our specific focus is on system-level network approaches to polypharmacology design in anticancer drug discovery, where we give representative examples of how network-centric modeling may offer systematic strategies toward better understanding, and even predicting, the phenotypic responses to multi-target therapies.
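
    As a toy illustration of the computational pre-screening idea discussed above, the sketch below ranks drug pairs by how completely they cover a set of disease-relevant targets in a small, entirely hypothetical drug-target map; only the top-ranked pairs would proceed to experimental evaluation. Real network pharmacology models use far richer network information (pathway structure, synergy and synthetic-lethality scores) than this simple coverage measure.

from itertools import combinations

# Hypothetical drug-target interaction map and disease-relevant target set;
# none of these assignments come from a real network.
drug_targets = {
    "drugA": {"EGFR", "ERBB2"},
    "drugB": {"MEK1", "BRAF"},
    "drugC": {"PI3K", "AKT1"},
    "drugD": {"EGFR", "PI3K"},
}
disease_targets = {"EGFR", "BRAF", "PI3K", "AKT1"}

def coverage(drugs):
    """Fraction of disease-relevant targets hit by the combined target set of the drugs."""
    hit = set().union(*(drug_targets[d] for d in drugs))
    return len(hit & disease_targets) / len(disease_targets)

# Rank all pairwise combinations; a pre-screen like this shrinks the list of
# candidate combinations that need to be tested experimentally.
ranked = sorted(combinations(drug_targets, 2), key=coverage, reverse=True)
for pair in ranked[:3]:
    print(pair, f"coverage = {coverage(pair):.2f}")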

  20. Quantifying the effect of experimental design choices for in vitro scratch assays.

    PubMed

    Johnston, Stuart T; Ross, Joshua V; Binder, Benjamin J; Sean McElwain, D L; Haridas, Parvathi; Simpson, Matthew J

    2016-07-01

    Scratch assays are often used to investigate potential drug treatments for chronic wounds and cancer. Interpreting these experiments with a mathematical model allows us to estimate the cell diffusivity, D, and the cell proliferation rate, λ. However, the influence of the experimental design on the estimates of D and λ is unclear. Here we apply an approximate Bayesian computation (ABC) parameter inference method, which produces a posterior distribution of D and λ, to new sets of synthetic data, generated from an idealised mathematical model, and experimental data for a non-adhesive mesenchymal population of fibroblast cells. The posterior distribution allows us to quantify the amount of information obtained about D and λ. We investigate two types of scratch assay and vary the number and timing of the experimental observations captured. Our results show that a scrape assay, involving one cell front, provides more precise estimates of D and λ, and is more computationally efficient to interpret, than a wound assay, with two opposing cell fronts. We find that recording two observations, after making the initial observation, is sufficient to estimate D and λ, and that the final observation time should correspond to the time taken for the cell front to move across the field of view. These results provide guidance for estimating D and λ, while simultaneously minimising the time and cost associated with performing and interpreting the experiment. PMID:27086040
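
    The flavour of the ABC inference described above can be conveyed with a short rejection-sampling sketch. The study itself uses a cell-based simulator and richer summary statistics; the toy version below replaces the simulator with the travelling-wave approximation for the front speed, 2*sqrt(D*λ), so it mainly constrains the product of D and λ. All priors, observation times, "observed" displacements, and tolerances are assumptions for illustration.

import math
import random

random.seed(0)

obs_times = [12.0, 24.0]       # h, observation times after the initial image (assumed)
obs_front = [0.12, 0.24]       # mm, hypothetical observed front displacements

def simulate_front(D, lam, noise=0.01):
    """Front displacement at each observation time under the travelling-wave approximation."""
    speed = 2.0 * math.sqrt(D * lam)  # mm/h
    return [speed * t + random.gauss(0.0, noise) for t in obs_times]

def distance(sim, obs):
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)))

# ABC rejection: draw (D, lambda) from the priors, simulate, and accept the
# draw whenever the simulated summaries land close enough to the observations.
accepted = []
for _ in range(100_000):
    D = random.uniform(1e-5, 1e-3)   # mm^2/h, prior on cell diffusivity (assumed)
    lam = random.uniform(0.01, 0.2)  # 1/h, prior on proliferation rate (assumed)
    if distance(simulate_front(D, lam), obs_front) < 0.02:
        accepted.append((D, lam))

print(f"accepted {len(accepted)} of 100000 prior draws")

    The accepted draws approximate the joint posterior of D and λ under this toy model; because the front-speed summary depends only on the product Dλ, richer summaries (for example, full cell-density profiles at each observation time) would be needed to separate the two parameters, which is one reason the choice and timing of observations matters in practice.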