Sample records for Taguchi methods applied

  1. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
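
    The fractional-factorial analysis described above can be illustrated with a minimal sketch: a Taguchi L4(2^3) orthogonal array with one hypothetical response per trial, from which each factor's level means (main effects) are computed. The array is standard; the response values are invented for illustration, and, as the abstract notes, interactions between factors are not captured.

```python
# Minimal sketch: main-effect analysis on a Taguchi L4(2^3) orthogonal array.
# Factor levels are coded 0/1; the response values below are hypothetical.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
response = [20.0, 24.0, 30.0, 26.0]  # one hypothetical measurement per trial

def main_effects(array, y):
    """Average response at each level of each factor."""
    n_factors = len(array[0])
    effects = []
    for f in range(n_factors):
        lvl0 = [yi for row, yi in zip(array, y) if row[f] == 0]
        lvl1 = [yi for row, yi in zip(array, y) if row[f] == 1]
        effects.append((sum(lvl0) / len(lvl0), sum(lvl1) / len(lvl1)))
    return effects

for f, (m0, m1) in enumerate(main_effects(L4, response)):
    print(f"factor {f}: level-0 mean {m0:.1f}, level-1 mean {m1:.1f}")
```

    Four trials suffice to estimate all three main effects, which is the economy the abstract emphasizes; a full factorial would need eight.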

  2. A Gradient Taguchi Method for Engineering Optimization

    NASA Astrophysics Data System (ADS)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

    To balance robustness against convergence speed in optimization, a novel hybrid algorithm combining the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of factor levels, even when the number of levels and/or factors is quite large. The algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining numerical modeling with vibration testing. For these problems, the proposed algorithm found better elastic constants at lower computational cost, and therefore offers good robustness and fast convergence compared to some hybrid genetic algorithms.
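
    The hybrid idea can be sketched on a toy problem (not the paper's elastic-constant inverse problem): a coarse search over discrete factor levels supplies a good starting point, which steepest descent then refines. For brevity the sketch searches the full level grid; an actual Taguchi run would evaluate only an orthogonal subset of it.

```python
# Toy objective standing in for the identification error; minimum at (1.3, -0.7).
def f(x, y):
    return (x - 1.3) ** 2 + (y + 0.7) ** 2

levels = [-2.0, -1.0, 0.0, 1.0, 2.0]
# Coarse level search (a real Taguchi run would use an orthogonal subset).
_, x, y = min((f(a, b), a, b) for a in levels for b in levels)

# Steepest descent refines the coarse optimum.
step = 0.1
for _ in range(200):
    gx, gy = 2 * (x - 1.3), 2 * (y + 0.7)  # analytic gradient of f
    x, y = x - step * gx, y - step * gy

print(round(x, 3), round(y, 3))  # converges near (1.3, -0.7)
```

    The level search is robust to bad starting points, while the descent phase supplies the fast local convergence, mirroring the balance the abstract describes.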

  3. Application of Taguchi methods to infrared window design

    NASA Astrophysics Data System (ADS)

    Osmer, Kurt A.; Pruszynski, Charles J.

    1990-10-01

    Dr. Genichi Taguchi, a prominent quality consultant, reduced a branch of statistics known as "Design of Experiments" to a cookbook methodology that can be employed by any competent engineer. This technique has been extensively employed by Japanese manufacturers, and is widely credited with helping them attain their current level of success in low-cost, high-quality product design and fabrication. Although originally put forth as a tool to streamline the determination of improved production processes, it can also be applied to a wide range of engineering problems. As part of an internal research project, this method of experimental design has been adapted to window trade studies and materials research. Two of these analyses are presented herein, chosen to illustrate the breadth of problems to which the Taguchi method can be applied.

  4. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources, and Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.
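
    The signal-to-noise ratios mentioned above are standard Taguchi quantities and can be stated compactly. The three usual forms (smaller-the-better, larger-the-better, nominal-the-best) are sketched below with hypothetical replicate data; the formulas are the textbook ones, not specific to this project.

```python
import math

# Standard Taguchi signal-to-noise ratios, in decibels.
def sn_smaller_the_better(ys):
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

def sn_larger_the_better(ys):
    return -10 * math.log10(sum(1 / (y * y) for y in ys) / len(ys))

def sn_nominal_the_best(ys):
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / (len(ys) - 1)
    return 10 * math.log10(mean * mean / var)

replicates = [9.8, 10.1, 10.0, 9.9]  # hypothetical repeated measurements
print(round(sn_nominal_the_best(replicates), 2))
```

    A higher S/N ratio means a more robust trial: for the nominal-the-best form, it rewards a response that is both on target on average and reproducible across replicates.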

  5. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    A controller that uses PID parameters requires a good tuning method to improve control system performance. PID tuning methods divide into two groups: classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods, and researchers have previously integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiments (DOE) method. This is done by conducting the DOE on the two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols method, both implemented on a hydraulic positioning system. Simulation results show that the proposed method reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. The physical experiments likewise show that the proposed tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method in a hydraulic positioning system.
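
    The two PSO parameters the abstract singles out (the particle velocity limit and the inertia weight) can be seen in a minimal PSO sketch. The objective here is a toy quadratic standing in for a PID performance index; this is an illustration of plain PSO, not the authors' PSO-PID code.

```python
import random

random.seed(0)

def objective(x):
    return (x - 3.0) ** 2  # stands in for a PID performance index

vmax, w, c1, c2 = 1.0, 0.7, 1.5, 1.5  # vmax and w are the tuned parameters
pos = [random.uniform(-10, 10) for _ in range(20)]
vel = [0.0] * 20
pbest = pos[:]
gbest = min(pos, key=objective)

for _ in range(100):
    for i in range(20):
        r1, r2 = random.random(), random.random()
        vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
        vel[i] = max(-vmax, min(vmax, vel[i]))  # clamp to the velocity limit
        pos[i] += vel[i]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=objective)

print(round(gbest, 3))  # close to the optimum at 3.0
```

    Both `vmax` and `w` control the exploration/exploitation balance of the swarm, which is why running a designed experiment over them, as the paper does, can improve the tuning result.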

  6. Taguchi Method Applied in Optimization of Shipley SJR 5740 Positive Resist Deposition

    NASA Technical Reports Server (NTRS)

    Hui, A.; Blosiu, J. O.; Wiberg, D. V.

    1998-01-01

    Taguchi Methods of Robust Design present a way to optimize output process performance through an organized set of experiments using orthogonal arrays. Analysis of variance and the signal-to-noise ratio are used to evaluate the contribution of each controllable process parameter to the process optimization. In the photoresist deposition process, numerous controllable parameters can affect the surface quality and thickness of the final photoresist layer.

  7. Taguchi method for partial differential equations with application in tumor growth.

    PubMed

    Ilea, M; Turnea, M; Rotariu, M; Arotăriţei, D; Popescu, Marilena

    2014-01-01

    The growth of tumors is a highly complex process, and mathematical models are needed to describe it. A variety of partial differential equation models for tumor growth have been developed and studied, most of them based on reaction-diffusion equations and mass conservation. Systems of time-dependent partial differential equations occur in many branches of applied mathematics, and the vast majority of mathematical models of tumor growth are formulated in terms of them. We propose a mathematical model for the interactions between three cancer cell populations. The Taguchi methods are widely used by quality engineering scientists to compare the effects of multiple variables, together with their interactions, using a simple and manageable experimental design; in Taguchi's design of experiments, variation is more interesting to study than the average. First, Taguchi methods are used to search for the significant factors and the optimal combination of parameter levels; factor levels other than the three chosen parameter levels are not considered. Second, the cutting parameters, namely cutting speed, depth of cut, and feed rate, are designed using the Taguchi method. Finally, the adequacy of the developed mathematical model is demonstrated by ANOVA, the percentage contribution of the combined error being small. Many mathematical models can be quantitatively characterized by partial differential equations, and the use of MATLAB and the Taguchi method in this article illustrates the important role of informatics in mathematical modeling research. The study of tumor growth is an exciting and important topic in cancer research and will profit considerably from theoretical input; we interpret these results as a call for permanent collaboration between mathematicians and medical oncologists.
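
    The reaction-diffusion modeling the abstract refers to can be sketched with an explicit finite-difference step for a single-population logistic (Fisher-KPP) equation, u_t = D u_xx + r u (1 - u). This is a generic one-population illustration with invented parameter values, not the paper's three-population model.

```python
# Explicit finite-difference sketch of u_t = D u_xx + r u (1 - u).
D, r = 0.1, 1.0
dx, dt = 0.1, 0.01          # stability requires dt <= dx**2 / (2 * D)
n = 101
u = [0.0] * n
u[n // 2] = 0.5             # small initial tumor-cell density in the middle

for _ in range(500):        # integrate to t = 5
    new = u[:]
    for i in range(1, n - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
        new[i] = u[i] + dt * (D * lap + r * u[i] * (1 - u[i]))
    u = new                 # zero-Dirichlet boundaries kept at the ends

print(round(max(u), 3))     # density grows toward the carrying capacity 1
```

    Diffusion spreads the initial cell cluster while the logistic reaction drives the density toward the carrying capacity, producing the traveling invasion front typical of such tumor-growth models.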

  8. Assessing the applicability of the Taguchi design method to an interrill erosion study

    NASA Astrophysics Data System (ADS)

    Zhang, F. B.; Wang, Z. L.; Yang, M. Y.

    2015-02-01

    Full-factorial experimental designs have been used in soil erosion studies, but are time, cost and labor intensive, and sometimes they are impossible to conduct due to the increasing number of factors and their levels to consider. The Taguchi design is a simple, economical and efficient statistical tool that only uses a portion of the total possible factorial combinations to obtain the results of a study. Soil erosion studies that use the Taguchi design are scarce and no comparisons with full-factorial designs have been made. In this paper, a series of simulated rainfall experiments using a full-factorial design of five slope lengths (0.4, 0.8, 1.2, 1.6, and 2 m), five slope gradients (18%, 27%, 36%, 48%, and 58%), and five rainfall intensities (48, 62.4, 102, 149, and 170 mm h-1) were conducted. Validation of the applicability of a Taguchi design to interrill erosion experiments was achieved by extracting data from the full dataset according to a theoretical Taguchi design. The statistical parameters for the mean quasi-steady state erosion and runoff rates of each test, the optimum conditions for producing maximum erosion and runoff, and the main effect and percentage contribution of each factor obtained from the full-factorial and Taguchi designs were compared. Both designs generated almost identical results. Using the experimental data from the Taguchi design, it was possible to accurately predict the erosion and runoff rates under the conditions that had been excluded from the Taguchi design. All of the results obtained from analyzing the experimental data for both designs indicated that the Taguchi design could be applied to interrill erosion studies and could replace full-factorial designs. This would save time, labor and costs by generally reducing the number of tests to be conducted. Further work should test the applicability of the Taguchi design to a wider range of conditions.
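
    The design comparison above (three factors at five levels) can be made concrete: the full factorial has 125 runs, while a 25-run orthogonal array covering the same factors can be built from a Latin-square rule. The construction below is a standard one and is only an illustration of how such a subset is formed, not the authors' exact array.

```python
# Full factorial vs. a 25-run orthogonal array for 3 factors at 5 levels.
levels = range(5)
full_factorial = [(a, b, c) for a in levels for b in levels for c in levels]
# Latin-square construction: the third column is (a + b) mod 5.
taguchi_l25 = [(a, b, (a + b) % 5) for a in levels for b in levels]

def is_orthogonal(rows):
    """Every pair of columns contains each ordered level pair exactly once."""
    for i in range(3):
        for j in range(i + 1, 3):
            if len(set((r[i], r[j]) for r in rows)) != 25:
                return False
    return True

print(len(full_factorial), len(taguchi_l25), is_orthogonal(taguchi_l25))
```

    Because every level pair of every factor pair appears exactly once, main effects estimated from the 25 runs are unbiased by the other factors, which is why the subset could reproduce the full-factorial results in the study above.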

  9. A comparative study of electrochemical machining process parameters by using GA and Taguchi method

    NASA Astrophysics Data System (ADS)

    Soni, S. K.; Thomas, B.

    2017-11-01

    In electrochemical machining, the quality of the machined surface strongly depends on the selection of optimal parameter settings. This work deals with the application of the Taguchi method and a genetic algorithm, using MATLAB, to maximize the metal removal rate and minimize the surface roughness and overcut. A comparative study is presented for the drilling of LM6 Al/B4C composites, assessing the impact of machining process parameters such as electrolyte concentration (g/l), machining voltage (V), and frequency (Hz) on the response parameters (surface roughness, material removal rate, and overcut). A Taguchi L27 orthogonal array was set up in Minitab 17 software for the investigation of the experimental results, and multiobjective optimization by genetic algorithm was carried out in MATLAB. Finally, the optimized results from the Taguchi method and the genetic algorithm are compared.

  10. The parameters effect on the structural performance of damaged steel box beam using Taguchi method

    NASA Astrophysics Data System (ADS)

    El-taly, Boshra A.; Abd El Hameed, Mohamed F.

    2018-03-01

    In the current study, the influence of notch (opening) parameters and the positions of the applied load on the structural performance of steel box beams up to failure was investigated using the finite element analysis program ANSYS. The Taguchi-based design of experiments technique was used to plan the study. The plan included 12 steel box beams: three intact beams and nine damaged beams with an opening in the beam web. The numerical studies were conducted while varying the spacing between the two concentrated point loads (the location of the applied loads), the notch position, and the ratio between the depth and width of the notch at a constant notch area. According to the Taguchi analysis, factor X (the location of the applied loads) was found to be the highest-contributing parameter for the variation of the ultimate load, vertical deformation, shear stresses, and compressive normal stresses.

  11. Optimization the mechanical properties of coir-luffa cylindrica filled hybrid composites by using Taguchi method

    NASA Astrophysics Data System (ADS)

    Krishnudu, D. Mohana; Sreeramulu, D.; Reddy, P. Venkateshwar

    2018-04-01

    In the current study, the mechanical properties of particle-filled hybrid composites were studied. The mechanical properties of the hybrid composite depend mainly on the proportions of coir, luffa, and filler by weight. Response surface methodology (RSM) together with the Taguchi method was applied to find the optimized parameters of the hybrid composites. It was observed that the tensile strength of the composite depends mainly on the coir content rather than on the other two constituents.

  12. Developing an Optimum Protocol for Thermoluminescence Dosimetry with GR-200 Chips using Taguchi Method.

    PubMed

    Sadeghi, Maryam; Faghihi, Reza; Sina, Sedigheh

    2017-06-15

    Thermoluminescence dosimetry (TLD) is a powerful technique with wide applications in personal, environmental and clinical dosimetry. The annealing, storage and readout protocols strongly affect the accuracy of the TLD response. The purpose of this study is to obtain an optimum protocol for GR-200 (LiF:Mg,Cu,P) by optimizing the effective parameters, to increase the reliability of the TLD response, using the Taguchi method. The Taguchi method was used to optimize the annealing, storage and readout protocols of the TLDs. A total of 108 GR-200 chips were divided into 27 groups, each containing four chips. The TLDs were exposed to three different doses, and stored, annealed and read out by different procedures as suggested by the Taguchi method. By comparing the signal-to-noise ratios, the optimum dosimetry procedure was obtained. According to the results, the optimum values for annealing temperature (°C), annealing time (s), annealing-to-exposure time (d), exposure-to-readout time (d), pre-heat temperature (°C), pre-heat time (s), heating rate (°C/s), maximum readout temperature (°C), readout time (s) and storage temperature (°C) are 240, 90, 1, 2, 50, 0, 15, 240, 13 and -20, respectively. Using the optimum protocol, an efficient glow curve with low residual signal can be achieved, and the dosimetry can be performed with great accuracy.

  13. Interactive design optimization of magnetorheological-brake actuators using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Erol, Ozan; Gurocak, Hakan

    2011-10-01

    This research explored an optimization method that would automate the process of designing a magnetorheological (MR)-brake but still keep the designer in the loop. MR-brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is quite complex and time consuming due to many parameters. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR-brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and make choices to investigate only their interactions with the design output. The new method was applied for re-designing MR-brakes. It reduced the design time from a week or two down to a few minutes. Also, usability experiments indicated significantly better brake designs by novice users.

  14. Application of Taguchi optimization on the cassava starch wastewater electrocoagulation using batch recycle method

    NASA Astrophysics Data System (ADS)

    Sudibyo, Hermida, L.; Suwardi

    2017-11-01

    Tapioca wastewater is very difficult to treat, and many tapioca factories cannot treat it well. One method able to overcome this problem is electrocoagulation. This process performs well when conducted as a batch recycle process with aluminum bipolar electrodes; however, the operating conditions have a significant effect on tapioca wastewater treatment in the batch recycle process. In this research, the Taguchi method was successfully applied to determine the optimum conditions and the interactions between parameters in the electrocoagulation process. The results show that current density, conductivity, electrode distance, and pH have a significant effect on the turbidity removal of cassava starch wastewater.

  15. Investigation of Structures of Microwave Microelectromechanical-System Switches by Taguchi Method

    NASA Astrophysics Data System (ADS)

    Lai, Yeong-Lin; Lin, Chien-Hung

    2007-10-01

    The optimal design of microwave microelectromechanical-system (MEMS) switches by the Taguchi method is presented. The structures of the switches are analyzed and optimized in terms of the effective stiffness constant, the maximum von Mises stress, and the natural frequency in order to improve the reliability and the performance of the MEMS switches. There are four factors, each of which has three levels in the Taguchi method for the MEMS switches. An L9(34) orthogonal array is used for the matrix experiments. The characteristics of the experiments are studied by the finite-element method and the analytical method. The responses of the signal-to-noise (S/N) ratios of the characteristics of the switches are investigated. The statistical analysis of variance (ANOVA) is used to interpret the experimental results and decide the significant factors. The final optimum setting, A1B3C1D2, predicts that the effective stiffness constant is 1.06 N/m, the maximum von Mises stress is 76.9 MPa, and the natural frequency is 29.331 kHz. The corresponding switching time is 34 μs, and the pull-down voltage is 9.8 V.
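
    The ANOVA step the abstract describes, deciding significant factors from an L9(3^4) experiment, amounts to splitting the total sum of squares into per-factor contributions. The sketch below uses the standard L9 array with hypothetical responses, not the paper's finite-element results.

```python
# Percentage contribution of each factor in a (saturated) L9(3^4) experiment.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
y = [30.0, 35.0, 38.0, 32.0, 37.0, 34.0, 36.0, 33.0, 39.0]  # hypothetical

grand = sum(y) / len(y)
ss_total = sum((yi - grand) ** 2 for yi in y)

def ss_factor(f):
    """Between-level sum of squares for factor f."""
    ss = 0.0
    for level in range(3):
        grp = [yi for row, yi in zip(L9, y) if row[f] == level]
        ss += len(grp) * (sum(grp) / len(grp) - grand) ** 2
    return ss

for f in range(4):
    print(f"factor {f}: {100 * ss_factor(f) / ss_total:.1f}% contribution")
```

    Because the L9 array is orthogonal and saturated by four factors, the four factor sums of squares decompose the total exactly; the largest percentage identifies the dominant factor.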

  16. Optimization of radial-type superconducting magnetic bearing using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Ai, Liwang; Zhang, Guomin; Li, Wanjie; Liu, Guole; Liu, Qi

    2018-07-01

    Modeling and optimizing the levitation behavior of a superconducting magnetic bearing (SMB) is important and complicated, due to the nonlinear constitutive relationships of the superconductor and ferromagnetic materials, the relative movement between the superconducting stator and the PM rotor, and the many parameters (e.g., air gap, critical current density, and remanent flux density) affecting the levitation behavior. In this paper, we present a theoretical calculation and optimization method for the levitation behavior of a radial-type SMB. A simplified model of the levitation force is established using a 2D finite element method with the H-formulation. In the model, the boundary condition of the superconducting stator is imposed by harmonic series expressions that describe the traveling magnetic field generated by the moving PM rotor. Experimental measurements of the levitation force validate the model. The Taguchi method, a statistical method, is adopted to optimize the load capacity of the SMB. The effects of six optimization parameters on the target characteristics are discussed, and the optimum parameter combination is determined. The results show that the levitation behavior of the SMB is greatly improved and that the Taguchi method is suitable for optimizing the SMB.

  17. Application of Taguchi methods to dual mixture ratio propulsion system optimization for SSTO vehicles

    NASA Technical Reports Server (NTRS)

    Stanley, Douglas O.; Unal, Resit; Joyner, C. R.

    1992-01-01

    The application of advanced technologies to future launch vehicle designs would allow the introduction of a rocket-powered, single-stage-to-orbit (SSTO) launch system early in the next century. For a selected SSTO concept, a dual mixture ratio, staged combustion cycle engine that employs a number of innovative technologies was selected as the baseline propulsion system. A series of parametric trade studies are presented to optimize both a dual mixture ratio engine and a single mixture ratio engine of similar design and technology level. The effect of varying lift-off thrust-to-weight ratio, engine mode transition Mach number, mixture ratios, area ratios, and chamber pressure values on overall vehicle weight is examined. The sensitivity of the advanced SSTO vehicle to variations in each of these parameters is presented, taking into account the interaction of each of the parameters with each other. This parametric optimization and sensitivity study employs a Taguchi design method. The Taguchi method is an efficient approach for determining near-optimum design parameters using orthogonal matrices from design of experiments (DOE) theory. Using orthogonal matrices significantly reduces the number of experimental configurations to be studied. The effectiveness and limitations of the Taguchi method for propulsion/vehicle optimization studies as compared to traditional single-variable parametric trade studies is also discussed.

  18. Taguchi optimization of bismuth-telluride based thermoelectric cooler

    NASA Astrophysics Data System (ADS)

    Anant Kishore, Ravi; Kumar, Prashant; Sanghadasa, Mohan; Priya, Shashank

    2017-07-01

    In the last few decades, considerable effort has been made to enhance the figure of merit (ZT) of thermoelectric (TE) materials. However, the performance of commercial TE devices remains low because the module figure of merit depends not only on the material ZT but also on the operating conditions and configuration of the TE modules. This study takes a comprehensive set of parameters into account to conduct a numerical performance analysis of a thermoelectric cooler (TEC) using the Taguchi optimization method. The Taguchi method is a statistical tool that predicts the optimal performance with far fewer experimental runs than conventional experimental techniques. The Taguchi results are also compared with the optimized parameters obtained by a full-factorial optimization, which reveals that the Taguchi method provides an optimum or near-optimum TEC configuration using only 25 experiments against the 3125 needed by the conventional method. This study also shows that environmental factors such as ambient temperature and cooling coefficient do not significantly affect the optimum geometry and optimum operating temperature of TECs. The optimum TEC configuration for the simultaneous optimization of cooling capacity and coefficient of performance is also provided.

  19. Taguchi's off line method and Multivariate loss function approach for quality management and optimization of process parameters -A review

    NASA Astrophysics Data System (ADS)

    Bharti, P. K.; Khan, M. I.; Singh, Harbinder

    2010-10-01

    Off-line quality control is considered an effective approach to improving product quality at relatively low cost, and the Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and its mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage for one quality characteristic, but most products and processes have multiple quality characteristics; the optimal parameter design then minimizes the total quality loss over all of them. Several studies have presented approaches addressing multiple quality characteristics, most of them concerned with finding the parameter combination that maximizes the signal-to-noise (SN) ratios. The results reveal two advantages of this approach: for a single quality characteristic the optimal parameter design coincides with the traditional Taguchi method, and for multiple quality characteristics the optimal design maximizes the reduction of total quality loss. This paper presents a literature review on solving multi-response problems with the Taguchi method and its successful implementation in various industries.
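
    The quality loss underlying the discussion above is Taguchi's quadratic loss, L(y) = k (y - m)^2, whose expectation over a batch is k times (variance plus squared bias). The sketch below uses invented data for two processes with the same mean but different spread, showing why reducing variability reduces loss even when the mean is on target.

```python
# Expected Taguchi quadratic loss: k * (variance + (mean - target)^2).
def expected_loss(ys, target, k):
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / len(ys)
    return k * (var + (mean - target) ** 2)

# Two hypothetical processes, both on target on average:
tight = [9.9, 10.0, 10.1, 10.0]
loose = [9.0, 10.5, 11.0, 9.5]
print(expected_loss(tight, 10.0, 1.0), expected_loss(loose, 10.0, 1.0))
```

    For multiple quality characteristics, the total quality loss discussed in the review is simply the sum of such terms, one per characteristic, each with its own target and loss coefficient k.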

  20. Constrained Response Surface Optimisation and Taguchi Methods for Precisely Atomising Spraying Process

    NASA Astrophysics Data System (ADS)

    Luangpaiboon, P.; Suwankham, Y.; Homrossukon, S.

    2010-10-01

    This research presents the development of a design-of-experiments technique for quality improvement in the automotive manufacturing industry. The quality characteristic of interest is the colour shade, a key feature of a vehicle's exterior appearance. With a low percentage of first-time quality, the manufacturer has incurred high rework costs as well as longer production times. To resolve this problem permanently, the spraying conditions must be precisely optimized. This work therefore applies full factorial design, multiple regression, constrained response surface optimization methods (CRSOM), and Taguchi's method to identify the significant factors and determine the optimum factor levels to improve paint shop quality. First, a 2^k full factorial design was employed to study the effect of five factors: the paint flow rate at the robot setting, the paint levelling agent, the paint pigment, the additive slow solvent, and the non-volatile solids at spraying of the atomizing spraying machine. The colour shade at 15 and 45 degrees was measured with a spectrophotometer. Regression models of the colour shade at both angles were then developed from the significant factors affecting each response, and both models were placed into a linear program to maximize the colour shade subject to three main factors: the pigment, the additive solvent and the flow rate. Finally, Taguchi's method was applied to determine the proper levels of the key variable factors to achieve the target mean colour shade; the non-volatile solids factor emerged as one additional factor at this stage. The proper factor levels from both experimental design methods were used to set up a confirmation experiment. The colour shades measured at both 15 and 45 degrees on the spectrophotometer were close to the target and the defective at
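
    The final optimization step the abstract describes, maximizing a fitted regression model over factor bounds, can be sketched for the purely linear case: with a first-order model, the optimum of each factor sits at a bound chosen by its coefficient's sign. The factor names, coefficients, and bounds below are illustrative assumptions, not the paper's fitted values.

```python
# Maximize a first-order regression model over box constraints on coded factors.
# All numbers here are hypothetical stand-ins for the paper's fitted model.
intercept = 50.0
coef = {"pigment": 2.1, "slow_solvent": -1.4, "flow_rate": 0.8}
bounds = {name: (-1.0, 1.0) for name in coef}  # coded factor range

optimum = {}
for name, c in coef.items():
    lo, hi = bounds[name]
    optimum[name] = hi if c > 0 else lo  # sign of coefficient picks the bound

predicted = intercept + sum(coef[n] * optimum[n] for n in coef)
print(optimum, round(predicted, 1))
```

    With interaction or quadratic terms in the regression model, the optimum can move into the interior of the region, which is where a general linear-programming or response-surface solver becomes necessary.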

  21. Experimental Validation for Hot Stamping Process by Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Fawzi Zamri, Mohd; Lim, Syh Kai; Razlan Yusoff, Ahmad

    2016-02-01

    The demand for reduced gas emissions, energy saving, and safer vehicles has driven the development of Ultra High Strength Steel (UHSS). To strengthen a UHSS material such as boron steel, it must undergo hot stamping, with heating at a specific temperature for a specific time. In this paper, the Taguchi method is applied to determine the appropriate thickness, heating temperature, and heating time to achieve optimum strength of boron steel. The experiment is conducted using a flat, square hot stamping tool with a tensile dog-bone specimen as the blank product, and the tensile strength and hardness are measured as responses. The results showed that lower thickness together with higher heating temperature and heating time gives higher strength and hardness in the final product. In conclusion, boron steel blanks are able to achieve up to 1200 MPa tensile strength and 650 HV hardness.

  22. Workspace design for crane cabins applying a combined traditional approach and the Taguchi method for design of experiments.

    PubMed

    Spasojević Brkić, Vesna K; Veljković, Zorica A; Golubović, Tamara; Brkić, Aleksandar Dj; Kosić Šotić, Ivana

    2016-01-01

    Procedures in the development process of crane cabins are arbitrary and subjective. Since approximately 42% of incidents in the construction industry are linked to them, there is a need to collect fresh anthropometric data and provide additional recommendations for design. In this paper, dimensioning of the crane cabin interior space was carried out using a sample of 64 crane operators' anthropometric measurements, in the Republic of Serbia, by measuring workspace with 10 parameters using nine measured anthropometric data from each crane operator. This paper applies experiments run via full factorial designs using a combined traditional and Taguchi approach. The experiments indicated which design parameters are influenced by which anthropometric measurements and to what degree. The results are expected to be of use for crane cabin designers and should assist them to design a cabin that may lead to less strenuous sitting postures and fatigue for operators, thus improving safety and accident prevention.

  23. Incorporating Servqual-QFD with Taguchi Design for optimizing service quality design

    NASA Astrophysics Data System (ADS)

    Arbi Hadiyat, M.

    2018-03-01

    Deploying good service design has become an important issue for service companies seeking to improve customer satisfaction, especially as measured by Parasuraman's SERVQUAL level of service quality. Many researchers have proposed service design methods, some of them from an engineering viewpoint, implementing the QFD method or the robust Taguchi method. The QFD method finds a qualitative solution by generating the "hows", while the Taguchi method gives a more quantitative calculation for finding the best solution. This paper incorporates both QFD and Taguchi, yielding a better design process. The purpose of this research is to evaluate the combined methods by applying them to a case study, analyzing the results, and examining the robustness of the methods with respect to customer perception of service quality. Service attributes were first measured using SERVQUAL and improvements identified with QFD; the QFD solution was then deployed by defining Taguchi factor levels, calculating the signal-to-noise ratio in an orthogonal array, and finding the optimized Taguchi response. A case study was carried out on service design in a local bank. The resulting service design was then evaluated and shown to still meet customer satisfaction. Incorporating QFD and Taguchi performed well and can be adopted and developed in further research on the robustness of the results.

  4. Dysprosium sorption by polymeric composite bead: robust parametric optimization using Taguchi method.

    PubMed

    Yadav, Kartikey K; Dasgupta, Kinshuk; Singh, Dhruva K; Varshney, Lalit; Singh, Harvinderpal

    2015-03-06

    Polyethersulfone-based beads encapsulating di-2-ethylhexyl phosphoric acid have been synthesized and evaluated for the recovery of rare earth values from aqueous media. The percentage recovery and the sorption behavior of Dy(III) on these beads have been investigated over a wide range of experimental parameters. The Taguchi method, utilizing an L-18 orthogonal array, has been adopted to identify the process parameters most responsible for a high degree of recovery with enhanced sorption of Dy(III) from chloride medium. Analysis of variance indicated that the feed concentration of Dy(III) is the most influential factor for equilibrium sorption capacity, whereas aqueous phase acidity most influences the percentage recovery. The presence of polyvinyl alcohol and multiwalled carbon nanotubes modified the internal structure of the composite beads and resulted in a uniform distribution of the organic extractant inside the polymeric matrix. An experiment performed under the optimum process conditions predicted by the Taguchi method resulted in enhanced Dy(III) recovery and sorption capacity with minimum standard deviation. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Taguchi optimization: Case study of gold recovery from amalgamation tailing by using froth flotation method

    NASA Astrophysics Data System (ADS)

    Sudibyo, Aji, B. B.; Sumardi, S.; Mufakir, F. R.; Junaidi, A.; Nurjaman, F.; Karna, Aziza, Aulia

    2017-01-01

    The gold amalgamation process has been widely used to treat gold ore. It produces a tailing, or amalgamation solid waste, that still contains gold at 8-9 ppm. Froth flotation is one of the promising methods for beneficiating gold from this tailing; however, the process requires optimal conditions that depend on the type of raw material. In this study, the Taguchi method was used to determine the optimum conditions for the froth flotation process. The Taguchi optimization shows that gold recovery was most strongly influenced by particle size, with the best result at 150 mesh, followed by the potassium amyl xanthate concentration, pH and pine oil concentration at 1133.98, 4535.92 and 68.04 g/ton of amalgamation tailing, respectively.

  6. Application of Taguchi L32 orthogonal array design to optimize copper biosorption by using Spaghnum moss.

    PubMed

    Ozdemir, Utkan; Ozbay, Bilge; Ozbay, Ismail; Veli, Sevil

    2014-09-01

    In this work, a Taguchi L32 experimental design was applied to optimize the biosorption of Cu(2+) ions by an easily available biosorbent, Spaghnum moss. Batch biosorption tests were performed for the targeted experimental design with five factors (concentration, pH, biosorbent dosage, temperature and agitation time), each at two levels. Optimal experimental conditions were determined from the calculated signal-to-noise ratios, following a "higher is better" approach since the aim was to obtain high metal removal efficiencies. The impact ratios of the factors were determined by the model. Cu(2+) biosorption efficiencies were also predicted using the Taguchi method; the experimental and predicted values were close to each other, demonstrating the success of the Taguchi approach. Furthermore, thermodynamic, isotherm and kinetic studies were performed to explain the biosorption mechanism, and the calculated thermodynamic parameters were in good accordance with the results of the Taguchi model. Copyright © 2014 Elsevier Inc. All rights reserved.
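The "higher is better" signal-to-noise ratio used above has a standard closed form, SN = -10·log10((1/n)·Σ 1/y²). The following minimal sketch computes it for hypothetical removal efficiencies; the numbers are illustrative, not the study's data.

```python
import math

def sn_larger_is_better(replicates):
    """Taguchi 'larger is better' S/N ratio: -10 * log10(mean(1/y^2))."""
    n = len(replicates)
    return -10 * math.log10(sum(1 / (y * y) for y in replicates) / n)

# Hypothetical Cu(2+) removal efficiencies (%) from repeated runs of one trial
print(round(sn_larger_is_better([92.0, 94.5, 91.2]), 2))  # → 39.33
```

Computing this ratio for every row of the L32 array and picking the levels with the highest mean S/N is what yields the optimal experimental conditions.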

  7. Using Quality Management Methods in Knowledge-Based Organizations. An Approach to the Application of the Taguchi Method to the Process of Pressing Tappets into Anchors

    NASA Astrophysics Data System (ADS)

    Ţîţu, M. A.; Pop, A. B.; Ţîţu, Ș

    2017-06-01

    This paper presents a study on the modelling and optimization of certain variables using the Taguchi Method, applied to the process of pressing tappets into anchors in an organization that promotes knowledge-based management. The paper promotes practical concepts of the Taguchi Method and describes how the objective functions are obtained and used during the modelling and optimization of the process.

  8. The Taguchi Method Application to Improve the Quality of a Sustainable Process

    NASA Astrophysics Data System (ADS)

    Titu, A. M.; Sandu, A. V.; Pop, A. B.; Titu, S.; Ciungu, T. C.

    2018-06-01

    Taguchi’s method has long been used to improve the quality of the processes and products under analysis. This research addresses an unusual situation: the modelling of certain technical parameters in a process intended to be sustainable, improving process quality and ensuring quality through an experimental research method. Modern experimental techniques can be applied in any field, and this study reflects the benefits of combining agricultural sustainability principles with the application of Taguchi’s method. The experimental method used in this practical study combines engineering techniques with statistical experimental modelling to achieve rapid improvement of quality costs, in effect seeking optimization of existing processes and their main technical parameters. The paper is a technical study promoting an experiment based on the Taguchi method, considered effective because it allows 70 to 90% of the desired optimization of the technical parameters to be achieved rapidly. The remaining 10 to 30 percent can be obtained with one or two complementary experiments, limited to the 2 to 4 technical parameters considered most influential. Applying Taguchi’s method allowed the simultaneous study, in the same experiment, of the most important influencing factors in different combinations, while also determining the contribution of each factor.

  9. Experimental investigation and optimization of welding process parameters for various steel grades using NN tool and Taguchi method

    NASA Astrophysics Data System (ADS)

    Soni, Sourabh Kumar; Thomas, Benedict

    2018-04-01

    The term "weldability" describes a wide variety of characteristics of a material subjected to welding. In this analysis, we experimentally estimate the tensile strength of welded joints and then optimize the welding process parameters using the Taguchi method and an Artificial Neural Network (ANN) tool, in MINITAB and MATLAB respectively. The study reveals, through mechanical characterization, how varying steel composition influences weldability. Samples of different steel grades (EN8, EN19, EN24) were first prepared, welded together by the metal inert gas welding process, and then subjected to tensile testing on a universal testing machine (UTM) to evaluate the tensile strength of the welded specimens. A comparative study was then performed to find the effects of the welding parameters on weld strength using the Taguchi method and the neural network tool. We conclude that the Taguchi method and the neural network tool are efficient techniques for this optimization.

  10. The Taguchi methodology as a statistical tool for biotechnological applications: a critical appraisal.

    PubMed

    Rao, Ravella Sreenivas; Kumar, C Ganesh; Prakasham, R Shetty; Hobbs, Phil J

    2008-04-01

    Success in experiments and/or technology mainly depends on a properly designed process or product. The traditional method of process optimization involves the study of one variable at a time, which requires a number of combinations of experiments that are time, cost and labor intensive. The Taguchi method of design of experiments is a simple statistical tool involving a system of tabulated designs (arrays) that allows a maximum number of main effects to be estimated in an unbiased (orthogonal) fashion with a minimum number of experimental runs. It has been applied to predict the significant contribution of the design variable(s) and the optimum combination of each variable by conducting experiments on a real-time basis. The modeling that is performed essentially relates signal-to-noise ratio to the control variables in a 'main effect only' approach. This approach enables both multiple response and dynamic problems to be studied by handling noise factors. Taguchi principles and concepts have made extensive contributions to industry by bringing focused awareness to robustness, noise and quality. This methodology has been widely applied in many industrial sectors; however, its application in biological sciences has been limited. In the present review, the application and comparison of the Taguchi methodology has been emphasized with specific case studies in the field of biotechnology, particularly in diverse areas like fermentation, food processing, molecular biology, wastewater treatment and bioremediation.
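The "main effect only" modelling this review describes reduces to averaging the response over the runs at each level of each factor. A minimal sketch on the standard L4(2³) orthogonal array follows; the response values are hypothetical.

```python
# Standard L4(2^3) orthogonal array: each column is a two-level factor
L4 = [
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]
responses = [20.0, 24.0, 28.0, 30.0]  # hypothetical response per run

def main_effects(array, y):
    """Mean response at each level of each factor (the 'main effect only' model)."""
    effects = []
    for f in range(len(array[0])):
        levels = sorted({row[f] for row in array})
        effects.append({
            lvl: sum(yi for row, yi in zip(array, y) if row[f] == lvl)
            / sum(1 for row in array if row[f] == lvl)
            for lvl in levels
        })
    return effects

for f, eff in enumerate(main_effects(L4, responses), start=1):
    print(f"Factor {f}: {eff}")
```

Because the array is orthogonal, each level mean is estimated from a balanced subset of the runs, which is what makes the estimated main effects unbiased.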

  11. Taguchi's technique: an effective method for improving X-ray medical radiographic screen performance.

    PubMed

    Vlachogiannis, J G

    2003-01-01

    Taguchi's technique is a helpful tool for the experimental optimization of a large number of decision variables with a small number of off-line experiments. The technique appears to be an ideal tool for improving the performance of X-ray medical radiographic screens under a noise source. Many guides are currently available for improving the efficiency of X-ray medical radiographic screens. These guides can be refined using a second-stage parameter optimization, based on Taguchi's technique, that selects the optimum levels of controllable X-ray radiographic screen factors. A real example of the proposed technique is presented, with certain performance criteria given. The present research proposes the reinforcement of X-ray radiography by Taguchi's technique as a novel hardware mechanism.

  12. Optimization of bone drilling parameters using Taguchi method based on finite element analysis

    NASA Astrophysics Data System (ADS)

    Rosidi, Ayip; Lenggo Ginta, Turnad; Rani, Ahmad Majdi Bin Abdul

    2017-05-01

    Thermal necrosis leads to fracture problems and implant failure if the temperature exceeds 47 °C for one minute during bone drilling. To address this problem, this work studied a new thermal model using three drilling parameters, drill diameter, feed rate and spindle speed, and examined their effects on heat generation. The drill diameters were 4 mm, 6 mm and 6 mm; the feed rates were 80 mm/min, 100 mm/min and 120 mm/min; and the spindle speeds were 400 rpm, 500 rpm and 600 rpm. An optimization was then carried out by the Taguchi method to determine which parameter combinations can be used to prevent thermal necrosis during bone drilling. The results showed that all parameter combinations produced temperatures below 47 °C, and that finite element analysis combined with the Taguchi method can be used to predict temperature generation and optimize bone drilling parameters prior to clinical bone drilling. Any of the parameter combinations can therefore be used by surgeons to achieve sustainable orthopaedic surgery.
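For a response like drilling temperature that should be minimized, the complementary Taguchi "smaller is better" ratio applies: SN = -10·log10((1/n)·Σ y²). A sketch with hypothetical temperatures (not the study's data):

```python
import math

def sn_smaller_is_better(replicates):
    """Taguchi 'smaller is better' S/N ratio: -10 * log10(mean(y^2))."""
    n = len(replicates)
    return -10 * math.log10(sum(y * y for y in replicates) / n)

# Hypothetical drilling temperatures (deg C) for one parameter combination
print(round(sn_smaller_is_better([41.0, 43.5, 42.2]), 2))  # → -32.52
```

The combination with the highest (least negative) S/N is the one that keeps temperature lowest and most consistent across replicates.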

  13. An Efficient Taguchi Approach for the Performance Optimization of Health, Safety, Environment and Ergonomics in Generation Companies.

    PubMed

    Azadeh, Ali; Sheikhalishahi, Mohammad

    2015-06-01

    A unique framework for performance optimization of generation companies (GENCOs) based on health, safety, environment, and ergonomics (HSEE) indicators is presented. To rank this sector of industry, the combination of data envelopment analysis (DEA), principal component analysis (PCA), and Taguchi are used for all branches of GENCOs. These methods are applied in an integrated manner to measure the performance of GENCO. The preferred model between DEA, PCA, and Taguchi is selected based on sensitivity analysis and maximum correlation between rankings. To achieve the stated objectives, noise is introduced into input data. The results show that Taguchi outperforms other methods. Moreover, a comprehensive experiment is carried out to identify the most influential factor for ranking GENCOs. The approach developed in this study could be used for continuous assessment and improvement of GENCO's performance in supplying energy with respect to HSEE factors. The results of such studies would help managers to have better understanding of weak and strong points in terms of HSEE factors.

  14. Evaluation of Listeria monocytogenes survival in ice cream mixes flavored with herbal tea using Taguchi method.

    PubMed

    Ozturk, Ismet; Golec, Adem; Karaman, Safa; Sagdic, Osman; Kayacier, Ahmed

    2010-10-01

    In this study, the effects of the incorporation of some herbal teas at different concentrations into the ice cream mix on the population of Listeria monocytogenes were studied using Taguchi method. The ice cream mix samples flavored with herbal teas were prepared using green tea and sage at different concentrations. Afterward, fresh culture of L. monocytogenes was inoculated into the samples and the L. monocytogenes was counted at different storage periods. Taguchi method was used for experimental design and analysis. In addition, some physicochemical properties of samples were examined. Results suggested that there was some effect, although little, on the population of L. monocytogenes when herbal tea was incorporated into the ice cream mix. Additionally, the use of herbal tea caused a decrease in the pH values of the samples and significant changes in the color values.

  15. An Efficient Taguchi Approach for the Performance Optimization of Health, Safety, Environment and Ergonomics in Generation Companies

    PubMed Central

    Azadeh, Ali; Sheikhalishahi, Mohammad

    2014-01-01

    Background A unique framework for performance optimization of generation companies (GENCOs) based on health, safety, environment, and ergonomics (HSEE) indicators is presented. Methods To rank this sector of industry, the combination of data envelopment analysis (DEA), principal component analysis (PCA), and Taguchi are used for all branches of GENCOs. These methods are applied in an integrated manner to measure the performance of GENCO. The preferred model between DEA, PCA, and Taguchi is selected based on sensitivity analysis and maximum correlation between rankings. To achieve the stated objectives, noise is introduced into input data. Results The results show that Taguchi outperforms other methods. Moreover, a comprehensive experiment is carried out to identify the most influential factor for ranking GENCOs. Conclusion The approach developed in this study could be used for continuous assessment and improvement of GENCO's performance in supplying energy with respect to HSEE factors. The results of such studies would help managers to have better understanding of weak and strong points in terms of HSEE factors. PMID:26106505

  16. Study of Dimple Effect on the Friction Characteristics of a Journal Bearing using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Murthy, A. Amar; Raghunandana, Dr.

    2018-02-01

    The effect of producing dimples, by chemical etching or by machining, on the bushing surface of a journal bearing to reduce friction is investigated using the Taguchi method. The analysis is based on results from a series of experiments conducted to study the effect of dimples on the Stribeck curve. It is statistically shown that producing dimples on the bushing surface of a journal bearing has a significant effect on the friction coefficient when light oils are used. There is also an interaction effect between speed-load and load-dimples; hence interaction effects, which are usually neglected, should be considered during actual experiments, since they contribute significantly to reducing friction in the mixed lubrication regime. Had the experiments been designed using the Taguchi method, the number of experiments would have been reduced to half of the set actually conducted.

  17. New charging strategy for lithium-ion batteries based on the integration of Taguchi method and state of charge estimation

    NASA Astrophysics Data System (ADS)

    Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay

    2015-01-01

    In this paper, a new charging strategy for lithium-polymer batteries (LiPBs) is proposed based on the integration of the Taguchi method (TM) and state of charge (SOC) estimation. The TM is applied to search for an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC, which controls and terminates the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same types of LiPBs with different capacities and cycle lives. The proposed charging strategy also provides a much shorter charging time, narrower temperature variation and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.

  18. Optimizing Cu(II) removal from aqueous solution by magnetic nanoparticles immobilized on activated carbon using Taguchi method.

    PubMed

    Ebrahimi Zarandi, Mohammad Javad; Sohrabi, Mahmoud Reza; Khosravi, Morteza; Mansouriieh, Nafiseh; Davallo, Mehran; Khosravan, Azita

    2016-01-01

    This study synthesized magnetic nanoparticles (Fe(3)O(4)) immobilized on activated carbon (AC) and used them as an effective adsorbent for Cu(II) removal from aqueous solution. The effect of three parameters, namely the concentration of Cu(II), the dosage of the Fe(3)O(4)/AC magnetic nanocomposite and the pH, on the removal of Cu(II) was studied. To examine and describe the optimum condition for each parameter, Taguchi's optimization method was used in a batch system, with an L9 orthogonal array for the experimental design. The removal percentage (R%) of Cu(II) and the uptake capacity (q) were transformed into signal-to-noise ratios (S/N) for a 'larger-the-better' response. The Taguchi results, analyzed by selecting the best run from the S/N ratios, were statistically tested using analysis of variance; the tests showed that the main effects of all parameters were significant within a 95% confidence level. The best conditions for removal of Cu(II) were a pH of 7, a nanocomposite dosage of 0.1 g L(-1) and an initial Cu(II) concentration of 20 mg L(-1) at a constant temperature of 25 °C. Overall, the results showed that the simple Taguchi method is suitable for optimizing the Cu(II) removal experiments.

  19. Modified Mahalanobis Taguchi System for Imbalance Data Classification

    PubMed Central

    2017-01-01

    The Mahalanobis Taguchi System (MTS) is considered one of the most promising binary classification algorithms to handle imbalance data. Unfortunately, MTS lacks a method for determining an efficient threshold for the binary classification. In this paper, a nonlinear optimization model is formulated based on minimizing the distance between MTS Receiver Operating Characteristics (ROC) curve and the theoretical optimal point named Modified Mahalanobis Taguchi System (MMTS). To validate the MMTS classification efficacy, it has been benchmarked with Support Vector Machines (SVMs), Naive Bayes (NB), Probabilistic Mahalanobis Taguchi Systems (PTM), Synthetic Minority Oversampling Technique (SMOTE), Adaptive Conformal Transformation (ACT), Kernel Boundary Alignment (KBA), Hidden Naive Bayes (HNB), and other improved Naive Bayes algorithms. MMTS outperforms the benchmarked algorithms especially when the imbalance ratio is greater than 400. A real life case study on manufacturing sector is used to demonstrate the applicability of the proposed model and to compare its performance with Mahalanobis Genetic Algorithm (MGA). PMID:28811820

  20. Mixing behavior of the rhombic micromixers over a wide Reynolds number range using Taguchi method and 3D numerical simulations.

    PubMed

    Chung, C K; Shih, T R; Chen, T C; Wu, B H

    2008-10-01

    A planar micromixer with rhombic microchannels and a converging-diverging element has been systematically investigated through the Taguchi method, CFD-ACE simulations and experiments. To reduce the footprint and extend the operating range of Reynolds number, the Taguchi method was used to numerically study the performance of the micromixer in an L9 orthogonal array. Mixing efficiency is strongly influenced by the geometrical parameters and the Reynolds number (Re). The four factors in the L9 orthogonal array are the number of rhombi, the turning angle, the width of the rhombic channel and the width of the throat. Their sensitivity, ranked by the Taguchi method, is: number of rhombi > width of the rhombic channel > width of the throat > turning angle of the rhombic channel. Increasing the number of rhombi, reducing the widths of the rhombic channel and the throat, and lowering the turning angle resulted in better mixing efficiency. In simulations, the optimal design of the micromixer indicates over 90% mixing efficiency at both Re ≥ 80 and Re ≤ 0.1. Experimental results for the optimal design are consistent with the simulations. This planar rhombic micromixer simplifies the complex fabrication process of multi-layer or three-dimensional micromixers and improves on the performance of a previous rhombic micromixer at a reduced footprint and lower Re.
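A sensitivity ranking like the one reported above is conventionally obtained by comparing each factor's delta, i.e., the range of its level-mean responses. The sketch below uses hypothetical level means (mixing efficiencies, %) chosen only to reproduce the stated ordering.

```python
# Hypothetical mean mixing efficiency (%) at each of three levels per factor
level_means = {
    "number_of_rhombi": [55.0, 72.0, 88.0],
    "channel_width": [62.0, 71.0, 83.0],
    "throat_width": [66.0, 72.0, 79.0],
    "turning_angle": [70.0, 72.0, 74.0],
}

# Delta = range of level means; larger delta means a more influential factor
deltas = {k: max(v) - min(v) for k, v in level_means.items()}
ranking = sorted(deltas, key=deltas.get, reverse=True)
print(ranking)  # → ['number_of_rhombi', 'channel_width', 'throat_width', 'turning_angle']
```

Ranking by delta is the usual tabular summary that accompanies a Taguchi response table, alongside the per-level S/N means.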

  1. A Taguchi study of the aeroelastic tailoring design process

    NASA Technical Reports Server (NTRS)

    Bohlmann, Jonathan D.; Scott, Robert C.

    1991-01-01

    A Taguchi study was performed to determine the important players in the aeroelastic tailoring design process and to find the best composition of the optimization's objective function. The Wing Aeroelastic Synthesis Procedure (TSO) was used to ascertain the effects that factors such as composite laminate constraints, roll effectiveness constraints, and built-in wing twist and camber have on the optimum, aeroelastically tailored wing skin design. The results show the Taguchi method to be a viable engineering tool for computational inquiries, and provide some valuable lessons about the practice of aeroelastic tailoring.

  2. Application of Taguchi approach to optimize the sol-gel process of the quaternary Cu2ZnSnS4 with good optical properties

    NASA Astrophysics Data System (ADS)

    Nkuissi Tchognia, Joël Hervé; Hartiti, Bouchaib; Ridah, Abderraouf; Ndjaka, Jean-Marie; Thevenin, Philippe

    2016-07-01

    This research addresses the optimal deposition parameter configuration for the synthesis of Cu2ZnSnS4 (CZTS) thin films by the sol-gel method combined with spin coating on ordinary glass substrates, without sulfurization. A Taguchi design with an L9 (3^4) orthogonal array, a signal-to-noise (S/N) ratio and an analysis of variance (ANOVA) are used to optimize the performance characteristic (optical band gap) of the CZTS thin films. Four deposition parameters (factors) were chosen, namely the annealing temperature, the annealing time, and the Cu/(Zn + Sn) and Zn/Sn ratios, each at three levels. The effects of the deposition parameters on the structural and optical properties are studied, and the factors of the deposition process most significant for the optical properties of the as-prepared films are identified. Applying the Taguchi method showed that the significant parameters are the Zn/Sn ratio and the annealing temperature.

  3. Optimization of porthole die geometrical variables by Taguchi method

    NASA Astrophysics Data System (ADS)

    Gagliardi, F.; Ciancio, C.; Ambrogio, G.; Filice, L.

    2017-10-01

    Porthole die extrusion is commonly used to manufacture hollow profiles made of lightweight alloys for numerous industrial applications. The reliability of extruded parts is strongly affected by the quality of the longitudinal and transversal seam welds. Accordingly, the die geometry must be designed correctly and the process parameters selected properly to achieve the desired product quality. In this study, 3D numerical simulations were created and run to investigate the role of various geometrical variables on punch load and maximum pressure inside the welding chamber, important outputs that affect, respectively, the required capacity of the extrusion press and the quality of the weld lines. The Taguchi technique was used to reduce the number of numerical simulations needed to consider the influence of twelve geometric variables, and analysis of variance (ANOVA) was implemented to analyze the effect of each input parameter on the two responses individually. The methodology was then used to determine the optimal process configuration, optimizing each of the two investigated process outputs individually. Finally, the responses of the optimized parameters were verified through finite element simulations, which closely matched the predicted values. This study shows the feasibility of the Taguchi technique for predicting performance and for optimization, and therefore for improving the design of a porthole extrusion process.

  4. Multidisciplinary design of a rocket-based combined cycle SSTO launch vehicle using Taguchi methods

    NASA Technical Reports Server (NTRS)

    Olds, John R.; Walberg, Gerald D.

    1993-01-01

    Results are presented from the optimization process of a winged-cone configuration SSTO launch vehicle that employs a rocket-based ejector/ramjet/scramjet/rocket operational mode variable-cycle engine. The Taguchi multidisciplinary parametric-design method was used to evaluate the effects of simultaneously changing a total of eight design variables, rather than changing them one at a time as in conventional tradeoff studies. A combination of design variables was in this way identified which yields very attractive vehicle dry and gross weights.

  5. Optimization of Injection Molding Parameters for HDPE/TiO₂ Nanocomposites Fabrication with Multiple Performance Characteristics Using the Taguchi Method and Grey Relational Analysis.

    PubMed

    Pervez, Hifsa; Mozumder, Mohammad S; Mourad, Abdel-Hamid I

    2016-08-22

    The current study presents an investigation on the optimization of the injection molding parameters of HDPE/TiO₂ nanocomposites using grey relational analysis with the Taguchi method. Four control factors, namely filler concentration (i.e., TiO₂), barrel temperature, residence time and holding time, were chosen, each at three levels. Mechanical properties, such as yield strength, Young's modulus and elongation, were selected as the performance targets. Nine experimental runs were carried out based on the Taguchi L₉ orthogonal array, and the data were processed according to the grey relational steps. The optimal process parameters were found from the average responses of the grey relational grades, and the ideal operating conditions were a filler concentration of 5 wt % TiO₂, a barrel temperature of 225 °C, a residence time of 30 min and a holding time of 20 s. Moreover, analysis of variance (ANOVA) was applied to identify the most significant factor, and the percentage of TiO₂ nanoparticles was found to have the most significant effect on the properties of the HDPE/TiO₂ nanocomposites fabricated through the injection molding process.
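The grey relational steps referred to above can be sketched as follows: normalize each response to [0, 1], compute grey relational coefficients against the ideal point with a distinguishing coefficient (ζ = 0.5 is customary), and average the coefficients into one grade per run. All numbers below are hypothetical, not the study's data.

```python
ZETA = 0.5  # distinguishing coefficient, conventionally 0.5

def grey_relational_grades(responses):
    """responses[i][j]: j-th quality characteristic of run i (larger-the-better)."""
    n_runs, n_resp = len(responses), len(responses[0])
    # Larger-the-better normalization of each response column to [0, 1]
    norm = []
    for j in range(n_resp):
        col = [responses[i][j] for i in range(n_runs)]
        lo, hi = min(col), max(col)
        norm.append([(v - lo) / (hi - lo) for v in col])
    # Grey relational coefficient vs the ideal (1.0), averaged into a grade
    grades = []
    for i in range(n_runs):
        coeffs = [ZETA / (abs(1.0 - norm[j][i]) + ZETA) for j in range(n_resp)]
        grades.append(sum(coeffs) / n_resp)
    return grades

# Hypothetical (yield strength, elongation) pairs for three runs
print(grey_relational_grades([(25.0, 8.0), (28.0, 9.5), (26.5, 9.0)]))
```

The run with the highest grade is treated as the best compromise across the multiple performance characteristics, which is how the multi-response problem is collapsed into a single Taguchi-style response.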

  6. Permeability Evaluation Through Chitosan Membranes Using Taguchi Design

    PubMed Central

    Sharma, Vipin; Marwaha, Rakesh Kumar; Dureja, Harish

    2010-01-01

    In the present study, chitosan membranes capable of imitating permeation characteristics of diclofenac diethylamine across animal skin were prepared using cast drying method. The effect of concentration of chitosan, concentration of cross-linking agent (NaTPP), crosslinking time was studied using Taguchi design. Taguchi design ranked concentration of chitosan as the most important factor influencing the permeation parameters of diclofenac diethylamine. The flux of the diclofenac diethylamine solution through optimized chitosan membrane (T9) was found to be comparable to that obtained across rat skin. The mathematical model developed using multilinear regression analysis can be used to formulate chitosan membranes that can mimic the desired permeation characteristics. The developed chitosan membranes can be utilized as a substitute to animal skin for in vitro permeation studies. PMID:21179329

  7. Permeability evaluation through chitosan membranes using taguchi design.

    PubMed

    Sharma, Vipin; Marwaha, Rakesh Kumar; Dureja, Harish

    2010-01-01

    In the present study, chitosan membranes capable of imitating permeation characteristics of diclofenac diethylamine across animal skin were prepared using cast drying method. The effect of concentration of chitosan, concentration of cross-linking agent (NaTPP), crosslinking time was studied using Taguchi design. Taguchi design ranked concentration of chitosan as the most important factor influencing the permeation parameters of diclofenac diethylamine. The flux of the diclofenac diethylamine solution through optimized chitosan membrane (T9) was found to be comparable to that obtained across rat skin. The mathematical model developed using multilinear regression analysis can be used to formulate chitosan membranes that can mimic the desired permeation characteristics. The developed chitosan membranes can be utilized as a substitute to animal skin for in vitro permeation studies.

  8. Taguchi Approach to Design Optimization for Quality and Cost: An Overview

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.

    1990-01-01

    Calibrations to the existing cost of doing business in space indicate that establishing a human presence on the Moon and Mars under the Space Exploration Initiative (SEI) will require resources felt by many to be more than the national budget can afford. For SEI to succeed, we must design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business, and that new philosophy and technology must be employed to design and produce reliable, high-quality space systems at low cost. Recognizing the need to reduce cost and improve quality and productivity, the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM), a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and the use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), are among the most important statistical tools of TQM for designing high-quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost. They have been used successfully in Japan and the United States to design reliable, high-quality products at low cost in areas such as automobiles and consumer electronics; however, these methods are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, describe the current state of applications, and discuss their role in identifying cost-sensitive design parameters.

  9. Optimization of an Optical Inspection System Based on the Taguchi Method for Quantitative Analysis of Point-of-Care Testing

    PubMed Central

    Yeh, Chia-Hsien; Zhao, Zi-Qi; Shen, Pi-Lan; Lin, Yu-Cheng

    2014-01-01

    This study presents an optical inspection system for detecting a commercial point-of-care testing product, together with a new detection model covering qualitative through quantitative analysis. Human chorionic gonadotropin (hCG) strips (the cut-off value of the commercial hCG product is 25 mIU/mL) were the detection target in our study. We used a complementary metal-oxide semiconductor (CMOS) sensor to detect the colors of the test line and control line in the specific strips and to reduce the observation errors of the naked eye. To achieve better linearity between the grayscale and the concentration, and to decrease the standard deviation (increase the signal-to-noise ratio, S/N), the Taguchi method was used to find the optimal parameters for the optical inspection system. The pregnancy test used the principles of the lateral flow immunoassay, and the colors of the test and control lines were produced by gold nanoparticles. Because of the sandwich immunoassay model, the color of the gold nanoparticles in the test line darkened with increasing hCG concentration. As the results reveal, the S/N increased from 43.48 dB to 53.38 dB, and the detectable hCG concentration range extended from 6.25 to 50 mIU/mL with a standard deviation of less than 10%. With the optimal parameters determined by the Taguchi method to decrease the detection limit and increase the linearity, the optical inspection system can be applied to various commercial rapid tests for the detection of ketamine, troponin I, and fatty acid binding protein (FABP). PMID:25256108
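
    The S/N figures quoted above (in dB) follow Taguchi's standard definitions. A minimal Python sketch of the two common forms (the replicate values below are illustrative, not the study's data):

```python
import math

def sn_smaller_the_better(values):
    # Taguchi S/N ratio (dB) when smaller responses are better:
    # S/N = -10 * log10(mean(y^2))
    return -10 * math.log10(sum(y * y for y in values) / len(values))

def sn_larger_the_better(values):
    # S/N = -10 * log10(mean(1 / y^2)) when larger responses are better
    return -10 * math.log10(sum(1 / (y * y) for y in values) / len(values))

# Hypothetical replicate grayscale readings for one trial condition
print(round(sn_larger_the_better([148.0, 152.0, 150.0]), 2))
```

    A higher S/N indicates a more robust setting; Taguchi analysis picks, for each factor, the level with the highest average S/N.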

  10. Wear behavior of electroless Ni-P-W coating under lubricated condition - a Taguchi based approach

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Arkadeb; Duari, Santanu; Barman, Tapan Kumar; Sahoo, Prasanta

    2016-09-01

    The present study aims to investigate the tribological behavior of electroless Ni-P-W coating under engine oil lubricated conditions to ascertain its suitability for automotive applications. The coating is deposited onto mild steel specimens by the electroless method. The experiments are carried out on a pin-on-disc type tribo-tester under lubrication. Three tribo-testing parameters, namely the applied normal load, sliding speed and sliding duration, are varied at three levels each, and their effects on the wear depth of the deposits are studied. The experiments are carried out based on the combinations available in Taguchi's L27 orthogonal array (OA). Optimization of the tribo-testing parameters is carried out using Taguchi's S/N ratio method to minimize the wear depth. Analysis of variance carried out at a confidence level of 99% indicates that the sliding speed is the most significant parameter in controlling the wear behavior of the deposits. Coating characterization is done using scanning electron microscopy, energy dispersive X-ray analysis and X-ray diffraction techniques. It is seen that the wear mechanism under lubricated conditions is abrasive in nature.

  11. Optimization of Injection Molding Parameters for HDPE/TiO2 Nanocomposites Fabrication with Multiple Performance Characteristics Using the Taguchi Method and Grey Relational Analysis

    PubMed Central

    Pervez, Hifsa; Mozumder, Mohammad S.; Mourad, Abdel-Hamid I.

    2016-01-01

    The current study presents an investigation on the optimization of injection molding parameters of HDPE/TiO2 nanocomposites using grey relational analysis with the Taguchi method. Four control factors, including filler concentration (i.e., TiO2), barrel temperature, residence time and holding time, were chosen at three different levels of each. Mechanical properties, such as yield strength, Young’s modulus and elongation, were selected as the performance targets. Nine experimental runs were carried out based on the Taguchi L9 orthogonal array, and the data were processed according to the grey relational steps. The optimal process parameters were found based on the average responses of the grey relational grades, and the ideal operating conditions were found to be a filler concentration of 5 wt % TiO2, a barrel temperature of 225 °C, a residence time of 30 min and a holding time of 20 s. Moreover, analysis of variance (ANOVA) has also been applied to identify the most significant factor, and the percentage of TiO2 nanoparticles was found to have the most significant effect on the properties of the HDPE/TiO2 nanocomposites fabricated through the injection molding process. PMID:28773830
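
    The Taguchi L9 array used above can be generated with the standard modular construction over three levels; a sketch of the generic array (not specific to this study's factor assignment):

```python
def l9_orthogonal_array():
    # L9(3^4): 9 runs, 4 three-level factors, built from the classic
    # columns a, b, (a + b) mod 3, (a + 2b) mod 3.
    return [(i // 3, i % 3, (i // 3 + i % 3) % 3, (i // 3 + 2 * (i % 3)) % 3)
            for i in range(9)]

for run in l9_orthogonal_array():
    print(run)
```

    In any two columns, every ordered pair of levels appears exactly once, which is what makes the array orthogonal and lets nine runs estimate four factor effects.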

  12. Rapid development of xylanase assay conditions using Taguchi methodology.

    PubMed

    Prasad Uday, Uma Shankar; Bandyopadhyay, Tarun Kanti; Bhunia, Biswanath

    2016-11-01

    The present investigation is mainly concerned with the rapid development of extracellular xylanase assay conditions using Taguchi methodology. The extracellular xylanase was produced from Aspergillus niger (KP874102.1), a new strain isolated from a soil sample of the Baramura forest, Tripura West, India. Four physical parameters, including temperature, pH, buffer concentration and incubation time, were considered as key factors for xylanase activity and were optimized using Taguchi robust design methodology for enhanced xylanase activity. The Taguchi method recommends the use of the S/N ratio to measure quality characteristics; the main effects, interaction effects and optimal levels of the process factors were determined based on analysis of the S/N ratio. Analysis of variance (ANOVA) was performed to identify statistically significant process factors. ANOVA results showed that temperature contributed the maximum impact (62.58%) on xylanase activity, followed by pH (22.69%), buffer concentration (9.55%) and incubation time (5.16%). Predicted results showed that enhanced xylanase activity (81.47%) can be achieved with pH 2, temperature 50°C, buffer concentration 50 mM and incubation time 10 min.
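
    The percentage contributions reported above are ratios of each factor's sum of squares to the total sum of squares from the ANOVA table; a sketch with hypothetical sums of squares (not the study's values):

```python
def percent_contribution(ss_by_factor):
    # ANOVA percent contribution: each factor's sum of squares
    # divided by the total sum of squares, expressed as a percentage.
    total = sum(ss_by_factor.values())
    return {f: round(100.0 * ss / total, 2) for f, ss in ss_by_factor.items()}

# Hypothetical sums of squares for four factors
print(percent_contribution({"temperature": 50.0, "pH": 30.0,
                            "buffer": 15.0, "time": 5.0}))
```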

  13. Experimental study of optimal self compacting concrete with spent foundry sand as partial replacement for M-sand using Taguchi approach

    NASA Astrophysics Data System (ADS)

    Nirmala, D. B.; Raviraj, S.

    2016-06-01

    This paper presents the application of the Taguchi approach to obtain the optimal mix proportion for Self Compacting Concrete (SCC) containing spent foundry sand and M-sand. Spent foundry sand is used as a partial replacement for M-sand. The SCC mix has seven control factors, namely coarse aggregate, M-sand with spent foundry sand, cement, fly ash, water, superplasticizer and viscosity modifying agent. The modified Nan Su method is used to proportion the initial SCC mix. An L18 (2¹×3⁷) Orthogonal Array (OA), with the seven control factors at 3 levels each, is used in the Taguchi approach, resulting in 18 SCC mix proportions. All mixtures are extensively tested in both fresh and hardened states to verify whether they meet the practical and technical requirements of SCC. The "nominal the best" quality characteristic is applied to the test results to arrive at the optimal SCC mix proportion. Test results indicate that the optimal mix satisfies the requirements of fresh and hardened properties of SCC. The study reveals the feasibility of using spent foundry sand as a partial replacement for M-sand in SCC and also that the Taguchi method is a reliable tool to arrive at an optimal mix proportion of SCC.
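
    The "nominal the best" characteristic applied above has its own S/N form, rewarding responses that hit the target with low spread; a minimal sketch (the slump-flow values are illustrative, not from this study):

```python
import math
import statistics

def sn_nominal_the_best(values):
    # Taguchi "nominal the best" S/N (dB): 10 * log10(ybar^2 / s^2),
    # where ybar is the mean response and s^2 the sample variance.
    ybar = statistics.mean(values)
    s2 = statistics.variance(values)
    return 10 * math.log10(ybar * ybar / s2)

# Hypothetical replicate slump-flow measurements (mm) for one SCC mix
print(round(sn_nominal_the_best([680.0, 690.0, 685.0]), 2))
```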

  14. Factors Affecting Optimal Surface Roughness of AISI 4140 Steel in Turning Operation Using Taguchi Experiment

    NASA Astrophysics Data System (ADS)

    Novareza, O.; Sulistiyarini, D. H.; Wiradmoko, R.

    2018-02-01

    This paper presents the results of using the Taguchi method in the turning process of AISI 4140 medium carbon steel. The primary concern is to find the optimal surface roughness after the turning process. The Taguchi method is used to obtain a combination of factors and factor levels that yields the optimum surface roughness. Four important factors with three levels each were used in the experiment based on the Taguchi method. A total of 27 experiments were carried out during the research and analysed using the analysis of variance (ANOVA) method. The surface finish was measured as Ra-type surface roughness. The depth of cut was found to be the most important factor for reducing the surface roughness of AISI 4140 steel. By contrast, the other factors, i.e. spindle speed and the side rake angle of the tool, were proven to have less effect on the surface finish. Interestingly, coolant composition emerged as the second most important factor for reducing the roughness; further research may be needed to explain this result.

  15. Improving the Glucose Meter Error Grid With the Taguchi Loss Function.

    PubMed

    Krouwer, Jan S

    2016-07-01

    Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics, such as mean absolute relative deviation (MARD), are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function assigns each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A zone limit). Values are averaged over all data, which provides an indication of the risk of an incorrect medical decision. This allows one to differentiate glucose meter performance for the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. © 2015 Diabetes Technology Society.
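
    One way to read the normalized loss described above is as a quadratic (Taguchi) loss scaled to reach 1 at the A-zone boundary. A sketch under that assumption (the 15% limit and the readings are hypothetical, not the paper's values):

```python
def a_zone_loss(measured, reference, a_zone_limit_pct=15.0):
    # Quadratic loss normalized to the A-zone limit: 0 for no error,
    # 1 once the relative deviation reaches the (hypothetical) limit.
    rel_dev_pct = 100.0 * abs(measured - reference) / reference
    return min((rel_dev_pct / a_zone_limit_pct) ** 2, 1.0)

# Hypothetical meter readings against a reference of 100 mg/dL
readings = [98.0, 104.0, 109.0]
mean_loss = sum(a_zone_loss(r, 100.0) for r in readings) / len(readings)
print(round(mean_loss, 3))
```

    Averaging the per-reading losses gives a single risk-oriented score that still separates meters whose values all fall inside the A zone.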

  16. Modelling the Cast Component Weight in Hot Chamber Die Casting using Combined Taguchi and Buckingham's π Approach

    NASA Astrophysics Data System (ADS)

    Singh, Rupinder

    2018-02-01

    The hot chamber (HC) die casting process is one of the most widely used commercial processes for the casting of low temperature metals and alloys. This process gives a near-net shape product with high dimensional accuracy. However, in the actual field environment the best settings of the input parameters often conflict as the shape and size of the casting change, and one has to trade off among various output parameters such as hardness, dimensional accuracy, casting defects and microstructure. For online inspection of cast component properties (without affecting the production line), weight measurement has been established as a cost effective method in the field environment, as the difference in weight between sound and unsound castings reflects possible casting defects. In the present work, at the first stage, the effect of three input process parameters (namely pressure at the 2nd phase in HC die casting, metal pouring temperature and die opening time) has been studied to optimize the cast component weight `W' as the output parameter, in the form of a macro model based upon a Taguchi L9 OA. Buckingham's π approach has then been applied to the Taguchi-based macro model to develop a micro model. This study highlights the combined Taguchi-Buckingham approach as a case study (for conversion of a macro model into a micro model) through identification of the optimum levels of the input parameters (based on the Taguchi approach) and the development of a mathematical model (based on Buckingham's π approach). The developed mathematical model can be used for predicting W in the HC die casting process with more flexibility. The results yield a second-degree polynomial equation for predicting cast component weight in HC die casting and suggest that pressure at the 2nd stage is one of the most significant factors controlling the casting defects/weight of the casting.

  17. Optimization of Parameters for Manufacture Nanopowder Bioceramics at Machine Pulverisette 6 by Taguchi and ANOVA Method

    NASA Astrophysics Data System (ADS)

    Van Hoten, Hendri; Gunawarman; Mulyadi, Ismet Hari; Kurniawan Mainil, Afdhal; Putra, Bismantoloa dan

    2018-02-01

    This research concerns the manufacture of nanopowder bioceramics from local materials using ball milling for biomedical applications. Source materials for the manufacture of medicines are plants, animal tissues, microbial structures and engineered biomaterials. Raw medicinal materials take the form of a powder before mixing. The aim of this research is to find sources of biomedical materials that, as nanoscale powders, can be used as raw material for medicine. One biomedical material of the bioceramic type that can be used as a raw material for medicine is chicken eggshell. This research develops methods for manufacturing nanopowder material from chicken eggshells with ball milling using the Taguchi method and ANOVA. Eggshells were milled using variations of milling rate (150, 200 and 250 rpm), milling time (1, 2 and 3 hours) and grinding-ball to eggshell powder weight ratio (BPR) of 1:6, 1:8 and 1:10. Before milling, the eggshells were crushed and calcined at a temperature of 900°C. After milling, the fine eggshell powder was characterized using SEM to determine its size. The result of this research is the optimum parameter set from the Taguchi design analysis: 250 rpm milling rate, 3 hours milling time and a BPR of 1:6, with an average eggshell powder size of 1.305 μm. Milling speed, milling time and ball-to-powder weight ratio contribute 60.82%, 30.76% and 6.64% respectively, with an error of 1.78%.

  18. Multi response optimization of internal grinding process parameters for outer ring using Taguchi method and PCR-TOPSIS

    NASA Astrophysics Data System (ADS)

    Wisnuadi, Alief Regyan; Damayanti, Retno Wulan; Pujiyanto, Eko

    2018-02-01

    Bearings are among the most widely used parts in the automotive industry. One of the leading bearing manufacturing companies in the world is SKF Indonesia. This company must produce bearings to international standards and must pursue continuous improvement in order to face competition. Until now, SKF Indonesia has only performed quality control in its Quality Assurance department; in other words, quality improvement at SKF Indonesia has not been done thoroughly. The purpose of this research is to improve the quality of the outer ring product at SKF Indonesia by conducting an internal grinding process experiment on the settings of speed ratio, fine position, and spark-out grinding time. The specific purpose of this experiment is to optimize quality responses such as roughness, roundness, and cycle time. All responses in this experiment were of the smaller-the-better type. The Taguchi method and PCR-TOPSIS are used for the optimization process. The result of this research shows that by using the Taguchi method and PCR-TOPSIS, the optimum condition occurs at a speed ratio of 36, fine position of 18 µm/s and spark out of 0.5 s. The optimum conditions resulted in roughness of 0.398 µm, roundness of 1.78 µm and cycle time of 8.1 s. These results are better than the previous ones and meet the standards: the roughness of 0.523 µm decreased to 0.398 µm and the average cycle time of 8.5 s decreased to 8.1 s.

  19. Parameter optimization of flux-aided backing-submerged arc welding by using Taguchi method

    NASA Astrophysics Data System (ADS)

    Pu, Juan; Yu, Shengfu; Li, Yuanyuan

    2017-07-01

    Flux-aided backing-submerged arc welding has been conducted on D36 steel with thickness of 20 mm. The effects of processing parameters such as welding current, voltage, welding speed and groove angle on welding quality were investigated by Taguchi method. The optimal welding parameters were predicted and the individual importance of each parameter on welding quality was evaluated by examining the signal-to-noise ratio and analysis of variance (ANOVA) results. The importance order of the welding parameters for the welding quality of weld bead was: welding current > welding speed > groove angle > welding voltage. The welding quality of weld bead increased gradually with increasing welding current and welding speed and decreasing groove angle. The optimum values of the welding current, welding speed, groove angle and welding voltage were found to be 1050 A, 27 cm/min, 40∘ and 34 V, respectively.

  20. A Comparative Analysis of Taguchi Methodology and Shainin System DoE in the Optimization of Injection Molding Process Parameters

    NASA Astrophysics Data System (ADS)

    Khavekar, Rajendra; Vasudevan, Hari, Dr.; Modi, Bhavik

    2017-08-01

    Two well-known Design of Experiments (DoE) methodologies, Taguchi Methods (TM) and the Shainin System (SS), are compared and analyzed in this study through their implementation in a plastic injection molding unit. Experiments were performed at a company manufacturing perfume bottle caps (made of acrylic) using TM and SS to find the root cause of defects and to optimize the process parameters for minimum rejection. The experiments reduced the rejection rate from approximately 40% during trial runs to 8.57%, which is quite low, representing successful implementation of these DoE methods. The comparison showed that both methodologies identified the same set of variables as critical for defect reduction, but with a change in their order of significance. Also, Taguchi methods require more experiments and consume more time compared to the Shainin System. The Shainin System is less complicated and easy to implement, whereas Taguchi methods are statistically more reliable for optimization of process parameters. Finally, the experiments implied that DoE methods are robust and reliable in implementation, as organizations attempt to improve quality through optimization.

  1. Preparation of nanocellulose from Imperata brasiliensis grass using Taguchi method.

    PubMed

    Benini, Kelly Cristina Coelho de Carvalho; Voorwald, Herman Jacobus Cornelis; Cioffi, Maria Odila Hilário; Rezende, Mirabel Cerqueira; Arantes, Valdeir

    2018-07-15

    Cellulose nanoparticles (CNs) were prepared by acid hydrolysis of the cellulose pulp extracted from the Brazilian satintail (Imperata brasiliensis) plant using a conventional and a totally chlorine free method. Initially, a statistical design of experiments was carried out using a Taguchi orthogonal array to study the hydrolysis parameters and the main properties (crystallinity, thermal stability, morphology, and sizes) of the nanocellulose. X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FTIR), field-emission scanning electron microscopy (FE-SEM), dynamic light scattering (DLS), zeta potential and thermogravimetric analysis (TGA) were carried out to characterize the physical-chemical properties of the CNs obtained. Cellulose nanoparticles with diameters ranging from 10 to 60 nm and lengths between 150 and 250 nm were successfully obtained at a sulfuric acid concentration of 64% (m/m), a temperature of 35 °C, a reaction time of 75 min, and a 1:20 (g/mL) pulp-to-solution ratio. Under this condition, the Imperata brasiliensis CNs showed good stability in suspension, a crystallinity index of 65%, and a cellulose degradation temperature of about 117 °C. Considering that these properties are similar to those of nanocelluloses from other lignocellulosic feedstocks, Imperata grass also seems to be a suitable source for nanocellulose production. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.

    PubMed

    Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank

    2017-12-01

    Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over large thermal gradients and thus provide better performance (reported efficiency up to 11%) than traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in the early stages of development due to the inherent complexity of their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We have considered a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, height of segmentation, hot-side temperature, and load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, as compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. The Taguchi results are validated against results obtained using the traditional full factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
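
    The run-count saving quoted above (25 vs. 3125 experiments) follows directly from replacing a full factorial over five five-level factors with an L25 orthogonal array; a sketch of the arithmetic:

```python
def full_factorial_runs(levels, factors):
    # A full factorial tests every combination of factor levels.
    return levels ** factors

# Five design parameters at five levels each
full = full_factorial_runs(5, 5)   # 3125 runs
taguchi = 25                       # L25(5^6) covers up to six 5-level factors
print(full, taguchi, full // taguchi)
```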

  3. Optimization of laccase production from Marasmiellus palmivorus LA1 by Taguchi method of Design of experiments.

    PubMed

    Chenthamarakshan, Aiswarya; Parambayil, Nayana; Miziriya, Nafeesathul; Soumya, P S; Lakshmi, M S Kiran; Ramgopal, Anala; Dileep, Anuja; Nambisan, Padma

    2017-02-13

    Fungal laccase has profound applications in different fields of biotechnology due to its broad specificity and high redox potential. Any successful application of the enzyme requires large scale production. As laccase production is highly dependent on medium components and culture conditions, optimizing these is essential for efficient production. Production of laccase by the fungal strain Marasmiellus palmivorus LA1 under solid state fermentation was optimized by the Taguchi design of experiments (DOE) methodology. An orthogonal array (L8) was designed using Qualitek-4 software to study the interactions and relative influence of the seven selected factors, complemented by a one-factor-at-a-time approach. The optimum condition formulated was temperature (28 °C), pH (5), galactose (0.8% w/v), cupric sulphate (3 mM), inoculum concentration (number of mycelial agar pieces) (6 Nos.) and substrate length (0.05 m). An overall yield increase of 17.6 fold was obtained after optimization. Statistical optimization led to the elimination of an insignificant medium component, ammonium dihydrogen phosphate, from the process and contributed to a 1.06 fold increase in enzyme production. A final production of 667.4 ± 13 IU/mL laccase activity paves the way for the application of this strain in industry. The study optimized lignin-degrading laccases from Marasmiellus palmivorus LA1; these laccases can thus be used for further applications at different scales of production after the properties of the enzyme are analyzed. The study also confirmed the usefulness of the Taguchi method for optimizing product production.

  4. Parametric Optimization of Wire Electrical Discharge Machining of Powder Metallurgical Cold Worked Tool Steel using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Sudhakara, Dara; Prasanthi, Guvvala

    2017-04-01

    Wire cut EDM is an unconventional machining process used to build components of complex shape. The current work mainly deals with the optimization of surface roughness while machining P/M cold worked tool steel by wire cut EDM using the Taguchi method. The process parameters of the wire cut EDM are ON, OFF, IP, SV, WT, and WP. An L27 OA is used to design the experiments. ANOVA analysis is employed to identify the parameters affecting the surface roughness. The optimum levels for minimum surface roughness are ON = 108 µs, OFF = 63 µs, IP = 11 A, SV = 68 V and WT = 8 g.

  5. Design of Maternity Pillow by Using Kansei and Taguchi Methods

    NASA Astrophysics Data System (ADS)

    Ilma Rahmillah, Fety; Nanda kartika, Rachmah

    2017-06-01

    One of the customers' considerations when purchasing a product is whether it satisfies their feelings and emotions; such a product can enhance the sleep quality of pregnant women. However, most existing products such as maternity pillows are still designed from the companies' perspective. This study aims to capture the desires of pregnant women toward the maternity pillow product using kansei words and to analyze the optimal design with the Taguchi method. The eight collected kansei words were durable, aesthetic, comfortable, portable, simple, multifunctional, attractive motif, and easy to maintain. An L16 orthogonal array is used because there are three variables with two levels and four variables with four levels. It can be concluded that the best maternity pillow to satisfy the customers can be designed by combining D1-E2-F2-G2-C1-B2-A2, meaning the model is U-shaped, with a flowery motif, medium color, bag model B, cotton pillow cover, silicon filling, and a double zipper. However, it is also possible to create the combination D1-E2-F2-G2-C1-B1-A1 out of cost considerations, which switches to a single zipper and a dacron filling. In addition, the total percentage of contribution by using ANOVA reaches 95%.

  6. Application of the Taguchi Method for Optimizing the Process Parameters of Producing Lightweight Aggregates by Incorporating Tile Grinding Sludge with Reservoir Sediments

    PubMed Central

    Chen, How-Ji; Chang, Sheng-Nan; Tang, Chao-Wei

    2017-01-01

    This study aimed to apply the Taguchi optimization technique to determine the process conditions for producing synthetic lightweight aggregate (LWA) by incorporating tile grinding sludge powder with reservoir sediments. An orthogonal array L16(4⁵) was adopted, which consisted of five controllable four-level factors (i.e., sludge content, preheat temperature, preheat time, sintering temperature, and sintering time). Moreover, the analysis of variance method was used to explore the effects of the experimental factors on the particle density, water absorption, bloating ratio, and loss on ignition of the produced LWA. Overall, the produced aggregates had particle densities ranging from 0.43 to 2.1 g/cm³ and water absorption ranging from 0.6% to 13.4%. These values are comparable to the requirements for ordinary and high-performance LWAs. The results indicated that it is considerably feasible to produce high-performance LWA by incorporating tile grinding sludge with reservoir sediments. PMID:29125576

  7. Application of the Taguchi Method for Optimizing the Process Parameters of Producing Lightweight Aggregates by Incorporating Tile Grinding Sludge with Reservoir Sediments.

    PubMed

    Chen, How-Ji; Chang, Sheng-Nan; Tang, Chao-Wei

    2017-11-10

    This study aimed to apply the Taguchi optimization technique to determine the process conditions for producing synthetic lightweight aggregate (LWA) by incorporating tile grinding sludge powder with reservoir sediments. An orthogonal array L16(4⁵) was adopted, which consisted of five controllable four-level factors (i.e., sludge content, preheat temperature, preheat time, sintering temperature, and sintering time). Moreover, the analysis of variance method was used to explore the effects of the experimental factors on the particle density, water absorption, bloating ratio, and loss on ignition of the produced LWA. Overall, the produced aggregates had particle densities ranging from 0.43 to 2.1 g/cm³ and water absorption ranging from 0.6% to 13.4%. These values are comparable to the requirements for ordinary and high-performance LWAs. The results indicated that it is considerably feasible to produce high-performance LWA by incorporating tile grinding sludge with reservoir sediments.

  8. Application of Taguchi L16 design method for comparative study of ability of 3A zeolite in removal of Rhodamine B and Malachite green from environmental water samples

    NASA Astrophysics Data System (ADS)

    Rahmani, Mashaallah; Kaykhaii, Massoud; Sasani, Mojtaba

    2018-01-01

    This study aimed to investigate the efficiency of 3A zeolite as a novel adsorbent for the removal of Rhodamine B and Malachite green dyes from water samples. To increase the removal efficiency, parameters affecting the adsorption process were investigated and optimized by adopting the Taguchi design of experiments approach. The percentage contribution of each parameter to the removal of Rhodamine B and Malachite green dyes was determined using ANOVA, which showed that the most effective parameters in the removal of RhB and MG by 3A zeolite are the initial dye concentration and pH, respectively. Under optimized conditions, the value predicted by the Taguchi design method and the value obtained experimentally showed good agreement (more than 94.86%). The good adsorption efficiency obtained for the proposed method indicates that 3A zeolite is capable of removing significant amounts of Rhodamine B and Malachite green from environmental water samples.

  9. Application of Taguchi L16 design method for comparative study of ability of 3A zeolite in removal of Rhodamine B and Malachite green from environmental water samples.

    PubMed

    Rahmani, Mashaallah; Kaykhaii, Massoud; Sasani, Mojtaba

    2018-01-05

    This study aimed to investigate the efficiency of 3A zeolite as a novel adsorbent for the removal of Rhodamine B and Malachite green dyes from water samples. To increase the removal efficiency, parameters affecting the adsorption process were investigated and optimized by adopting the Taguchi design of experiments approach. The percentage contribution of each parameter to the removal of Rhodamine B and Malachite green dyes was determined using ANOVA, which showed that the most effective parameters in the removal of RhB and MG by 3A zeolite are the initial dye concentration and pH, respectively. Under optimized conditions, the value predicted by the Taguchi design method and the value obtained experimentally showed good agreement (more than 94.86%). The good adsorption efficiency obtained for the proposed method indicates that 3A zeolite is capable of removing significant amounts of Rhodamine B and Malachite green from environmental water samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Use of Taguchi methodology to enhance the yield of caffeine removal with growing cultures of Pseudomonas pseudoalcaligenes.

    PubMed

    Ashengroph, Morahem; Ababaf, Sajad

    2014-12-01

    Microbial caffeine removal is a green solution for the treatment of caffeinated products and agro-industrial effluents. We directed this investigation at optimizing a bio-decaffeination process with growing cultures of Pseudomonas pseudoalcaligenes through Taguchi methodology, a structured statistical approach that can lower variations in a process through Design of Experiments (DOE). Five parameters, i.e. initial fructose, tryptone, Zn(+2) ion and caffeine concentrations and incubation time, were selected, and an L16 orthogonal array was applied to design experiments with four 4-level factors and one 3-level factor (4⁴ × 1³). Data analysis was performed using the statistical analysis of variance (ANOVA) method. Furthermore, the optimal conditions were determined by combining the optimal levels of the significant factors and verified by a confirming experiment. Measurement of the residual caffeine concentration in the reaction mixture was performed using high-performance liquid chromatography (HPLC). Use of Taguchi methodology for optimization of design parameters resulted in about 86.14% reduction of caffeine after 48 h of incubation when 5 g/l fructose, 3 mM Zn(+2) ion and 4.5 g/l caffeine were present in the designed media. Under the optimized conditions, the yield of degradation of caffeine (4.5 g/l) by the native strain of Pseudomonas pseudoalcaligenes TPS8 increased from 15.8% to 86.14%, which is 5.4 fold higher than the normal yield. According to the experimental results, Taguchi methodology provides a powerful approach for identifying the favorable parameters for caffeine removal using strain TPS8, and also has potential application with similar strains to improve the yield of caffeine removal from caffeine-containing solutions.

  11. Taguchi experimental design to determine the taste quality characteristic of candied carrot

    NASA Astrophysics Data System (ADS)

    Ekawati, Y.; Hapsari, A. A.

    2018-03-01

    Robust parameter design is used to design a product that is robust to noise factors, so that the product's performance fits the target and delivers better quality. In the process of designing and developing the innovative product of candied carrot, robust parameter design is carried out using the Taguchi method. The method is used to determine an optimal quality design, based on the process and the composition of product ingredients in accordance with consumer needs and requirements. According to the identification of consumer needs from previous research, the quality dimensions that need to be assessed are the taste and texture of the product; the quality dimension assessed in this research is limited to taste. Organoleptic testing is used for this assessment, specifically hedonic testing, which makes assessments based on consumer preferences. The data processing uses mean and signal-to-noise ratio calculations and optimal level setting to determine the optimal process and composition of product ingredients. The optimal values are analyzed using confirmation experiments to prove that the proposed product matches consumer needs and requirements. The results of this research are the identification of the factors that affect the product's taste and the optimal quality of the product according to the Taguchi method.
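
The mean and signal-to-noise ratio calculation mentioned above can be sketched for a larger-the-better response such as a hedonic taste score. The replicate scores here are invented placeholders, not the study's data:

```python
import math

def sn_larger_the_better(values):
    # Taguchi larger-the-better S/N ratio in dB:
    #   S/N = -10 * log10( mean(1 / y^2) )
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / len(values))

# Made-up hedonic scores from replicated taste panels for two settings.
recipe_a = [6.0, 7.0, 6.5]   # similar mean, low spread
recipe_b = [4.0, 8.0, 6.0]   # similar mean, high spread
print(sn_larger_the_better(recipe_a) > sn_larger_the_better(recipe_b))  # -> True
```

The less variable recipe wins even though the means are close, which is exactly the robustness to noise that the S/N ratio rewards.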

  12. Rolling bearing fault diagnosis and health assessment using EEMD and the adjustment Mahalanobis-Taguchi system

    NASA Astrophysics Data System (ADS)

    Chen, Junxun; Cheng, Longsheng; Yu, Hui; Hu, Shaolin

    2018-01-01

    For the timely identification of the potential faults of a rolling bearing and to observe its health condition intuitively and accurately, a novel fault diagnosis and health assessment model for a rolling bearing based on the ensemble empirical mode decomposition (EEMD) method and the adjustment Mahalanobis-Taguchi system (AMTS) method is proposed. The specific steps are as follows: first, the vibration signal of a rolling bearing is decomposed by EEMD, and the extracted features are used as the input vectors of AMTS. Then, the AMTS method, which is designed to overcome the shortcomings of the traditional Mahalanobis-Taguchi system and to extract the key features, is proposed for fault diagnosis. Finally, a health indicator (HI) concept is proposed according to the results of the fault diagnosis to accomplish the health assessment of a bearing over its life cycle. To validate the superiority of the proposed approach, it is compared with other recent methods, and the methodology is successfully validated on a vibration data-set acquired from seeded defects and from an accelerated life test. The results show that this method represents the actual situation well and is able to accurately and effectively identify the fault type.

  13. Application of Taguchi optimisation of electro metal - electro winning (EMEW) for nickel metal from laterite

    NASA Astrophysics Data System (ADS)

    Sudibyo; Hermida, L.; Junaedi, A.; Putra, F. A.

    2017-11-01

    Nickel and cobalt can be processed from low-grade laterite using solvent extraction and electrowinning. One electrowinning method with good performance for producing pure metal is electro metal-electro winning (EMEW). In this work, solvent extraction using Cyanex-Versatic acid in toluene as the organic phase was used to separate nickel and cobalt. The aqueous phase from the extraction was then processed by EMEW in order to deposit nickel metal on the cathode. The parameters studied were batch temperature, operation time, voltage, and boric acid concentration; they were optimized using the Taguchi design of experiments. The Taguchi analysis shows that the optimum EMEW result was obtained at a batch temperature of 60 °C, a voltage of 2 V, 6 hours of operation, and 0.5 M boric acid.

  14. Absolute variation of the mechanical characteristics of halloysite reinforced polyurethane nanocomposites complemented by Taguchi and ANOVA approaches

    NASA Astrophysics Data System (ADS)

    Gaaz, Tayser Sumer; Sulong, Abu Bakar; Kadhum, Abdul Amir H.; Nassir, Mohamed H.; Al-Amiery, Ahmed A.

    The variation of the results of the mechanical properties of halloysite nanotubes (HNTs) reinforced thermoplastic polyurethane (TPU) at different HNTs loadings was implemented as a tool for analysis.
    The preparation of HNTs-TPU nanocomposites was performed under four controlled parameters of mixing temperature, mixing speed, mixing time, and HNTs loading, at three levels each, to satisfy the Taguchi method orthogonal array L9, aiming to optimize these parameters for the best measurements of tensile strength, Young's modulus, and tensile strain (known as responses). The maximum variation of the experimental results for each response was determined and analysed based on the optimized results predicted by the Taguchi method and ANOVA. It was found that the maximum absolute variations of the three mentioned responses are 69%, 352%, and 126%, respectively. The analysis has shown that the preparation of the optimized tensile strength requires 1 wt.% HNTs loading (excluding 2 wt.% and 3 wt.%), a mixing temperature of 190 °C (excluding 200 °C and 210 °C), and a mixing speed of 30 rpm (excluding 40 rpm and 50 rpm). In addition, the analysis has determined that the mixing time of 20 min has no effect on the preparation. The analysis was fortified by ANOVA, images of FESEM, and DSC results. Seemingly, the agglomeration and distribution of HNTs in the nanocomposite play an important role in the process.
    The outcome of the analysis could be considered a very important step towards the reliability of the Taguchi method.

  15. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    PubMed

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

    Recently, support vector machine (SVM) has shown excellent performance on classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize parameters C and γ to increase classification accuracy for multiclass classification.
    The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306

  16. Quantification of dental prostheses on cone-beam CT images by the Taguchi method

    PubMed Central

    Kuo, Rong-Fu; Fang, Kwang-Ming; Wong, T. Y.

    2016-01-01

    The gray value accuracy of dental cone-beam computed tomography (CBCT) is affected by dental metal prostheses. The distortion of dental CBCT gray values could lead to inaccuracies in orthodontic and implant treatment. The aim of this study was to quantify the effect of scanning parameters and dental metal prostheses on the accuracy of dental CBCT gray values using the Taguchi method. Eight dental model casts of an upper jaw including prostheses, and a ninth prosthesis-free dental model cast, were scanned by two dental CBCT devices. The mean gray values of selected circular regions of interest (ROIs) were measured on CBCT images of the eight dental model casts and compared with those measured from CBCT images of the prosthesis-free cast. For each image set, four consecutive slices of gingiva were selected. Seven factors (CBCT device, occlusal plane canting, implant connection, prosthesis position, coping material, coping thickness, and type of dental restoration) were used to evaluate the effects of scanning parameters and dental prostheses.
    Statistical methods of signal-to-noise ratio (S/N) and analysis of variance (ANOVA) with 95% confidence were applied to quantify the effects of scanning parameters and dental prostheses on dental CBCT gray value accuracy. For ROIs surrounding dental prostheses, the accuracy of CBCT gray values was affected primarily by implant connection (42%), followed by type of restoration (29%), prosthesis position (19%), coping material (4%), and coping thickness (4%). For a single crown prosthesis (without implant support) placed in dental model casts, gray value differences for ROIs 1–9 were below 12%, and gray value differences for ROIs 13–18 away from prostheses were below 10%. We found the gray value differences to be between 7% and 8% for regions next to a single implant-supported titanium prosthesis, and between 46% and 59% for regions between double implant

  17. Nitric acid treated multi-walled carbon nanotubes optimized by Taguchi method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shamsuddin, Shahidah Arina; Hashim, Uda; Halim, Nur Hamidah Abdul

    The electron transfer rate (ETR) of CNTs can be enhanced by increasing the amount of COOH groups on their walls and opened tips. With the aim of achieving the highest production of COOH, the Taguchi robust design has been used for the first time to optimize the surface modification of MWCNTs by nitric acid oxidation.
    Three main oxidation parameters, namely acid concentration, treatment temperature and treatment time, were selected as the control factors to be optimized. The amount of COOH produced was measured by FTIR spectroscopy through the absorbance intensity. From the analysis, we found that acid concentration and treatment time had the most important influence on the production of COOH, while treatment temperature had only an intermediate effect. The optimum amount of COOH was achieved by treatment with 8.0 M nitric acid at 120 °C for 2 hours.

  18. Multi-Response Optimization of Resin Finishing by Using a Taguchi-Based Grey Relational Analysis

    PubMed Central

    Shafiq, Faizan; Sarwar, Zahid; Jilani, Muhammad Munib; Cai, Yingjie

    2018-01-01

    In this study, the influence and optimization of the factors of a non-formaldehyde resin finishing process on cotton fabric using a Taguchi-based grey relational analysis were experimentally investigated. An L27 orthogonal array was selected for five parameters and three levels by applying Taguchi's design of experiments. The Taguchi technique was coupled with a grey relational analysis to obtain a grey relational grade for evaluating multiple responses, i.e., crease recovery angle (CRA), tearing strength (TE), and whiteness index (WI).
    The optimum parameters for resin finishing were a resin concentration of 80 g·L−1, polyethylene softener at 40 g·L−1, catalyst at 25 g·L−1, a curing temperature of 140 °C, and a curing time of 2 min. The goodness-of-fit of the data was validated by analysis of variance (ANOVA). The optimized sample was characterized by Fourier-transform infrared (FTIR) spectroscopy, thermogravimetric analysis (TGA), and scanning electron microscopy (SEM) to better understand the structural details of the resin finishing process. The results showed improved thermal stability and confirmed the presence of well-deposited resin on the optimized fabric surface. PMID:29543724

  19. Flank wear analysing of high speed end milling for hardened steel D2 using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Hazza Faizi Al-Hazza, Muataz; Ibrahim, Nur Asmawiyah bt; Adesta, Erry T. Y.; Khan, Ahsan Ali; Abdullah Sidek, Atiah Bt.

    2017-03-01

    One of the main challenges for any manufacturer is how to decrease machining cost without affecting the final quality of the product. One of the new advanced machining processes in industry is high speed hard end milling, which merges three advanced machining processes: high speed milling, hard milling and dry milling. However, one of the most important challenges in this process is controlling the flank wear rate. The flank wear rate during machining should therefore be analyzed in order to determine the best cutting levels that will not affect the final quality of the product.
    In this research, the Taguchi method has been used to investigate the effects of cutting speed, feed rate and depth of cut, and to determine the best levels to minimize the flank wear rate up to a total wear length of 0.3 mm, based on the ISO standard, in order to maintain the finishing requirements.

  20. Study of optimal laser parameters for cutting QFN packages by Taguchi's matrix method

    NASA Astrophysics Data System (ADS)

    Li, Chen-Hao; Tsai, Ming-Jong; Yang, Ciann-Dong

    2007-06-01

    This paper reports a study of optimal laser parameters for cutting QFN (Quad Flat No-lead) packages using a diode-pumped solid-state laser system (DPSSL). The QFN cutting path includes two different materials: the encapsulating epoxy and a copper lead frame substrate. Taguchi's experimental method with an L9(3^4) orthogonal array is employed to obtain the optimal combination of parameters. A quantified mechanism was proposed for examining the laser cutting quality of a QFN package. The influence of factors such as laser driving current, laser frequency, and cutting speed on the cutting quality is also examined. From the experimental results, the factors in order of decreasing significance are found to be (a) laser frequency, (b) cutting speed, and (c) laser driving current. The optimal parameters were obtained at a laser frequency of 2 kHz, a cutting speed of 2 mm/s, and a driving current of 29 A. Besides identifying this sequence of dominance, the matrix experiment also determines the best level for each control factor.
    The verification experiment confirms that the application of laser cutting technology to QFN is very successful using the optimal laser parameters predicted from the matrix experiments.

  21. Oily wastewater treatment by ultrafiltration using Taguchi experimental design.

    PubMed

    Salahi, A.; Mohammadi, T.

    2011-01-01

    In this research, results of an experimental investigation on separation of oil from real oily wastewater using an ultrafiltration (UF) polymeric membrane are presented. In order to enhance the performance of UF in API separator effluent treatment and to obtain higher permeation flux (PF), the effects of operating factors on PF were studied. Five factors at four levels were investigated: trans-membrane pressure (TMP), temperature (T), cross-flow velocity (CFV), pH and salt concentration (SC). The Taguchi method (an L16 orthogonal array (OA)) was used. Analysis of variance (ANOVA) was applied to calculate the sum of squares, variance, error variance and contribution percentage of each factor on the response. The optimal levels determined for the five influential factors were: TMP, 3 bar; T, 40 °C; CFV, 1.0 m/s; SC, 25 g/L; and pH, 8. The results showed that CFV and SC are the most and the least effective factors on PF, respectively. Increasing CFV, TMP, T and pH improved the performance of the UF membrane process due to the enhanced driving force and reduced fouling. The effects of oil concentration (OC) in the wastewater on PF and total organic carbon (TOC) rejection were also investigated.
    Finally, the highest TOC rejection was found to be 85%.

  22. Application of Taguchi method to optimization of surface roughness during precise turning of NiTi shape memory alloy

    NASA Astrophysics Data System (ADS)

    Kowalczyk, M.

    2017-08-01

    This paper describes the research results on surface quality after precise turning of NiTi shape memory alloy (Nitinol) with tools whose edges are made of polycrystalline diamond (PCD). Nitinol, a nearly equiatomic nickel-titanium shape memory alloy, has wide applications in the arms industry, military, medicine, the aerospace industry, and industrial robots. Due to their specific properties, NiTi alloys are known to be difficult-to-machine materials, particularly by conventional techniques. Three independent parameters (vc, f, ap) affecting the surface roughness were analyzed. The parameter configurations were chosen by factorial design methods using an orthogonal plan of type L9, with three control factors changing on three levels, developed by G. Taguchi.
    S/N ratio and ANOVA analyses were performed to identify the cutting parameters that most influence surface roughness.

  23. Optimal design of loudspeaker arrays for robust cross-talk cancellation using the Taguchi method and the genetic algorithm.

    PubMed

    Bai, Mingsian R.; Tung, Chih-Wei; Lee, Chih-Chung

    2005-05-01

    An optimal design technique for loudspeaker arrays used in cross-talk cancellation, with application in three-dimensional audio, is presented. An array focusing scheme is presented on the basis of inverse propagation, which relates the transducers to a set of chosen control points. Tikhonov regularization is employed in designing the inverse cancellation filters. An extensive analysis is conducted to explore cancellation performance and robustness issues. To best compromise between the performance and robustness of the cross-talk cancellation system, optimal configurations are obtained with the aid of the Taguchi method and the genetic algorithm (GA). The proposed systems are further justified by physical as well as subjective experiments.
    The results reveal that a large number of loudspeakers, a closely spaced configuration, and optimal control point design all contribute to the robustness of cross-talk cancellation systems (CCS) against head misalignment.

  24. A feasibility investigation for modeling and optimization of temperature in bone drilling using fuzzy logic and Taguchi optimization methodology.

    PubMed

    Pandey, Rupesh Kumar; Panda, Sudhansu Sekhar

    2014-11-01

    Drilling of bone is a common procedure in orthopedic surgery to produce holes for screw insertion to fixate fracture devices and implants. The increase in temperature during such a procedure increases the chances of thermal invasion of the bone, which can cause thermal osteonecrosis, resulting in increased healing time or reduced stability and strength of the fixation. Therefore, drilling of bone with minimum temperature rise is a major challenge for orthopedic fracture treatment. This investigation discusses the use of fuzzy logic and the Taguchi methodology for predicting and minimizing the temperature produced during bone drilling. The drilling experiments were conducted on bovine bone using Taguchi's L25 experimental design. A fuzzy model is developed for predicting the temperature during orthopedic drilling as a function of the drilling process parameters (point angle, helix angle, feed rate and cutting speed). Optimum bone drilling process parameters for minimizing the temperature are determined using the Taguchi method.
    The effect of individual cutting parameters on the temperature produced was evaluated using analysis of variance. The fuzzy model using triangular and trapezoidal membership functions predicts the temperature within a maximum error of ±7%. Taguchi analysis of the obtained results determined the optimal drilling conditions for minimizing the temperature as A3B5C1. The developed system will simplify the tedious task of modeling and determining the optimal process parameters to minimize the bone drilling temperature. It will reduce the risk of thermal osteonecrosis and can be very effective for online condition monitoring of the process. © IMechE 2014.

  25. Wear Evaluation of AISI 4140 Alloy Steel with WC/C Lamellar Coatings Sliding Against EN 8 Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Kadam, Nikhil Rajendra; Karthikeyan, Ganesarethinam

    2016-10-01

    The purpose of the experiments in this paper is to use Taguchi methods to investigate the wear of WC/C coated nitrided AISI 4140 alloy steel. Lamellar WC/C coatings were deposited by physical vapor deposition on nitrided AISI 4140 alloy steel. The investigation includes wear evaluation using a pin-on-disk configuration. When WC/C coated AISI 4140 alloy steel slides against EN 8 steel, it was found that carbon-rich coatings show much lower wear of the countersurface than nitrogen-rich coatings.
    The results were correlated with the properties determined from tribological and mechanical characterization; by properly selecting the processing parameters, deposition of a WC/C coating decreases the wear rate of the substrate, which shows potential for tribological applications.

  26. Optimization of reactive-ion etching (RIE) parameters for fabrication of tantalum pentoxide (Ta2O5) waveguide using Taguchi method

    NASA Astrophysics Data System (ADS)

    Muttalib, M. Firdaus A.; Chen, Ruiqi Y.; Pearce, S. J.; Charlton, Martin D. B.

    2017-11-01

    In this paper, we demonstrate the optimization of reactive-ion etching (RIE) parameters for the fabrication of a tantalum pentoxide (Ta2O5) waveguide with a chromium (Cr) hard mask in a commercial OIPT Plasmalab 80 RIE etcher. A design of experiment (DOE) using the Taguchi method was implemented to find the optimum RF power, CHF3/Ar gas mixture ratio, and chamber pressure for a high etch rate, good selectivity, and a smooth waveguide sidewall.
    It was found that the optimized etch conditions obtained in this work were RF power = 200 W, gas ratio = 80%, and chamber pressure = 30 mTorr, giving an etch rate of 21.6 nm/min, a Ta2O5/Cr selectivity ratio of 28, and a smooth waveguide sidewall.

  27. Evaluation on the feasibility of using bamboo fillers in plastic gear manufacturing via the Taguchi optimization method

    NASA Astrophysics Data System (ADS)

    Mehat, N. M.; Kamaruddin, S.

    2017-10-01

    An increase in demand for industrial gears has instigated the escalating use of plastic-matrix composites, particularly carbon or glass fibre reinforced plastics, as gear materials to enhance the properties and overcome the limitations of plastic gears. However, the production of large quantities of these synthetic fibre reinforced composites poses a serious threat to the ecosystem. Therefore, this work was conducted to study the applicability and practicality of using bamboo fillers in plastic gear manufacturing, as opposed to synthetic fibres, via the Taguchi optimization method. The results showed that no failure mechanisms such as gear tooth root cracking or severe tooth wear were observed in tested gears made with 5-30 wt% bamboo filler, in comparison with the unfilled PP gear.
These results indicated that bamboo can be practically and economically used as an alternative filler in plastic material reinforcement as well as in minimizing the cost of raw material in general.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3995665','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3995665"><span>Biosorption of malachite green from aqueous solutions by Pleurotus ostreatus using <span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2014-01-01</p> <p>Dyes released into the environment have been posing a serious threat to natural ecosystems and aquatic life due to their stability under heat, light, chemical and other exposures. In this study, Pleurotus ostreatus (a macro-fungus) was used as a new biosorbent to study the biosorption of hazardous malachite green (MG) from aqueous solutions. The effective disposal of P. ostreatus is a meaningful work for environmental protection and maximum utilization of agricultural residues. The operational parameters such as biosorbent dose, pH, and ionic strength were investigated in a series of batch studies at 25°C. The Freundlich isotherm model described the biosorption equilibrium data well. The biosorption process followed the pseudo-second-order kinetic model. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was used to reduce the number of experiments required to determine the significance of factors and the optimum levels of experimental factors for MG biosorption. Biosorbent dose and initial MG concentration had significant influences on the percent removal and biosorption capacity. The highest percent removal reached 89.58% and the largest biosorption capacity reached 32.33 mg/g. 
Fourier transform infrared spectroscopy (FTIR) showed that functional groups such as carboxyl, hydroxyl, amino and phosphonate groups on the biosorbent surface could be the potential adsorption sites for MG biosorption. P. ostreatus can be considered as an alternative biosorbent for the removal of dyes from aqueous solutions. PMID:24620852</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24620852','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24620852"><span>Biosorption of malachite green from aqueous solutions by Pleurotus ostreatus using <span class="hlt">Taguchi</span> <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Zhengsuo; Deng, Hongbo; Chen, Can; Yang, Ying; Xu, Heng</p> <p>2014-03-12</p> <p>Dyes released into the environment have been posing a serious threat to natural ecosystems and aquatic life due to their stability under heat, light, chemical and other exposures. In this study, Pleurotus ostreatus (a macro-fungus) was used as a new biosorbent to study the biosorption of hazardous malachite green (MG) from aqueous solutions. The effective disposal of P. ostreatus is a meaningful work for environmental protection and maximum utilization of agricultural residues. The operational parameters such as biosorbent dose, pH, and ionic strength were investigated in a series of batch studies at 25°C. The Freundlich isotherm model described the biosorption equilibrium data well. The biosorption process followed the pseudo-second-order kinetic model. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was used to reduce the number of experiments required to determine the significance of factors and the optimum levels of experimental factors for MG biosorption. 
Biosorbent dose and initial MG concentration had significant influences on the percent removal and biosorption capacity. The highest percent removal reached 89.58% and the largest biosorption capacity reached 32.33 mg/g. Fourier transform infrared spectroscopy (FTIR) showed that functional groups such as carboxyl, hydroxyl, amino and phosphonate groups on the biosorbent surface could be the potential adsorption sites for MG biosorption. P. ostreatus can be considered as an alternative biosorbent for the removal of dyes from aqueous solutions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..184a2047H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..184a2047H"><span>Surface Roughness Optimization Using <span class="hlt">Taguchi</span> <span class="hlt">Method</span> of High Speed End Milling For Hardened Steel D2</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hazza Faizi Al-Hazza, Muataz; Ibrahim, Nur Asmawiyah bt; Adesta, Erry T. Y.; Khan, Ahsan Ali; Abdullah Sidek, Atiah Bt.</p> <p>2017-03-01</p> <p>The main challenge for any manufacturer is to achieve higher quality of their final products while maintaining minimum machining time. In this research, the final surface roughness was analysed and optimized with a maximum flank wear length of 0.3 mm. The experiment investigated the effect of cutting speed, feed rate and depth of cut on the final surface roughness, using D2 hardened to 52-56 HRC as the workpiece and coated carbide as the cutting tool, with higher cutting speeds of 120-240 mm/min. The experiment was conducted using an L9 <span class="hlt">Taguchi</span> orthogonal design. 
The results have been analysed using JMP software.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..324a2054L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..324a2054L"><span>Optimisation Of Cutting Parameters Of Composite Material Laser Cutting Process By <span class="hlt">Taguchi</span> <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lokesh, S.; Niresh, J.; Neelakrishnan, S.; Rahul, S. P. Deepak</p> <p>2018-03-01</p> <p>The aim of this work is to develop a laser cutting process model that can predict the relationship between the process input parameters and the resultant surface roughness and kerf width characteristics. The research conducted is based on Design of Experiment (DOE) analysis. Response Surface Methodology (RSM) is used in this work. It is one of the most practical and most effective techniques to develop a process model. Although RSM has been used for the optimization of the laser process, this research investigates laser cutting of materials such as composite wood (veneer) to determine the best laser-cutting conditions using the RSM process. The input parameters evaluated are focal length, power supply and cutting speed, the output responses being kerf width, surface roughness, and temperature. 
To efficiently optimize and customize the kerf width and surface roughness characteristics, a machine laser cutting process model using <span class="hlt">Taguchi</span> L9 orthogonal methodology was proposed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..352a2002Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..352a2002Y"><span>Application of <span class="hlt">taguchi</span> <span class="hlt">method</span> for selection parameter bleaching treatments against mechanical and physical properties of agave cantala fiber</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yudhanto, F.; Jamasri; Rochardjo, Heru S. B.</p> <p>2018-05-01</p> <p>The agave cantala fiber characterized in this research, which came from Sumenep, Madura, Indonesia, was chemically processed using sodium hydroxide (NaOH) and hydrogen peroxide (H2O2) solutions. The treatment with both solutions is called the bleaching process. Single-fiber tensile strength tests were used to obtain the mechanical properties, with the process parameters temperature, pH and H2O2 concentration selected using an L9 orthogonal array by the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. The results indicate that pH is the most significant parameter influencing the tensile strength, followed by temperature and H2O2 concentration. The bleaching treatment increased the crystallinity index of the fiber by 21%. The loss of the hemicellulose and lignin layers of the fiber can be seen from the changes in the FTIR bands at 1735 (C=O), 1627 (OH), 1319 (CH2), and 1250 (C-O). 
SEM photographs showed that bleaching makes the fibers rougher and cleaner than untreated fibers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28886524','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28886524"><span>Simultaneous quantification of arginine, alanine, methionine and cysteine amino acids in supplements using a novel bioelectro-nanosensor based on CdSe quantum dot/modified carbon nanotube hollow fiber pencil graphite electrode via <span class="hlt">Taguchi</span> <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hooshmand, Sara; Es'haghi, Zarrin</p> <p>2017-11-30</p> <p>Four amino acids have been simultaneously determined at a CdSe quantum dot-modified/multi-walled carbon nanotube hollow fiber pencil graphite electrode in different bodybuilding supplements. CdSe quantum dots were synthesized and <span class="hlt">applied</span> to construct a modified carbon nanotube hollow fiber pencil graphite electrode. FT-IR, TEM, XRD and EDAX <span class="hlt">methods</span> were <span class="hlt">applied</span> for characterization of the synthesized CdSe QDs. The electro-oxidation of arginine (Arg), alanine (Ala), methionine (Met) and cysteine (Cys) at the surface of the modified electrode was studied. Then <span class="hlt">Taguchi</span>'s <span class="hlt">method</span> was <span class="hlt">applied</span> using MINITAB 17 software to find the optimum conditions for the amino acid determination. Under the optimized conditions, the differential pulse (DP) voltammetric peak currents of Arg, Ala, Met and Cys increased linearly with their concentrations in the range of 0.287-33670 μM, and detection limits of 0.081, 0.158, 0.094 and 0.116 μM were obtained for them, respectively. Satisfactory results were achieved for calibration and validation sets. 
The prepared modified electrode provides very good resolution between the voltammetric peaks of the four amino acids, which makes it suitable for the detection of each in the presence of the others in real samples. Copyright © 2017. Published by Elsevier B.V.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28773815','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28773815"><span>Improvement of the Mechanical Properties of 1022 Carbon Steel Coil by Using the <span class="hlt">Taguchi</span> <span class="hlt">Method</span> to Optimize Spheroidized Annealing Conditions.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Chih-Cheng; Liu, Chang-Lun</p> <p>2016-08-12</p> <p>Cold forging is often <span class="hlt">applied</span> in the fastener industry. Wires in coil form are used as semi-finished products for the production of billets. This process usually requires preliminary drawing of the wire coil in order to reduce the diameter of the products. The wire usually has to be annealed to improve its cold formability. The quality of the spheroidized annealed wire affects the forming quality of screws. In the fastener industry, most companies use a subcritical process for spheroidized annealing. Various parameters affect the spheroidized annealing quality of steel wire, such as the spheroidized annealing temperature, prolonged heating time, furnace cooling time and flow rate of nitrogen (protective atmosphere). These spheroidized annealing parameters affect the quality characteristics of the steel wire, such as tensile strength and hardness. A series of experimental tests on AISI 1022 low carbon steel wire are carried out and the <span class="hlt">Taguchi</span> <span class="hlt">method</span> is used to obtain optimum spheroidized annealing conditions to improve the mechanical properties of steel wires for cold forming. 
The results show that the spheroidized annealing temperature and prolonged heating time have the greatest effect on the mechanical properties of steel wires. A comparison between the results obtained using the optimum spheroidizing conditions and those obtained using the original settings shows that the new spheroidizing parameter settings effectively improve the performance measures over their values at the original settings. The results presented in this paper could be used as a reference for wire manufacturers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013MatSP..31..424P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013MatSP..31..424P"><span>Optimization of sol-gel technique for coating of metallic substrates by hydroxyapatite using the <span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pourbaghi-Masouleh, M.; Asgharzadeh, H.</p> <p>2013-08-01</p> <p>In this study, the <span class="hlt">Taguchi</span> <span class="hlt">method</span> of design of experiment (DOE) was used to optimize the hydroxyapatite (HA) coatings on various metallic substrates deposited by the sol-gel dip-coating technique. The experimental design consisted of five factors including substrate material (A), surface preparation of substrate (B), dipping/withdrawal speed (C), number of layers (D), and calcination temperature (E), with three levels for each factor. An orthogonal array of L18 type with mixed levels of the control factors was utilized. Image processing of the micrographs of the coatings was conducted to determine the percentage of coated area (PCA). Chemical and phase composition of HA coatings were studied by XRD, FT-IR, SEM, and EDS techniques. 
The analysis of variance (ANOVA) indicated that the PCA of HA coatings was significantly affected by the calcination temperature. The optimum conditions from signal-to-noise (S/N) ratio analysis were A: pure Ti, B: polishing and etching for 24 h, C: 50 cm min-1, D: 1, and E: 300 °C. In the confirmation experiment using the optimum conditions, an HA coating with a high PCA of 98.5% was obtained.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016NatSR...627761J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016NatSR...627761J"><span>Thermochemical hydrolysis of macroalgae Ulva for biorefinery: <span class="hlt">Taguchi</span> robust design <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander</p> <p>2016-06-01</p> <p>Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using <span class="hlt">Taguchi</span> orthogonal arrays experimental design and Partial Least Square Regression, we investigated the impact of change and the comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrates release from green macroalgae from Ulva genus, a promising biorefinery feedstock. The average density of hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. 
We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharides released from Ulva biomass. We also found that within the tested range of parameters, hydrolysis at 121 °C, 30 min, 2% Acid, 15% Solids could lead to the highest yields of conversion: 54.134-57.500 g ethanol kg-1 Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008ITNS...55.2303K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008ITNS...55.2303K"><span><span class="hlt">Taguchi</span> Based Performance and Reliability Improvement of an Ion Chamber Amplifier for Enhanced Nuclear Reactor Safety</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kulkarni, R. D.; Agarwal, Vivek</p> <p>2008-08-01</p> <p>An ion chamber amplifier (ICA) is used as a safety device for neutronic power (flux) measurement in regulation and protection systems of nuclear reactors. Therefore, performance reliability of an ICA is an important issue. Appropriate quality engineering is essential to achieve a robust design and performance of the ICA circuit. It is observed that the low input bias current operational amplifiers used in the input stage of the ICA circuit are the most critical devices for proper functioning of the ICA. They are very sensitive to the gamma radiation present in their close vicinity. Therefore, the response of the ICA deteriorates with exposure to gamma radiation, resulting in a decrease in the overall reliability, unless the desired performance is ensured under all conditions. This paper presents a performance enhancement scheme for an ICA operated in the nuclear environment. 
The <span class="hlt">Taguchi</span> <span class="hlt">method</span>, which is a proven technique for reliability enhancement, has been used in this work. It is demonstrated that if a statistical, optimal design approach, like the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, is used, the cost of high quality and reliability may be brought down drastically. The complete methodology and statistical calculations involved are presented, as are the experimental and simulation results to arrive at a robust design of the ICA.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18238115','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18238115"><span>Design of a robust fuzzy controller for the arc stability of CO(2) welding process using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kim, Dongcheol; Rhee, Sehun</p> <p>2002-01-01</p> <p>CO(2) welding is a complex process. Weld quality is dependent on arc stability and on minimizing the effects of disturbances or changes in the operating conditions commonly occurring during the welding process. In order to minimize these effects, a controller can be used. In this study, a fuzzy controller was used in order to stabilize the arc during CO(2) welding. The input variable of the controller was the Mita index. This index quantitatively estimates the arc stability, which is influenced by many welding process parameters. Because the welding process is complex, a mathematical model of the Mita index was difficult to derive. Therefore, the parameter settings of the fuzzy controller were determined by performing actual control experiments without using a mathematical model of the controlled process. 
As a solution, the <span class="hlt">Taguchi</span> <span class="hlt">method</span> was used to determine the optimal control parameter settings of the fuzzy controller, making the control performance robust and insensitive to changes in the operating conditions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JMEP...22.1149R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JMEP...22.1149R"><span><span class="hlt">Taguchi</span> Optimization of Pulsed Current GTA Welding Parameters for Improved Corrosion Resistance of 5083 Aluminum Welds</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rastkerdar, E.; Shamanian, M.; Saatchi, A.</p> <p>2013-04-01</p> <p>In this study, the <span class="hlt">Taguchi</span> <span class="hlt">method</span> was used as a design of experiment (DOE) technique to optimize the pulsed current gas tungsten arc welding (GTAW) parameters for improved pitting corrosion resistance of AA5083-H18 aluminum alloy welds. An L9 (3^4) orthogonal array of the <span class="hlt">Taguchi</span> design, which involves nine experiments for four parameters, each varied at three levels, was used: peak current (P), base current (B), percent pulse-on time (T), and pulse frequency (F). Pitting corrosion resistance in 3.5 wt.% NaCl solution was evaluated by anodic polarization tests at room temperature and by calculating the width of the passive region (ΔEpit). Analysis of variance (ANOVA) was performed on the measured data and S/N (signal-to-noise) ratios. "Bigger is better" was selected as the quality characteristic (QC). The optimum conditions were found as 170 A, 85 A, 40%, and 6 Hz for the P, B, T, and F factors, respectively. 
The study showed that the percent pulse-on time has the highest influence on the pitting corrosion resistance (50.48%), followed by pulse frequency (28.62%), peak current (11.05%) and base current (9.86%). The range of optimum ΔEpit at optimum conditions with a confidence level of 90% was predicted to be between 174.81 and 177.74 mVSCE. Under optimum conditions, the confirmation test was carried out, and the experimental ΔEpit value of 176 mVSCE was in agreement with the predicted value from the <span class="hlt">Taguchi</span> model. In this regard, the model can be effectively used to predict the ΔEpit of pulsed current gas tungsten arc welded joints.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_5 --> <div id="page_6" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="101"> <li> <p><a target="_blank" 
onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012JMEP...21.1978Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012JMEP...21.1978Y"><span>Optimization of Experimental Conditions of the Pulsed Current GTAW Parameters for Mechanical Properties of SDSS UNS S32760 Welds Based on the <span class="hlt">Taguchi</span> Design <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yousefieh, M.; Shamanian, M.; Saatchi, A.</p> <p>2012-09-01</p> <p>The <span class="hlt">Taguchi</span> design <span class="hlt">method</span> with an L9 orthogonal array was implemented to optimize the pulsed current gas tungsten arc welding parameters for the hardness and the toughness of super duplex stainless steel (SDSS, UNS S32760) welds. In this regard, the hardness and the toughness were considered as performance characteristics. Pulse current, background current, % on time, and pulse frequency were chosen as main parameters. Each parameter was varied at three different levels. As a result of pooled analysis of variance, the pulse current is found to be the most significant factor for both the hardness and the toughness of SDSS welds, with percentage contributions of 71.81 for hardness and 78.18 for toughness. The % on time (21.99%) and the background current (17.81%) also had the next most significant effects on the hardness and the toughness, respectively. The optimum conditions within the selected parameter values for hardness were found as the first level of pulse current (100 A), third level of background current (70 A), first level of % on time (40%), and first level of pulse frequency (1 Hz), while they were found as the second level of pulse current (120 A), second level of background current (60 A), second level of % on time (60%), and third level of pulse frequency (5 Hz) for toughness. 
The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was found to be a promising tool to obtain the optimum conditions for such studies. Finally, in order to verify the experimental results, confirmation tests were carried out at the optimum working conditions. Under these conditions, there was good agreement between the predicted and the experimental results for both the hardness and the toughness.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JMEP...24.4870L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JMEP...24.4870L"><span>Improved Stress Corrosion Cracking Resistance and Strength of a Two-Step Aged Al-Zn-Mg-Cu Alloy Using <span class="hlt">Taguchi</span> <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lin, Lianghua; Liu, Zhiyi; Ying, Puyou; Liu, Meng</p> <p>2015-12-01</p> <p>Multi-step heat treatment effectively enhances the stress corrosion cracking (SCC) resistance but usually degrades the mechanical properties of Al-Zn-Mg-Cu alloys. With the aim of enhancing the SCC resistance as well as the strength of Al-Zn-Mg-Cu alloys, we have optimized the process parameters during two-step aging of Al-6.1Zn-2.8Mg-1.9Cu alloy by <span class="hlt">Taguchi</span>'s L9 orthogonal array. In this work, analysis of variance (ANOVA) was performed to find out the significant heat treatment parameters. Slow strain rate testing combined with scanning electron microscopy and transmission electron microscopy was employed to study the SCC behaviors of the Al-Zn-Mg-Cu alloy. Results showed that the contour map produced by ANOVA offered a reliable reference for selection of optimum heat treatment parameters. 
By using this <span class="hlt">method</span>, a desired combination of mechanical performances and SCC resistance was obtained.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29447441','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29447441"><span>Design factors of femur fracture fixation plates made of shape memory alloy based on the <span class="hlt">Taguchi</span> <span class="hlt">method</span> by finite element analysis.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ko, Cheolwoong; Yang, Mikyung; Byun, Taemin; Lee, Sang-Wook</p> <p>2018-05-01</p> <p>This study proposed a way to design femur fracture fixation plates made of shape memory alloy based on computed tomography (CT) images of Korean cadaveric femurs. To this end, 3 major design factors of femur fracture fixation plates (circumference angle, thickness, and inner diameter) were selected based on the contact pressure when a femur fracture fixation plate was <span class="hlt">applied</span> to a cylinder model using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. Then, the effects of the design factors were analyzed. It was shown that the design factors were statistically significant at a level of p = 0.05 concerning the inner diameter and the thickness. The factors affecting the contact pressure were inner diameter, thickness, and circumference angle, in that order. Particularly, in the condition of Case 9 (inner diameter 27 mm, thickness 2.4 mm, and circumference angle 270°), the max. average contact pressure was 21.721 MPa, while the min. average contact pressure was 3.118 MPa in Case 10 (inner diameter 29 mm, thickness 2.0 mm, and circumference angle 210°). When the femur fracture fixation plate was <span class="hlt">applied</span> to the cylinder model, the displacement due to external sliding and pulling forces was analyzed. 
As a result, the displacement in the sliding condition was at max. 3.75 times greater than that in the pulling condition, which indicated that the cohesion strength between the femur fracture fixation plate and the cylinder model was likely to be greater in the pulling condition. When a human femur model was <span class="hlt">applied</span>, the max. average contact pressure was 10.76 MPa, which was lower than the yield strength of a human femur (108 MPa). In addition, the analysis of the rib behaviors of the femur fracture fixation plate in relation to the recovery effect of the shape memory alloy showed that the rib behaviors varied depending on the arbitrarily curved shapes of the femur sections. Copyright © 2018 John Wiley & Sons, Ltd.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4904202','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4904202"><span>Thermochemical hydrolysis of macroalgae Ulva for biorefinery: <span class="hlt">Taguchi</span> robust design <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander</p> <p>2016-01-01</p> <p>Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using <span class="hlt">Taguchi</span> orthogonal arrays experimental design and Partial Least Square Regression, we investigated the impact of change and the comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrates release from green macroalgae from Ulva genus, a promising biorefinery feedstock. 
The average density of hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharides released from Ulva biomass. We also found that within the tested range of parameters, hydrolysis at 121 °C, 30 min, 2% Acid, 15% Solids could lead to the highest yields of conversion: 54.134–57.500 g ethanol kg−1 Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis. PMID:27291594</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23818070','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23818070"><span>Evaluation of B. subtilis SPB1 biosurfactants' potency for diesel-contaminated soil washing: optimization of oil desorption using <span class="hlt">Taguchi</span> design.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mnif, Inès; Sahnoun, Rihab; Ellouze-Chaabouni, Semia; Ghribi, Dhouha</p> <p>2014-01-01</p> <p>The low solubility of certain hydrophobic soil contaminants limits the remediation process. Surface-active compounds can improve the solubility and removal of hydrophobic compounds from contaminated soils and, consequently, their biodegradation. Hence, this paper studies the efficiency of SPB1 lipopeptide biosurfactant in desorbing oil from soil. The effect of different physicochemical parameters on desorption potency was assessed. 
<span class="hlt">Taguchi</span> experimental design <span class="hlt">method</span> was <span class="hlt">applied</span> in order to enhance the desorption capacity and establish the best washing parameters. Mobilization potency was compared with that of chemical surfactants under the newly defined conditions. Better desorption capacity was obtained using a 0.1% biosurfactant solution, and the mobilization potency showed great tolerance to acidic and alkaline pH values and to salinity. Results showed an optimum value of oil removal from diesel-contaminated soil of about 87%. The optimum washing conditions for surfactant solution volume, biosurfactant concentration, agitation speed, temperature, and time were found to be 12 ml/g of soil, 0.1% biosurfactant, 200 rpm, 30 °C, and 24 h, respectively. The obtained results were compared to those of SDS and Tween 80 at the optimal conditions described above, and the study revealed an effectiveness of SPB1 biosurfactant comparable to that of the reported chemical emulsifiers. (1) The obtained findings suggest (a) the competence of Bacillus subtilis biosurfactant in promoting diesel desorption from soil relative to chemical surfactants and (b) the applicability of this <span class="hlt">method</span> in decontaminating crude oil-contaminated soil and, therefore, improving the bioavailability of hydrophobic compounds. (2) The obtained findings also suggest the adequacy of <span class="hlt">Taguchi</span> design in promoting process efficiency. Our findings suggest that a preoptimized desorption process using a microbial-derived emulsifier can contribute significantly to the enhancement of hydrophobic pollutants' bioavailability.
This study can be</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..225a2165V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..225a2165V"><span>Comparative Assessment of Cutting Inserts and Optimization during Hard Turning: <span class="hlt">Taguchi</span>-Based Grey Relational Analysis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Venkata Subbaiah, K.; Raju, Ch.; Suresh, Ch.</p> <p>2017-08-01</p> <p>The present study aims to compare conventional cutting inserts with wiper cutting inserts during the hard turning of AISI 4340 steel at different workpiece hardness levels. Type of insert, hardness, cutting speed, feed, and depth of cut are taken as process parameters. Taguchi’s L18 orthogonal array was used to conduct the experimental tests. Parametric analysis was carried out to determine the influence of each process parameter on three important surface roughness characteristics (Ra, Rz, and Rt) and the material removal rate. <span class="hlt">Taguchi</span>-based Grey Relational Analysis (GRA) was used to optimize the process parameters for individual-response and multi-response outputs.
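As a rough illustration of the Taguchi-based GRA step mentioned above, the sketch below applies the standard GRA formulas (per-response min-max normalisation, grey relational coefficient with the usual distinguishing coefficient ζ = 0.5, and the grade as the mean coefficient) to invented two-response trial data; none of the numbers come from the study.

```python
# Hedged sketch of grey relational analysis for multi-response ranking.
# Assumes each response column varies across trials (min != max).
def normalise(col, larger_better):
    """Min-max normalise one response column to [0, 1], 1 = best."""
    lo, hi = min(col), max(col)
    if larger_better:
        return [(x - lo) / (hi - lo) for x in col]
    return [(hi - x) / (hi - lo) for x in col]

def grey_grades(rows, larger_better, zeta=0.5):
    cols = list(zip(*rows))
    norm_cols = [normalise(c, lb) for c, lb in zip(cols, larger_better)]
    grades = []
    for row in zip(*norm_cols):
        # Deviation from the ideal sequence (all ones); after min-max
        # normalisation the global min/max deviations are 0 and 1, so the
        # grey relational coefficient reduces to zeta / (delta + zeta).
        coeffs = [zeta / ((1.0 - v) + zeta) for v in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Hypothetical trials: (Ra in um: smaller-better, MRR in mm^3/min: larger-better)
trials = [(1.8, 12.0), (1.2, 9.0), (0.9, 15.0)]
grades = grey_grades(trials, larger_better=[False, True])
best_trial = grades.index(max(grades))
```

The single grade turns a multi-response comparison into a one-number ranking, which is exactly what lets the usual Taguchi level-mean analysis be reused for multi-response optimization.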
Additionally, analysis of variance (ANOVA) is <span class="hlt">applied</span> to identify the most significant factor.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JIEI...11..459K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JIEI...11..459K"><span>Multiple performance characteristics optimization for Al 7075 on electric discharge drilling by <span class="hlt">Taguchi</span> grey relational theory</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Khanna, Rajesh; Kumar, Anish; Garg, Mohinder Pal; Singh, Ajit; Sharma, Neeraj</p> <p>2015-12-01</p> <p>The electric discharge drill machine (EDDM) uses a spark erosion process to produce micro-holes in conductive materials. This process is widely used in the aerospace, medical, dental and automobile industries. Evaluating the performance of an electric discharge drilling machine requires studying the process parameters of the machine tool. In this research paper, a brass rod of 2 mm diameter was selected as the tool electrode. The experiments generated output responses such as tool wear rate (TWR). Parameters such as pulse on-time, pulse off-time and water pressure were studied to obtain the best machining characteristics. This investigation presents the use of the <span class="hlt">Taguchi</span> approach to improve TWR in drilling of Al-7075. A plan of experiments, based on the L27 <span class="hlt">Taguchi</span> design <span class="hlt">method</span>, was selected for drilling the material. Analysis of variance (ANOVA) shows the percentage contribution of each control factor in the machining of Al-7075 in EDDM. The optimal combination levels and the significant drilling parameters for TWR were obtained.
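TWR optimization of this kind typically uses the smaller-the-better signal-to-noise ratio, S/N = -10·log10(mean of y²). A minimal sketch, with hypothetical trial labels and replicate TWR values that are not the paper's data:

```python
import math

# Smaller-the-better S/N ratio: higher S/N means less tool wear.
def sn_smaller_better(replicates):
    return -10.0 * math.log10(sum(y * y for y in replicates) / len(replicates))

# Invented replicate TWR measurements (mg/min) for three illustrative trials:
trial_twr = {
    "A1B1C1": [0.020, 0.024],
    "A2B1C3": [0.012, 0.014],
    "A3B2C2": [0.031, 0.029],
}
sn = {trial: sn_smaller_better(vals) for trial, vals in trial_twr.items()}
best = max(sn, key=sn.get)  # trial with the highest S/N, i.e. least wear
```

Working in decibels compresses the replicate spread and the mean into one robustness measure, which is why Taguchi analyses rank levels by mean S/N rather than by the raw response.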
The optimization results showed that the combination of maximum pulse on-time and minimum pulse off-time gives the maximum MRR.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JAP...120f5304M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JAP...120f5304M"><span>Synthesis of graphene by cobalt-catalyzed decomposition of methane in plasma-enhanced CVD: Optimization of experimental parameters with <span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mehedi, H.-A.; Baudrillart, B.; Alloyeau, D.; Mouhoub, O.; Ricolleau, C.; Pham, V. D.; Chacon, C.; Gicquel, A.; Lagoute, J.; Farhat, S.</p> <p>2016-08-01</p> <p>This article describes the significant roles of process parameters in the deposition of graphene films via cobalt-catalyzed decomposition of methane diluted in hydrogen using plasma-enhanced chemical vapor deposition (PECVD). The influence of growth temperature (700-850 °C), molar concentration of methane (2%-20%), growth time (30-90 s), and microwave power (300-400 W) on graphene thickness and defect density is investigated using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, which enables reaching the optimal parameter settings with a reduced number of experiments. Growth temperature is found to be the most influential parameter in minimizing the number of graphene layers, whereas microwave power has the second-largest effect on crystalline quality and a minor role on the thickness of graphene films.
The structural properties of PECVD graphene obtained with optimized synthesis conditions are investigated with Raman spectroscopy and corroborated with atomic-scale characterization performed by high-resolution transmission electron microscopy and scanning tunneling microscopy, which reveal the formation of a continuous film consisting of 2-7 high-quality graphene layers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..342a2006A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..342a2006A"><span>Costing improvement of remanufacturing crankshaft by integrating Mahalanobis-<span class="hlt">Taguchi</span> System and Activity based Costing</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abu, M. Y.; Nor, E. E. Mohd; Rahman, M. S. Abd</p> <p>2018-04-01</p> <p>Integration between the quality and costing systems is crucial to achieving accurate product costs and profits. In current practice, most remanufacturers still lack optimization during the remanufacturing process, which contributes incorrect variables to the costing system. Meanwhile, the traditional cost accounting in practice distorts unit costs, leading to inaccurate product costs. The aim of this work is to identify the critical and non-critical variables during the remanufacturing process using the Mahalanobis-<span class="hlt">Taguchi</span> System and simultaneously estimate the cost using the Activity Based Costing <span class="hlt">method</span>. The orthogonal array was <span class="hlt">applied</span> to indicate the contribution of each variable in the factorial effect graph, and the critical variables were considered together with the overhead costs actually demanded by the activities.
This work improves quality inspection together with the costing system to produce accurate profitability information. As a result, the cost per unit of a remanufactured crankshaft of the MAN engine model with 5 critical crankpins is MYR609.50, while that of the Detroit engine model with 4 critical crankpins is MYR1254.80. The significance of the output is demonstrated through promoting green practice: reducing the re-melting of damaged parts ensures a consistent benefit from returned cores.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016E%26ES...36a2049W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016E%26ES...36a2049W"><span>2-[(Hydroxymethyl)amino]ethanol in water as a preservative: Study of formaldehyde released by <span class="hlt">Taguchi</span>'s <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wisessirikul, W.; Loykulnant, S.; Montha, S.; Fhulua, T.; Prapainainar, P.</p> <p>2016-06-01</p> <p>This research studied the quantity of free formaldehyde released from 2-[(hydroxymethyl)amino]ethanol (HAE) in DI water and in a natural rubber latex mixture using the high-performance liquid chromatography (HPLC) technique. The quantity of formaldehyde retained in the solution was cross-checked by titration. The investigated factors were the concentration of the preservative (HAE), pH, and temperature. <span class="hlt">Taguchi</span>'s <span class="hlt">method</span> was used to design the experiments. The number of experiments was reduced from all possible combinations to 16 by orthogonal arrays (3 factors, 4 levels each). The Minitab program was used as a tool for statistical calculation and for finding the suitable condition for the preservative system.
HPLC studies showed that higher temperature and a higher concentration of the preservative influence the amount of formaldehyde released. The lowest amount of formaldehyde was released at 1.6% w/v HAE, 4 to 40 °C, and the original pH. Nevertheless, the pH value of NR latex should be more than 10 (the suitable pH value was found to be 13). This preservative can be used to replace current preservative systems and can maintain the quality of latex for long-term storage. Use of the proposed preservative system was also shown to reduce environmental toxicity.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5758948','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5758948"><span>Modeling and Multiresponse Optimization for Anaerobic Codigestion of Oil Refinery Wastewater and Chicken Manure by Using Artificial Neural Network and the <span class="hlt">Taguchi</span> <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Hemmat, Abbas; Kafashan, Jalal; Huang, Hongying</p> <p>2017-01-01</p> <p>To study the optimum process conditions for pretreatments and anaerobic codigestion of oil refinery wastewater (ORWW) with chicken manure, an L9 (3^4) <span class="hlt">Taguchi</span> orthogonal array was <span class="hlt">applied</span>. The biogas production (BGP), biomethane content (BMP), and chemical oxygen demand solubilization (CODS) stabilization rate were evaluated as the process outputs. The optimum conditions were obtained by using Design Expert software (Version 7.0.0). The results indicated that the optimum conditions could be achieved with 44% ORWW, 36°C temperature, 30 min sonication, and 6% TS in the digester.
The optimum BGP, BMP, and CODS removal rates under these conditions were 294.76 mL/gVS, 151.95 mL/gVS, and 70.22%, respectively, as concluded from the experimental results. In addition, the artificial neural network (ANN) technique was implemented to develop an ANN model for predicting BGP yield and BMP content. The Levenberg-Marquardt algorithm was utilized to train the ANN, and an architecture of 9-19-2 was obtained for the ANN model. PMID:29441352</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AIPC.1298..392M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AIPC.1298..392M"><span>An Experimental Investigation into the Optimal Processing Conditions for the CO2 Laser Cladding of 20 MnCr5 Steel Using <span class="hlt">Taguchi</span> <span class="hlt">Method</span> and ANN</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mondal, Subrata; Bandyopadhyay, Asish.; Pal, Pradip Kumar</p> <p>2010-10-01</p> <p>This paper presents the prediction and evaluation of the laser clad profile formed by a CO2 laser, <span class="hlt">applying</span> the <span class="hlt">Taguchi</span> <span class="hlt">method</span> and an artificial neural network (ANN). Laser cladding is one of the surface-modifying technologies by which desired surface characteristics of a component, such as good corrosion resistance, wear resistance and hardness, can be achieved. A laser is used as a heat source to melt the anti-corrosive powder of Inconel-625 (Super Alloy) to form a coating on a 20 MnCr5 substrate. A parametric study of this technique is also attempted here. The data obtained from experiments have been used to develop a linear regression equation and then to develop the neural network model. Moreover, the data obtained from the regression equations have also been used as supporting data to train the neural network.
The artificial neural network (ANN) is used to establish the relationship between the input/output parameters of the process. The established ANN model is then indirectly integrated with the optimization technique. It has been seen that the developed neural network model shows a good degree of approximation with experimental data. In order to obtain the combination of process parameters such as laser power, scan speed and powder feed rate for which the output parameters become optimum, the experimental data have been used to develop the response surfaces.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/22597674-synthesis-graphene-cobalt-catalyzed-decomposition-methane-plasma-enhanced-cvd-optimization-experimental-parameters-taguchi-method','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22597674-synthesis-graphene-cobalt-catalyzed-decomposition-methane-plasma-enhanced-cvd-optimization-experimental-parameters-taguchi-method"><span>Synthesis of graphene by cobalt-catalyzed decomposition of methane in plasma-enhanced CVD: Optimization of experimental parameters with <span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Mehedi, H.-A.; Baudrillart, B.; Gicquel, A.</p> <p>2016-08-14</p> <p>This article describes the significant roles of process parameters in the deposition of graphene films via cobalt-catalyzed decomposition of methane diluted in hydrogen using plasma-enhanced chemical vapor deposition (PECVD). 
The influence of growth temperature (700–850 °C), molar concentration of methane (2%–20%), growth time (30–90 s), and microwave power (300–400 W) on graphene thickness and defect density is investigated using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, which enables reaching the optimal parameter settings with a reduced number of experiments. Growth temperature is found to be the most influential parameter in minimizing the number of graphene layers, whereas microwave power has the second-largest effect on crystalline quality and a minor role on the thickness of graphene films. The structural properties of PECVD graphene obtained with optimized synthesis conditions are investigated with Raman spectroscopy and corroborated with atomic-scale characterization performed by high-resolution transmission electron microscopy and scanning tunneling microscopy, which reveal the formation of a continuous film consisting of 2–7 high-quality graphene layers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/14528613','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/14528613"><span>[Development of an optimized formulation of damask marmalade with low energy level using <span class="hlt">Taguchi</span> methodology].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Villarroel, Mario; Castro, Ruth; Junod, Julio</p> <p>2003-06-01</p> <p>The goal of the present study was the development of an optimized formula of damask marmalade low in calories, <span class="hlt">applying</span> <span class="hlt">Taguchi</span> methodology to improve the quality of this product. The selection of this methodology lies in the fact that under real-life conditions the result of an experiment frequently depends on the influence of several variables; therefore, one expedient way to address this problem is to utilize factorial designs.
The influence of acid, thickener, sweetener and aroma additives, as well as cooking time, and possible interactions among some of them, were studied to find the combination of these factors that optimizes the sensory quality of an experimental formulation of dietetic damask marmalade. An L8 (2^7) orthogonal array was <span class="hlt">applied</span> in this experiment, and level average analysis was carried out according to <span class="hlt">Taguchi</span> methodology to determine the suitable working levels of the previously chosen design factors to achieve the desired product quality. A trained sensory panel analyzed the marmalade samples using a composite scoring test with a descriptive quantitative scale ranging from 1 = bad to 5 = good. It was demonstrated that the design factors sugar/aspartame, pectin and damask aroma had a significant effect (p < 0.05) on the sensory quality of the marmalade, with an 82% contribution to the response. The optimal combination was found to be: citric acid 0.2%; pectin 1%; 30 g sugar/16 mg aspartame/100 g; damask aroma 0.5 ml/100 g; cooking time 5 minutes. Regarding chemical composition, the most important results were the decrease in carbohydrate content compared with traditional marmalade, a 56% reduction in caloric value, and an amount of dietary fiber greater than that of similar commercial products. Assays of storage stability were carried out on marmalade samples subjected to different temperatures and held in plastic bags of different densities.
No perceptible sensory, microbiological or chemical changes</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JIEIC..98..607K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JIEIC..98..607K"><span>Optimization of Surface Roughness Parameters of Al-6351 Alloy in EDC Process: A <span class="hlt">Taguchi</span> Coupled Fuzzy Logic Approach</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kar, Siddhartha; Chakraborty, Sujoy; Dey, Vidyut; Ghosh, Subrata Kumar</p> <p>2017-10-01</p> <p>This paper investigates the application of the <span class="hlt">Taguchi</span> <span class="hlt">method</span> with fuzzy logic for multi-objective optimization of roughness parameters in the electro discharge coating process of Al-6351 alloy with a powder metallurgically compacted SiC/Cu tool. A <span class="hlt">Taguchi</span> L16 orthogonal array was employed to investigate the roughness parameters by varying tool parameters like composition and compaction load and electro discharge machining parameters like pulse-on time and peak current. Crucial roughness parameters like centre-line average roughness, average maximum height of the profile and mean spacing of local peaks of the profile were measured on the coated specimen. The signal-to-noise ratios were fuzzified to optimize the roughness parameters through a single comprehensive output measure (COM). The best COM was obtained with lower values of compaction load, pulse-on time and current and a 30:70 (SiC:Cu) tool composition. Analysis of variance was carried out, and a significant COM model was observed, with peak current yielding the highest contribution, followed by pulse-on time, compaction load and composition.
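Percentage contributions like those reported above are conventionally derived from ANOVA sums of squares: each factor's between-level sum of squares divided by the total. A hedged sketch on an invented two-factor, two-level experiment (not data from any of the studies listed here):

```python
# Illustrative ANOVA percentage-contribution calculation for a Taguchi-style
# experiment. Design columns hold 0-indexed factor levels; y is the response.
def factor_ss(design_col, y):
    """Between-level sum of squares for one factor column."""
    grand = sum(y) / len(y)
    ss = 0.0
    for lvl in set(design_col):
        group = [yi for ci, yi in zip(design_col, y) if ci == lvl]
        mean = sum(group) / len(group)
        ss += len(group) * (mean - grand) ** 2
    return ss

# Two factors, two levels, full factorial (4 runs), made-up responses:
colA = [0, 0, 1, 1]
colB = [0, 1, 0, 1]
y = [10.0, 12.0, 20.0, 23.0]

grand = sum(y) / len(y)
total_ss = sum((yi - grand) ** 2 for yi in y)
contrib_A = 100.0 * factor_ss(colA, y) / total_ss
contrib_B = 100.0 * factor_ss(colB, y) / total_ss
# Whatever is left of total_ss belongs to interactions and error.
```

Here factor A dominates (about 94% of the total variation), mirroring how a single parameter such as peak current or cutter size can carry most of the contribution in the abstracts above.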
The deposited layer was characterised by X-ray diffraction analysis, which confirmed the presence of tool materials on the workpiece surface.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..318a2061M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..318a2061M"><span>Optimization of Recycled Glass Fibre-Reinforced Plastics Gear via Integration of the <span class="hlt">Taguchi</span> <span class="hlt">Method</span> and Grey Relational Analysis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mizamzul Mehat, Nik; Syuhada Zakarria, Noor; Kamaruddin, Shahrul</p> <p>2018-03-01</p> <p>The increase in demand for industrial gears has resulted in increased usage of plastic-matrix composites, particularly glass fibre-reinforced plastics, as gear materials. These synthetic fibres are used to enhance the mechanical strength and the thermal resistance of plastic gears. Nevertheless, the production of large quantities of these synthetic fibre-reinforced composites poses a serious threat to the ecosystem. In view of this, the present work aimed to investigate the effects of incorporating recycled glass fibre-reinforced plastics in various compositions, particularly on the dimensional stability and mechanical properties of gears produced with diverse injection moulding processing parameter settings. The integration of Grey relational analysis (GRA) and the <span class="hlt">Taguchi</span> <span class="hlt">method</span> was adopted to evaluate the influence of recycled glass fibre-reinforced plastics and variation in processing parameters on gear quality.
From the experimental results, the blending ratio was found to be the most influential parameter, with a 56.0% contribution both to improving tensile properties and to minimizing shrinkage, followed by mould temperature (24.1% contribution) and cooling time (10.6% contribution). The results obtained from the aforementioned work are expected to contribute to assessing the feasibility of using recycled glass fibre-reinforced plastics, especially for gear applications.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015ApSS..344...57S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015ApSS..344...57S"><span>Vertically aligned N-doped CNTs growth using <span class="hlt">Taguchi</span> experimental design</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Silva, Ricardo M.; Fernandes, António J. S.; Ferro, Marta C.; Pinna, Nicola; Silva, Rui F.</p> <p>2015-07-01</p> <p>The <span class="hlt">Taguchi</span> <span class="hlt">method</span> with a parameter-design L9 orthogonal array was implemented for optimizing the nitrogen incorporation in the structure of vertically aligned N-doped CNTs grown by thermal chemical vapor deposition (TCVD). The maximization of the ID/IG ratio of the Raman spectra was selected as the target value. As a result, the optimal deposition configuration was NH3 = 90 sccm, growth temperature = 825 °C and catalyst pretreatment time of 2 min, the first parameter having the main effect on nitrogen incorporation. A confirmation experiment with these values was performed, ratifying the predicted ID/IG ratio of 1.42.
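Confirmation runs like the one just described compare a measured response against the additive Taguchi prediction: the grand mean plus the sum of each chosen level's deviation from it. A small sketch with invented numbers (the factor names loosely mirror the abstract; the values do not come from it):

```python
# Additive Taguchi prediction for a confirmation experiment.
# All numbers below are hypothetical, for illustration only.
grand_mean = 1.10  # mean ID/IG over all L9 runs (invented)

# Mean ID/IG at the chosen optimal level of each factor (invented):
optimal_level_means = {
    "NH3_flow": 1.25,
    "growth_temperature": 1.18,
    "pretreatment_time": 1.15,
}

# predicted = grand mean + sum of (level mean - grand mean) over factors
predicted = grand_mean + sum(m - grand_mean for m in optimal_level_means.values())
# = 1.10 + 0.15 + 0.08 + 0.05 = 1.38 for these invented numbers
```

If the confirmation run's measured value lands close to this prediction, the additive (no-interaction) model behind the orthogonal-array analysis is considered validated for the studied range.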
Scanning electron microscopy (SEM) characterization revealed a uniform, completely vertically aligned array of multiwalled CNTs, which individually exhibit a bamboo-like structure consisting of periodically curved graphitic layers, as depicted by high-resolution transmission electron microscopy (HRTEM). The X-ray photoelectron spectroscopy (XPS) results indicated 2.00 at.% N incorporation in the CNTs, with pyridine-like and graphite-like nitrogen as the predominant species.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.908a2041J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.908a2041J"><span>Investigating the effects of PDC cutters geometry on ROP using the <span class="hlt">Taguchi</span> technique</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jamaludin, A. A.; Mehat, N. M.; Kamaruddin, S.</p> <p>2017-10-01</p> <p>At times, the polycrystalline diamond compact (PDC) bit’s performance drops, affecting the rate of penetration (ROP). The objective of this project is to investigate the effect of PDC cutter geometry and to optimize it. An intensive study of cutter geometry would further enhance ROP performance. A relatively extended analysis was carried out, and four significant geometry factors that directly improve ROP were identified: cutter size, back rake angle, side rake angle and chamfer angle. An appropriate optimization technique that effectively controls all influential geometry factors during cutter manufacturing is introduced and adopted in this project. By adopting an L9 <span class="hlt">Taguchi</span> OA, a simulation experiment is conducted using explicit dynamics finite element analysis.
Through a structured <span class="hlt">Taguchi</span> analysis, ANOVA confirms that the most significant geometry factor for improving ROP is cutter size (99.16% percentage contribution). The optimized cutter is expected to drill with a high ROP that can reduce rig time, which, in turn, may reduce the total drilling cost.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26584152','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26584152"><span>Optimization of delignification of two Pennisetum grass species by NaOH pretreatment using <span class="hlt">Taguchi</span> and ANN statistical approach.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mohaptra, Sonali; Dash, Preeti Krishna; Behera, Sudhanshu Shekar; Thatoi, Hrudayanath</p> <p>2016-01-01</p> <p>In the bioconversion of lignocelluloses to bioethanol, pretreatment seems to be the most important step, as it improves the elimination of lignin and hemicellulose content, exposing cellulose to further hydrolysis. The present study discusses the application of dynamic statistical techniques like the <span class="hlt">Taguchi</span> <span class="hlt">method</span> and artificial neural network (ANN) in the optimization of pretreatment of lignocellulosic biomasses such as Hybrid Napier grass (HNG) (Pennisetum purpureum) and Denanath grass (DG) (Pennisetum pedicellatum), using alkali sodium hydroxide. Using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, this study analysed and determined, with a low number of experiments, a parameter combination at which both substrates can be efficiently pretreated. The optimized parameters obtained from the L16 orthogonal array are soaking time (18 and 26 h), temperature (60°C and 55°C), and alkali concentration (1%) for HNG and DG, respectively.
High performance liquid chromatography analysis of the optimized pretreated grass varieties confirmed the presence of glucan (47.94% and 46.50%), xylan (9.35% and 7.95%), arabinan (2.15% and 2.2%), and galactan/mannan (1.44% and 1.52%) for HNG and DG, respectively. Physicochemical characterization studies of native and alkali-pretreated grasses were carried out by scanning electron microscopy and Fourier transform infrared spectroscopy, which revealed some morphological differences between the native and optimized pretreated samples. Model validation by ANN showed good agreement between the experimental results and the predicted responses.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15853150','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15853150"><span>Application of the <span class="hlt">Taguchi</span> analytical <span class="hlt">method</span> for optimization of effective parameters of the chemical vapor deposition process controlling the production of nanotubes/nanobeads.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sharon, Maheshwar; Apte, P R; Purandare, S C; Zacharia, Renju</p> <p>2005-02-01</p> <p>Seven variable parameters of the chemical vapor deposition system have been optimized with the help of the <span class="hlt">Taguchi</span> analytical <span class="hlt">method</span> to obtain a desired product, e.g., carbon nanotubes or carbon nanobeads. It is observed that almost all selected parameters influence the growth of carbon nanotubes. However, among them, the nature of the precursor (racemic, R or technical-grade camphor) and the carrier gas (hydrogen, argon and a mixture of argon/hydrogen) seem to be the more important parameters affecting the growth of carbon nanotubes.
For the growth of nanobeads, in contrast, only two of the seven parameters, i.e., catalyst (powder of iron, cobalt, and nickel) and temperature (1023 K, 1123 K, and 1273 K), are the most influential. Systematic defects or islands on the substrate surface enhance nucleation of novel carbon materials. Quantitative contributions of process parameters as well as optimum factor levels are obtained by performing analysis of variance (ANOVA) and analysis of mean (ANOM), respectively.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_6 --> <div id="page_7" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="121"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JMEP...26.3901G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JMEP...26.3901G"><span>Furnace Brazing Parameters Optimized by <span class="hlt">Taguchi</span> <span 
class="hlt">Method</span> and Corrosion Behavior of Tube-Fin System of Automotive Condensers</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Guía-Tello, J. C.; Pech-Canul, M. A.; Trujillo-Vázquez, E.; Pech-Canul, M. I.</p> <p>2017-08-01</p> <p>Controlled atmosphere brazing is in widespread industrial use in the production of aluminum automotive heat exchangers. Good-quality joints between the components depend on the initial condition of the materials as well as on the brazing process parameters. In this work, the <span class="hlt">Taguchi</span> <span class="hlt">method</span> was used to optimize the brazing parameters with respect to corrosion performance for tube-fin mini-assemblies of an automotive condenser. The experimental design consisted of five factors (micro-channel tube type, flux type, peak temperature, heating rate and dwell time), with two levels each. The corrosion behavior in acidified seawater solution (pH 2.8) was evaluated through potentiodynamic polarization and electrochemical impedance spectroscopy (EIS) measurements. Scanning electron microscopy (SEM) and energy-dispersive x-ray spectroscopy (EDS) were used to analyze the microstructural features in the joint zone. The results showed that the parameters that most significantly affect the corrosion rate are the type of flux and the peak temperature. The optimal conditions were: micro-channel tube with 4.2 g/m2 of zinc coating, standard flux, 610 °C peak temperature, 5 °C/min heating rate and 4 min dwell time. The corrosion current density value of the confirmation experiment is in excellent agreement with the predicted value. 
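Predictions for a confirmation run of this kind typically come from an additive analysis-of-means (ANOM) model: pick the best level of each factor, then add the corresponding level-mean deviations to the grand mean. A sketch with hypothetical two-level corrosion-current data (smaller-the-better); the factor names and numbers are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical L8 fragment: three two-level factors (e.g. flux type, peak
# temperature, heating rate), one row per trial
runs = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
                 [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
# Hypothetical corrosion current densities (uA/cm^2) -- smaller-the-better
icorr = np.array([4.1, 4.4, 6.0, 6.3, 3.2, 3.5, 5.1, 5.4])

grand = icorr.mean()
best_levels, effects = [], []
for f in range(runs.shape[1]):
    level_means = [icorr[runs[:, f] == lv].mean() for lv in (0, 1)]
    lv = int(np.argmin(level_means))      # smaller-the-better: pick the lower mean
    best_levels.append(lv)
    effects.append(level_means[lv] - grand)

# Additive-model prediction of the response at the optimum combination,
# to be checked against a confirmation experiment
predicted = grand + sum(effects)
print("best levels:", best_levels, "predicted icorr: %.2f" % predicted)
```

Agreement between this prediction and the measured confirmation run is the standard check that factor interactions are negligible.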
The electrochemical characterization of selected samples indicated that the brazing conditions had a more significant effect on the kinetics of the hydrogen evolution reaction than on the kinetics of the metal dissolution reaction.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27343435','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27343435"><span>Effect of olive mill waste addition on the properties of porous fired clay bricks using <span class="hlt">Taguchi</span> <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sutcu, Mucahit; Ozturk, Savas; Yalamac, Emre; Gencel, Osman</p> <p>2016-10-01</p> <p>Production of porous clay bricks lightened by adding olive mill waste as a pore-making additive was investigated. Factors influencing the brick manufacturing process were analyzed by an experimental design (the <span class="hlt">Taguchi</span> <span class="hlt">method</span>) to find the most favorable conditions for the production of bricks. The optimum process conditions for brick preparation were investigated by studying the effects of mixture ratios (0, 5 and 10 wt%) and firing temperatures (850, 950 and 1050 °C) on the physical, thermal and mechanical properties of the bricks. Apparent density, bulk density, apparent porosity, water absorption, compressive strength, thermal conductivity, microstructure and crystalline phase formations of the fired brick samples were measured. It was found that the use of 10% waste addition reduced the bulk density of the samples to 1.45 g/cm(3). As the porosities increased from 30.8 to 47.0%, the compressive strengths decreased from 36.9 to 10.26 MPa at a firing temperature of 950 °C. 
The thermal conductivities of samples fired at the same temperature showed a decrease of 31%, from 0.638 to 0.436 W/mK, which is promising for heat insulation in buildings. Increasing the firing temperature also affected the mechanical and physical properties of the bricks. This study showed that olive mill waste could be used as a pore maker in brick production. Copyright © 2016 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017NatSR...745297E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017NatSR...745297E"><span><span class="hlt">Applying</span> <span class="hlt">Taguchi</span> design and large-scale strategy for mycosynthesis of nano-silver from endophytic Trichoderma harzianum SYA.F4 and its application against phytopathogens</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>El-Moslamy, Shahira H.; Elkady, Marwa F.; Rezk, Ahmed H.; Abdel-Fattah, Yasser R.</p> <p>2017-03-01</p> <p>Development of a reliable and low-cost protocol for large-scale, eco-friendly biogenic synthesis of metallic nanoparticles is an important step toward industrial applications of bionanotechnology. In the present study, the mycosynthesis of spherical nano-Ag (12.7 ± 0.8 nm) from the extracellular filtrate of the local endophytic T. harzianum SYA.F4 strain, which contains a mixture of bioactive metabolites (alkaloids, flavonoids, tannins and phenols) together with nitrate reductase (320 nmol/hr/ml), carbohydrate (25 μg/μl) and total protein (2.5 g/l), is reported. Industrial mycosynthesis of nano-Ag can yield particles with different characteristics depending on the fungal cultivation and physical conditions. 
<span class="hlt">Taguchi</span> design was <span class="hlt">applied</span> to improve the physicochemical conditions for nano-Ag production; the optimum conditions, which increased the product mass threefold relative to the basal condition, were as follows: AgNO3 (0.01 M) and diluted reductant (10 v/v, pH 5), incubated at 30 °C and 200 rpm for 24 hr. In submerged batch cultivation in a 7 L stirred-tank bioreactor on a semi-defined cultivation medium, the maximum biomass production (Xmax) and the maximum nano-Ag mass (Pmax) were 60.5 g/l and 78.4 g/l, respectively. The nano-Ag concentration that formed the largest inhibition zones was 100 μg/ml, against A. alternata (43 mm), followed by Helminthosporium sp. (35 mm), Botrytis sp. (32 mm) and P. arenaria (28 mm).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MMTB..tmp..945S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MMTB..tmp..945S"><span>Optimization of Quenching Parameters for the Reduction of Titaniferous Magnetite Ore by Lean Grade Coal Using the <span class="hlt">Taguchi</span> <span class="hlt">Method</span> and Its Isothermal Kinetic Study</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sarkar, Bitan Kumar; Kumar, Nikhil; Dey, Rajib; Das, Gopes Chandra</p> <p>2018-06-01</p> <p>In the present study, a unique <span class="hlt">method</span> is adopted to achieve higher reducibility of titaniferous magnetite lump ore (TMO). In this <span class="hlt">method</span>, TMO is initially heated and then water quenched. The quenching process generates cracks due to thermal shock in the dense TMO lumps, which, in turn, increase the extent of reduction (EOR) when lean grade coal is used as the reductant. 
The optimum combination of parameters found by using <span class="hlt">Taguchi</span>'s L27 orthogonal array (OA) (five factors, three levels) is -8 +4 mm particle size (PS1), 1423 K quenching temperature (Qtemp2), 15 minutes quenching time (Qtime3), 3 quenching cycles {(No. of Q)3}, and 120 minutes reduction time (Rtime3) at a fixed reduction temperature of 1473 K. At the optimized levels of the parameters, 92.39 pct reduction is achieved. Isothermal reduction kinetics of the quenched TMO lumps at the optimized condition reveals mixed controlled mechanisms [initially contracting geometry (CG3) followed by diffusion (D3)]. The calculated activation energies are 69.895 kJ/mol for CG3 and 39.084 kJ/mol for D3.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA273945','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA273945"><span>An Exploratory Survey of <span class="hlt">Methods</span> Used to Develop Measures of Performance</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1993-09-01</p> <p>Genichi <span class="hlt">Taguchi</span>, Robert C. Camp, Kaoru Ishikawa, Dorsey J. Talley, Philip B. Crosby, J.M. Juran, Arthur R. Tenner, W. Edwards Deming … authored books or papers on the subject of quality? (Mark all that <span class="hlt">apply</span>) Nancy Brady, H. James Harrington, Genichi <span class="hlt">Taguchi</span>, Robert C. Camp, Kaoru Ishikawa, Dorsey J. Talley, Philip B. Crosby, J.M. Juran, Arthur R. Tenner, W. Edwards Deming, Dennis Kinlaw, Hans J. Thamhain, Irving J</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24250648','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24250648"><span>Application of <span class="hlt">Taguchi</span> Design and Response Surface Methodology for Improving Conversion of Isoeugenol into Vanillin by Resting Cells of Psychrobacter sp. CSW4.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ashengroph, Morahem; Nahvi, Iraj; Amini, Jahanshir</p> <p>2013-01-01</p> <p>For all industrial processes, modelling, optimisation and control are the keys to enhancing productivity and ensuring product quality. In the current study, the optimization of process parameters for improving the conversion of isoeugenol to vanillin by Psychrobacter sp. CSW4 was investigated by means of the <span class="hlt">Taguchi</span> approach and Box-Behnken statistical design under resting cell conditions. <span class="hlt">Taguchi</span> design was employed for screening the significant variables in the bioconversion medium. Subsequently, Box-Behnken design experiments under Response Surface Methodology (RSM) were used for further optimization. Four factors (isoeugenol, NaCl, biomass and tween 80 initial concentrations), which have significant effects on vanillin yield, were selected from ten variables by the <span class="hlt">Taguchi</span> experimental design. With the regression coefficient analysis in the Box-Behnken design, a relationship between vanillin production and the four significant variables was obtained, and the optimum levels of the four variables were as follows: initial isoeugenol concentration 6.5 g/L, initial tween 80 concentration 0.89 g/L, initial NaCl concentration 113.2 g/L and initial biomass concentration 6.27 g/L. Under these optimized conditions, the maximum predicted concentration of vanillin was 2.25 g/L. 
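The response-surface step described here fits a second-order polynomial to the screened factors and locates its stationary point. A sketch of that regression with two coded variables and hypothetical response values (the study itself used four factors, so its model has more terms):

```python
import numpy as np

# Hypothetical coded settings (-1, 0, +1) for two variables and hypothetical
# vanillin responses; a real Box-Behnken design would use four variables
X = np.array([[-1, -1], [-1, 0], [-1, 1], [0, -1], [0, 0],
              [0, 1], [1, -1], [1, 0], [1, 1]], dtype=float)
y_resp = np.array([1.1, 1.6, 1.4, 1.5, 2.2, 1.9, 1.2, 1.8, 1.5])

# Second-order model: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y_resp, rcond=None)

# Stationary point of the fitted surface: solve grad(y) = 0
b1, b2, b12, b11, b22 = coef[1:]
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
x_opt = np.linalg.solve(H, -np.array([b1, b2]))
print("coefficients:", np.round(coef, 3), "optimum (coded):", np.round(x_opt, 3))
```

Negative quadratic coefficients make the surface concave, so the stationary point is the predicted maximum; decoding it back to physical units gives optimum settings of the kind quoted above.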
These optimized values of the factors were validated in a triplicate shake-flask study, in which an average vanillin concentration of 2.19 g/L, corresponding to a molar yield of 36.3%, was obtained after a 24 h bioconversion. The present work is the first report of the application of <span class="hlt">Taguchi</span> design and response surface methodology for optimizing the bioconversion of isoeugenol into vanillin under resting cell conditions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23907063','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23907063"><span><span class="hlt">Taguchi</span> approach for co-gasification optimization of torrefied biomass and coal.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Wei-Hsin; Chen, Chih-Jung; Hung, Chen-I</p> <p>2013-09-01</p> <p>This study employs the <span class="hlt">Taguchi</span> <span class="hlt">method</span> to approach the optimum co-gasification operation of torrefied biomass (eucalyptus) and coal in an entrained flow gasifier. The cold gas efficiency is adopted as the performance index of co-gasification. The influences of six parameters, namely, the biomass blending ratio, oxygen-to-fuel mass ratio (O/F ratio), biomass torrefaction temperature, gasification pressure, steam-to-fuel mass ratio (S/F ratio), and inlet temperature of the carrier gas, on the performance of co-gasification are considered. The analysis of the signal-to-noise ratio suggests that the O/F ratio is the most important factor in determining the performance and that the appropriate O/F ratio is 0.7. The performance is also significantly affected by the biomass blend together with torrefaction, where a torrefaction temperature of 300 °C is sufficient to upgrade eucalyptus. 
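Claims about which factor matters most are conventionally backed by an ANOVA percent-contribution table computed from the same orthogonal-array S/N ratios: each factor's sum of squares is expressed as a share of the total. A sketch with a small two-level array and hypothetical S/N values:

```python
import numpy as np

# Hypothetical L8 array (three two-level factors) and hypothetical S/N ratios (dB)
runs = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
                 [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
sn = np.array([-2.0, -2.6, -1.0, -1.4, -4.0, -4.4, -3.0, -3.6])

grand = sn.mean()
total_ss = ((sn - grand) ** 2).sum()

# Factor sum of squares: sum over levels of n_level * (level mean)^2, minus
# the correction term; contribution = factor SS as a percentage of total SS
contrib = {}
for f in range(runs.shape[1]):
    ss = sum(
        (runs[:, f] == lv).sum() * sn[runs[:, f] == lv].mean() ** 2 for lv in (0, 1)
    ) - len(sn) * grand ** 2
    contrib[f] = 100.0 * ss / total_ss
print({k: round(v, 1) for k, v in contrib.items()})
```

The factor sums of squares do not quite exhaust the total; the remainder is the error term, which is what ANOVA uses to judge significance.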
According to the recommended operating conditions, the values of cold gas efficiency and carbon conversion at the optimum co-gasification are 80.99% and 94.51%, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27254280','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27254280"><span>Optimization of process parameters for drilled hole quality characteristics during cortical bone drilling using <span class="hlt">Taguchi</span> <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Singh, Gurmeet; Jain, Vivek; Gupta, Dheeraj; Ghai, Aman</p> <p>2016-09-01</p> <p>Orthopaedic surgery involves drilling of bones to get them fixed at their original position. The drilling process used in orthopaedic surgery closely resembles conventional mechanical drilling, and it may well harm the already damaged bone, the surrounding bone tissue and nerves; the risk does not end there. Recovery of the affected part may be impeded to the extent that it cannot sustain lifelong use. To achieve sustainable orthopaedic surgery, a surgeon must try to control the drilling damage at the time of bone drilling. The area around the holes decides the life of the bone joint, so the area contiguous to the drilled hole must remain intact and retain its properties even after drilling. 
This study mainly focuses on the optimization of drilling parameters (rotational speed, feed rate and type of tool, at three levels each) by <span class="hlt">Taguchi</span> optimization for surface roughness and material removal rate. Confirmation experiments were also carried out, and the results fell within the confidence interval. Scanning electron microscopy (SEM) images assisted in obtaining micro-level information on bone damage. Copyright © 2016 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29680620','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29680620"><span>Processing of ultra-high molecular weight polyethylene/graphite composites by ultrasonic injection moulding: <span class="hlt">Taguchi</span> optimization.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sánchez-Sánchez, Xavier; Elias-Zuñiga, Alex; Hernández-Avila, Marcelo</p> <p>2018-06-01</p> <p>Ultrasonic injection moulding was confirmed as an efficient processing technique for manufacturing ultra-high molecular weight polyethylene (UHMWPE)/graphite composites. Graphite contents of 1 wt%, 5 wt%, and 7 wt% were mechanically pre-mixed with UHMWPE powder, and each mixture was pressed at 135 °C. A precise quantity of each pre-composite mixture, cut into small irregularly shaped pieces, was subjected to ultrasonic injection moulding to fabricate small tensile specimens. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was <span class="hlt">applied</span> to achieve the optimal levels of the ultrasonic moulding parameters and to maximize the tensile strength of the composites; the results showed that mould temperature was the most significant parameter, followed by the graphite content and the plunger profile. 
The observed improvement in tensile strength in the specimen with 1 wt% graphite was 8.8%, and all composites showed an increase in tensile modulus. Even though the presence of graphite produced a decrease in the crystallinity of all the samples, their thermal stability was considerably higher than that of pure UHMWPE. X-ray diffraction and scanning electron microscopy confirmed the exfoliation and dispersion of the graphite as a function of the ultrasonic processing. Fourier transform infrared spectra showed that the addition of graphite did not influence the molecular structure of the polymer matrix. Further, the ultrasonic energy led to oxidative degradation and chain scission in the polymer. Copyright © 2018 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920012025','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920012025"><span>An Exploratory Exercise in <span class="hlt">Taguchi</span> Analysis of Design Parameters: Application to a Shuttle-to-space Station Automated Approach Control System</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Deal, Don E.</p> <p>1991-01-01</p> <p>The chief goals of the summer project have been twofold: first, for my host group and myself to learn as much of the working details of <span class="hlt">Taguchi</span> analysis as possible in the time allotted, and, secondly, to <span class="hlt">apply</span> the methodology to a design problem with the intention of establishing a preliminary set of near-optimal (in the sense of producing a desired response) design parameter values from among a large number of candidate factor combinations. 
The selected problem is concerned with determining design factor settings for an automated approach program which is to have the capability of guiding the Shuttle into the docking port of the Space Station under controlled conditions so as to meet and/or optimize certain target criteria. The candidate design parameters under study were glide path (i.e., approach) angle, path intercept and approach gains, and minimum impulse bit mode (a parameter which defines how Shuttle jets shall be fired). Several performance criteria were of concern: terminal relative velocity at the instant the two spacecraft are mated; docking offset; number of Shuttle jet firings in certain specified directions (of interest due to possible plume impingement on the Station's solar arrays); and total RCS (a measure of the energy expended in performing the approach/docking maneuver). In the material discussed here, we have focused on a single performance criterion: total RCS. An analysis of the possibility of employing a multiobjective function composed of a weighted sum of the various individual criteria has been undertaken, but is, at this writing, incomplete. Results from the <span class="hlt">Taguchi</span> statistical analysis indicate that only three of the original four posited factors are significant in affecting RCS response. 
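The weighted-sum multiobjective function contemplated in this work can be sketched simply: normalize each smaller-is-better criterion onto a common scale, then sum weighted scores. All criterion values, bounds and weights below are hypothetical placeholders, since the original analysis was left incomplete:

```python
# Hypothetical values and weights for a weighted-sum composite of the
# approach/docking criteria (all numbers are illustrative, not from the study)
criteria = {
    "total_rcs":         {"value": 41.0, "worst": 60.0, "best": 20.0, "weight": 0.5},
    "terminal_velocity": {"value": 0.06, "worst": 0.10, "best": 0.02, "weight": 0.3},
    "docking_offset":    {"value": 0.30, "worst": 0.50, "best": 0.10, "weight": 0.2},
}

def composite(crit):
    """Map each smaller-is-better criterion onto [0, 1] (1 = best) and
    accumulate the weighted sum."""
    score = 0.0
    for c in crit.values():
        norm = (c["worst"] - c["value"]) / (c["worst"] - c["best"])
        score += c["weight"] * norm
    return score

print(round(composite(criteria), 4))
```

The well-known sensitivity of such composites to the chosen weights is one reason the abstract treats the single-criterion (total RCS) analysis separately.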
A comparison of model simulation output (via Monte Carlo) with predictions based on estimated factor effects inferred through the <span class="hlt">Taguchi</span> experiment array data suggested acceptable or close agreement between the two except at the predicted optimum</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19635663','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19635663"><span>Microcosm assays and <span class="hlt">Taguchi</span> experimental design for treatment of oil sludge containing high concentration of hydrocarbons.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Castorena-Cortés, G; Roldán-Carrillo, T; Zapata-Peñasco, I; Reyes-Avila, J; Quej-Aké, L; Marín-Cruz, J; Olguín-Lora, P</p> <p>2009-12-01</p> <p>Microcosm assays and <span class="hlt">Taguchi</span> experimental design were used to assess the biodegradation of an oil sludge produced by a gas processing unit. The study showed that the biodegradation of the sludge sample is feasible despite the high level of pollutants and complexity involved in the sludge. The physicochemical and microbiological characterization of the sludge revealed a high concentration of hydrocarbons (334,766+/-7001 mg kg(-1) dry matter, d.m.) containing a variety of compounds with between 6 and 73 carbon atoms in their structure, whereas the concentrations of Fe and sulfide were 60,000 and 26,800 mg kg(-1) d.m., respectively. A <span class="hlt">Taguchi</span> L(9) experimental design comprising 4 variables (moisture, nitrogen source, surfactant concentration and oxidant agent) at 3 levels was performed, proving that moisture and nitrogen source are the major variables that affect CO(2) production and total petroleum hydrocarbons (TPH) degradation. The best experimental treatment yielded a TPH removal of 56,092 mg kg(-1) d.m. 
The treatment was carried out under the following conditions: 70% moisture, no oxidant agent, 0.5% surfactant and NH(4)Cl as the nitrogen source.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..184a2035M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..184a2035M"><span>Optimization of tribological performance of SiC embedded composite coating via <span class="hlt">Taguchi</span> analysis approach</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Maleque, M. A.; Bello, K. A.; Adebisi, A. A.; Akma, N.</p> <p>2017-03-01</p> <p>The tungsten inert gas (TIG) torch is one of the most recently adopted heat sources for surface modification of engineering parts, giving similar results to the more expensive high-power laser technique. In this study, a ceramic-based embedded composite coating was produced from precoated silicon carbide (SiC) powders on an AISI 4340 low-alloy steel substrate using the TIG welding torch process. A design of experiments based on the <span class="hlt">Taguchi</span> approach was adopted to optimize the TIG cladding process parameters. The L9 orthogonal array and the signal-to-noise ratio were used to study the effect of TIG welding parameters such as arc current, travelling speed, welding voltage and argon flow rate on the tribological response behaviour (wear rate, surface roughness and wear track width). The objective of the study was to identify the optimal design parameters that significantly minimize each of the surface quality characteristics. The analysis of the experimental results revealed that the argon flow rate was the most influential factor contributing to minimum wear and surface roughness of the modified coating surface. On the other hand, the key factor in reducing wear scar is the welding voltage. 
Finally, the convenient and economical <span class="hlt">Taguchi</span> approach used in this study was efficient in finding the optimal factor settings for obtaining minimum wear rate, wear scar and surface roughness responses in TIG-coated surfaces.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JIEIC..98..479N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JIEIC..98..479N"><span>Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using <span class="hlt">Taguchi</span>'s Robust Design Methodology</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nath, Nayani Kishore</p> <p>2017-08-01</p> <p>Throat back-up liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The throat back-up liner is made with E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of the process parameters of the tape winding process to achieve better insulative resistance using <span class="hlt">Taguchi</span>'s robust design methodology. In this <span class="hlt">method</span>, four control factors (machine speed, roller pressure, tape tension and tape temperature) were investigated for the tape winding process. The presented work studies the cogency and acceptability of <span class="hlt">Taguchi</span>'s methodology in the manufacturing of throat back-up liners. The quality characteristic identified was the back wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with four control factors at three levels each. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. 
The experimental results were analyzed, confirmed and successfully used to achieve the minimum back wall temperature of the throat back-up liners. The enhancement in performance of the throat back-up liners was observed by carrying out oxy-acetylene tests. The influence of back wall temperature on the performance of the throat back-up liners was verified by a ground firing test.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.885a2010T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.885a2010T"><span>Multi objective <span class="hlt">Taguchi</span> optimization approach for resistance spot welding of cold rolled TWIP steel sheets</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tutar, Mumin; Aydin, Hakan; Bayram, Ali</p> <p>2017-08-01</p> <p>Formability and energy absorption capability of a steel sheet are highly desirable properties in manufacturing components for automotive applications. TWinning Induced Plasticity (TWIP) steels, new-generation high-Mn alloyed steels, are attractive for the automotive industry due to their outstanding elongation (40-45%) and tensile strength (~1000 MPa). TWIP steels therefore provide excellent formability and energy absorption capability. Another property required of steel sheets is suitability for manufacturing <span class="hlt">methods</span> such as welding. The use of steel sheets in automotive applications inevitably involves welding. Considering that there are 3000-5000 welded spots on a vehicle, one of the most important manufacturing <span class="hlt">methods</span> for the automotive industry is Resistance Spot Welding (RSW). In this study, TWIP steel sheets were first cold rolled to a 15% reduction in thickness. Then, the cold rolled TWIP steel sheets were welded with the RSW <span class="hlt">method</span>. 
The welding parameters (welding current, welding time and electrode force) were optimized for maximizing the peak tensile shear load and minimizing the indentation of the joints using a <span class="hlt">Taguchi</span> L9 orthogonal array. The effect of the welding parameters was also evaluated by examining the signal-to-noise ratio and analysis of variance (ANOVA) results.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1855b0011N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1855b0011N"><span>Optimization of multi response in end milling process of ASSAB XW-42 tool steel with liquid nitrogen cooling using <span class="hlt">Taguchi</span>-grey relational analysis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Norcahyo, Rachmadi; Soepangkat, Bobby O. P.</p> <p>2017-06-01</p> <p>A study was conducted on the optimization of the end milling process of ASSAB XW-42 tool steel with multiple performance characteristics based on an orthogonal array with the <span class="hlt">Taguchi</span>-grey relational analysis <span class="hlt">method</span>. Liquid nitrogen was <span class="hlt">applied</span> as a coolant. The experimental studies were conducted by varying the liquid nitrogen cooling flow rate (FL) and the end milling process variables, i.e., cutting speed (Vc), feeding speed (Vf), and axial depth of cut (Aa). The optimized multiple performance characteristics were surface roughness (SR), flank wear (VB), and material removal rate (MRR). An orthogonal array, the signal-to-noise (S/N) ratio, grey relational analysis, the grey relational grade, and analysis of variance were employed to study the multiple performance characteristics. 
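Grey relational analysis condenses several responses into a single grade per trial: normalize each response toward its ideal, convert deviations into grey relational coefficients, and average them. A sketch with hypothetical values for the three responses named here (SR, VB, MRR):

```python
import numpy as np

# Hypothetical responses for four trials: SR and VB are smaller-the-better,
# MRR is larger-the-better
SR  = np.array([0.8, 0.6, 1.0, 0.7])       # surface roughness (um)
VB  = np.array([0.12, 0.10, 0.16, 0.11])   # flank wear (mm)
MRR = np.array([800., 950., 700., 900.])   # material removal rate (mm^3/min)

def normalize(x, larger_is_better):
    # Grey relational normalization onto [0, 1], where 1 is the ideal value
    if larger_is_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

Z = np.column_stack([normalize(SR, False), normalize(VB, False), normalize(MRR, True)])

# Grey relational coefficient with distinguishing coefficient zeta = 0.5
delta = 1.0 - Z                            # deviation from the ideal sequence
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grade = grc.mean(axis=1)                   # equal-weight grey relational grade
print("grades:", np.round(grade, 3), "best trial:", int(grade.argmax()))
```

The trial with the highest grade is taken as the best compromise, and factor effects on the grade are then analyzed exactly as in a single-response Taguchi study.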
Experimental results showed that flow rate gave the highest contribution to reducing the total variation of the multiple responses, followed by cutting speed, feeding speed, and axial depth of cut. The minimum surface roughness, minimum flank wear, and maximum material removal rate could be obtained by using values of flow rate, cutting speed, feeding speed, and axial depth of cut of 0.5 l/minute, 109.9 m/minute, 440 mm/minute, and 0.9 mm, respectively.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012JNOPM..2150006M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012JNOPM..2150006M"><span>Near Field and Far Field Effects in the <span class="hlt">Taguchi</span>-Optimized Design of AN InP/GaAs-BASED Double Wafer-Fused Mqw Long-Wavelength Vertical-Cavity Surface-Emitting Laser</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Menon, P. S.; Kandiah, K.; Mandeep, J. S.; Shaari, S.; Apte, P. R.</p> <p></p> <p>Long-wavelength VCSELs (LW-VCSELs) operating in the 1.55 μm wavelength regime offer the advantages of low dispersion and low optical loss in fiber optic transmission systems, which are crucial in increasing data transmission speed and reducing the implementation cost of fiber-to-the-home (FTTH) access networks. LW-VCSELs are attractive light sources because they offer unique features such as low power consumption, narrow beam divergence and ease of fabrication for two-dimensional arrays. This paper compares the near field and far field effects of the numerically investigated LW-VCSEL for various design parameters of the device. The optical intensity profile far from the device surface, in the Fraunhofer region, is important for the optical coupling of the laser with other optical components. 
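In the Fraunhofer region the far-field intensity can be approximated numerically as the squared magnitude of a two-dimensional FFT of the near-field profile. A sketch assuming a hypothetical Gaussian near field (the grid and mode radius are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical Gaussian near-field profile sampled on a 128 x 128 grid
n, dx = 128, 0.1e-6                      # 0.1 um sampling pitch
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
w0 = 1.5e-6                              # assumed 1.5 um mode radius
near = np.exp(-(X**2 + Y**2) / w0**2)

# Far-field (Fraunhofer) intensity ~ |2-D FFT of the near field|^2;
# ifftshift moves the grid origin to sample 0 before transforming
far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(near)))
intensity = np.abs(far) ** 2
intensity /= intensity.max()

# Spatial-frequency axes; divergence angles scale as theta ~ lambda * f
f = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
print("peak bin:", np.unravel_index(intensity.argmax(), intensity.shape),
      "frequency step: %.3g 1/m" % (f[1] - f[0]))
```

A narrower near field transforms into a broader far field, which is why the MQW and aperture design parameters evaluated in the paper change the beam divergence.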
The near field pattern is obtained from the structure output whereas the far-field pattern is essentially a two-dimensional fast Fourier Transform (FFT) of the near-field pattern. Design parameters such as the number of wells in the multi-quantum-well (MQW) region, the thickness of the MQW and the effect of using <span class="hlt">Taguchi</span>'s orthogonal array <span class="hlt">method</span> to optimize the device design parameters on the near/far field patterns are evaluated in this paper. We have successfully increased the peak lasing power from an initial 4.84 mW to 12.38 mW at a bias voltage of 2 V and optical wavelength of 1.55 μm using <span class="hlt">Taguchi</span>'s orthogonal array. As a result of the <span class="hlt">Taguchi</span> optimization and fine tuning, the device threshold current is found to increase along with a slight decrease in the modulation speed due to increased device widths.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AIPC.1717c0006M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AIPC.1717c0006M"><span>Optimization of temperature and time for drying and carbonization to increase calorific value of coconut shell using <span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Musabbikhah, Saptoadi, H.; Subarmono, Wibisono, M. A.</p> <p>2016-03-01</p> <p>Fossil fuel still dominates the needs of energy in Indonesia for the past few years. The increasing scarcity of oil and gas from non-renewable materials results in an energy crisis. This condition turns to be a serious problem for society which demands immediate solution. One effort which can be taken to overcome this problem is the utilization and processing of biomass as renewable energy by means of carbonization. 
Thus, it can be used as a qualified raw material for the production of briquettes. In this research, coconut shell is used as the carbonized waste. The research aims at improving the quality of coconut shell as a material for making briquettes as a cheap and eco-friendly renewable energy source. In the end, it is expected to decrease dependence on oil and gas. The research variables are the drying temperature and time, and the carbonization temperature and time. The dependent variable is the calorific value of the coconut shell. The <span class="hlt">method</span> used in this research is the <span class="hlt">Taguchi</span> <span class="hlt">Method</span>. The results show that these variables have a significant contribution to the increase of the coconut shell's calorific value: the higher these variables, the higher the calorific value. Before carbonization, the average calorific value of coconut shell reaches 4,667 cal/g, and a significant increase is notable after the carbonization. The optimum parameter setting is A2B3C3D3, which means that the drying temperature is 105 °C, the drying time is 24 hours, the carbonization temperature is 650 °C and the carbonization time is 120 minutes. The average calorific value is approximately 7,744 cal/g. Therefore, the increase of the coconut shell's calorific value after the carbonization is 3,077 cal/g, or approximately 60 %. The charcoal of the carbonized coconut shell has met the requirement of SNI, so it can be used as a raw material for making briquettes, which can eventually be used as a cheap and environmentally friendly fuel.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940009145','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940009145"><span>Multidisciplinary Design Techniques <span class="hlt">Applied</span> to Conceptual Aerospace Vehicle Design. Ph.D.
Thesis Final Technical Report</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Olds, John Robert; Walberg, Gerald D.</p> <p>1993-01-01</p> <p>Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization <span class="hlt">methods</span> cannot always be <span class="hlt">applied</span>. Several multidisciplinary techniques and <span class="hlt">methods</span> were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design optimization <span class="hlt">methods</span> is included. <span class="hlt">Methods</span> from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on <span class="hlt">methods</span> from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization <span class="hlt">methods</span> because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem.
The results of the application of <span class="hlt">Taguchi</span> <span class="hlt">methods</span>, central composite designs, and response surface <span class="hlt">methods</span> to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of <span class="hlt">Taguchi</span> <span class="hlt">methods</span> that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum dry weight solutions are</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016TeEng..13....6G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016TeEng..13....6G"><span>Parametric Optimization Of Gas Metal Arc Welding Process By Using Grey Based <span class="hlt">Taguchi</span> <span class="hlt">Method</span> On Aisi 409 Ferritic Stainless Steel</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ghosh, Nabendu; Kumar, Pradip; Nandi, Goutam</p> <p>2016-10-01</p> <p>Welding input process parameters play a very significant role in determining the quality of the welded joint. Only by properly controlling every element of the process can product quality be controlled. For better quality of MIG welding of Ferritic stainless steel AISI 409, precise control of process parameters, parametric optimization of the process parameters, prediction and control of the desired responses (quality indices) etc., continued and elaborate experiments, analysis and modeling are needed. A knowledge base may thus be generated, which may be utilized by practicing engineers and technicians to produce good quality welds more precisely, reliably and predictively. In the present work, an X-ray radiographic test has been conducted in order to detect surface and sub-surface defects of weld specimens made of Ferritic stainless steel.
The quality of the weld has been evaluated in terms of yield strength, ultimate tensile strength and percentage of elongation of the welded specimens. The observed data have been interpreted, discussed and analyzed by considering ultimate tensile strength, yield strength and percentage elongation combined with use of Grey-<span class="hlt">Taguchi</span> methodology.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_7 --> <div id="page_8" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="141"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JIEI....9...18M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JIEI....9...18M"><span>Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA
Astrophysics Data System (ADS)</a></p> <p>Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal</p> <p>2013-07-01</p> <p>The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and by failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the process parameters, namely, the painting and washing process parameters, are optimized by the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. Though the defects are reasonably minimized by the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, in order to achieve zero defects during the processes, the genetic algorithm technique is <span class="hlt">applied</span> to the optimized parameters obtained by the <span class="hlt">Taguchi</span> <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16671630','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16671630"><span>Response to <span class="hlt">Taguchi</span> and Noma on "relationship between directionality and orientation in drawings by young children and adults.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Karev, George B</p> <p>2006-02-01</p> <p>When assessing the relationship between direction and orientation in drawings by young children and adults, <span class="hlt">Taguchi</span> and Noma used a fish-drawing task.
However, the fish is not convenient enough as an object for such a task, so it is highly preferable to use a set of several objects, instead of a single object, to assess directionality quantitatively. These authors' conclusions do not acknowledge alternative explanations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29862333','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29862333"><span><span class="hlt">Taguchi</span>-generalized regression neural network micro-screening for physical and sensory characteristics of bread.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Besseris, George J</p> <p>2018-03-01</p> <p>Generalized regression neural networks (GRNN) may act as crowdsourcing cognitive agents to screen small, dense and complex datasets. The concurrent screening and optimization of several complex physical and sensory traits of bread is developed using a structured <span class="hlt">Taguchi</span>-type micro-mining technique. A novel product outlook is offered to industrial operations to cover separate aspects of smart product design, engineering and marketing. Four controlling factors were selected to be modulated directly on a modern production line: 1) the dough weight, 2) the proofing time, 3) the baking time, and 4) the oven zone temperatures. Concentrated experimental recipes were programmed using the <span class="hlt">Taguchi</span>-type L9(3^4) OA-sampler to detect potentially non-linear multi-response tendencies. The fused behavior of the master-ranked bread characteristics was smart-sampled with GRNN-crowdsourcing and robust analysis. The combination of the oven zone temperatures was found to play a highly influential role in all investigated scenarios.
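A GRNN of the kind used in this bread-screening study is, at its core, a one-pass kernel regressor: the prediction is a Gaussian-weighted average of the stored training responses. A minimal sketch with a hypothetical one-factor dataset (the function name, data and smoothing value are illustrative, not from the paper):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of training targets."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    # squared distances from the query to every stored training pattern
    d2 = np.sum((X_train - np.asarray(x, dtype=float)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # pattern-layer activations
    return float(np.sum(w * y_train) / np.sum(w))

# three stored recipes (one factor) and their measured response
X = [[0.0], [1.0], [2.0]]
y = [1.0, 2.0, 3.0]
```

With a small `sigma` the network reproduces the training points almost exactly; with a larger `sigma` it smooths between them, which is what makes it usable on small, dense Taguchi-style datasets.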
Moreover, the oven zone temperatures and the dough weight appeared to be instrumental when attempting to synchronously adjust all four physical characteristics. The optimal oven-zone temperature setting for concurrent screening-and-optimization was found to be 270-240 °C. The optimized (median) responses for loaf weight, moisture, height, width, color, flavor, crumb structure, softness, and elasticity are: 782 g, 34.8 %, 9.36 cm, 10.41 cm, 6.6, 7.2, 7.6, 7.3, and 7.0, respectively.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JIEI...13..215S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JIEI...13..215S"><span>A <span class="hlt">Taguchi</span> approach on optimal process control parameters for HDPE pipe extrusion process</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa</p> <p>2017-06-01</p> <p>High-density polyethylene (HDPE) pipes find versatile applicability for transportation of water, sewage and slurry from one place to another. Hence, these pipes undergo tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of the HDPE pipes using the <span class="hlt">Taguchi</span> technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters. This results in the setting of less-than-optimal values. Hence, there arose a necessity to determine optimal process control parameters for the pipe extrusion process, which can ensure robust pipe quality and process reliability.
In the proposed optimization strategy, the design of experiments (DoE) is conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is <span class="hlt">applied</span>, and ultimately optimum values of the process control parameters are obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis; the results proved to be consistent with the main experimental findings, and the withstanding pressure showed a significant improvement from 0.60 to 1.004 MPa.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1943b0074H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1943b0074H"><span>Experimental wear behavioral studies of as-cast and 5 hr homogenized Al25Mg2Si2Cu4Ni alloy at constant load based on <span class="hlt">taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Harlapur, M. D.; Mallapur, D. G.; Udupa, K. Rajendra</p> <p>2018-04-01</p> <p>In the present study, an experimental study of the volumetric wear behaviour of an Aluminium (Al-25Mg2Si2Cu4Ni) alloy, in the as-cast condition and after 5 h homogenization with T6 heat treatment, is carried out at constant load. A pin-on-disc apparatus was used to carry out the sliding wear test. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> based on an L-16 orthogonal array was employed to evaluate the wear behavior data. Signal-to-noise ratios with the smaller-the-better objective and mean-of-means results were used. A general regression model is obtained by correlation.
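The smaller-the-better S/N ratio and the mean-of-means analysis mentioned in these abstracts are simple to compute. A sketch with hypothetical wear-rate replicates (not the study's measurements; the factor-level assignment is also assumed):

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for a smaller-the-better response such as wear rate."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical wear-rate replicates for four trials (two repeats each)
trials = [[0.21, 0.23], [0.12, 0.14], [0.30, 0.28], [0.18, 0.17]]
sn = np.array([sn_smaller_the_better(t) for t in trials])

# Mean-of-means main effect for a two-level factor: average the S/N of the
# trials run at each level; the level with the higher mean S/N is preferred.
level_of_trial = np.array([0, 0, 1, 1])   # assumed factor-level assignment
effect = [sn[level_of_trial == k].mean() for k in (0, 1)]
```

Note that a smaller wear rate yields a larger (better) S/N ratio, so level comparisons are always "higher is better" regardless of the underlying objective.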
Lastly, a confirmation test was carried out to compare the experimental results with those predicted from the aforementioned correlation. The mathematical model reveals that the load has the maximum contribution to the wear rate compared to speed. A scanning electron microscope was used to analyze the worn wear surfaces. Wear results show that the 5 h homogenized Al-25Mg2Si2Cu4Ni alloy samples with T6 treatment had better volumetric wear resistance compared to the as-cast samples.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..342a2005A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..342a2005A"><span>Integration of Mahalanobis-<span class="hlt">Taguchi</span> system and traditional cost accounting for remanufacturing crankshaft</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abu, M. Y.; Norizan, N. S.; Rahman, M. S. Abd</p> <p>2018-04-01</p> <p>Remanufacturing is a strategic sustainability approach that transforms an end-of-life product to as-new performance, with a warranty the same as or better than that of the original product. In order to quantify the advantages of this strategy, all the processes must implement optimization to reach the ultimate goal and reduce the waste generated. The aim of this work is to evaluate the criticality of parameters of an end-of-life crankshaft based on Taguchi’s orthogonal array, and then to estimate the cost using traditional cost accounting by considering the critical parameters. By implementing the optimization, the remanufacturer produces lower cost and waste during production, with higher potential to gain profit. The Mahalanobis-<span class="hlt">Taguchi</span> System was proven to be a powerful <span class="hlt">method</span> of optimization that revealed the criticality of parameters.
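The Mahalanobis-Taguchi System referred to above flags a part as critical when its squared Mahalanobis distance from a healthy reference group is large. A toy sketch with synthetic reference data (the two measurement dimensions and all numbers are assumptions for illustration):

```python
import numpy as np

# Synthetic "healthy" reference group of two crankshaft measurements
# (e.g. a diameter and a runout); 50 in-spec observations.
rng = np.random.default_rng(1)
normal_group = rng.normal([10.0, 5.0], [0.1, 0.05], size=(50, 2))

mu = normal_group.mean(axis=0)                         # reference centre
cov_inv = np.linalg.inv(np.cov(normal_group, rowvar=False))

def mahalanobis_sq(x):
    # squared Mahalanobis distance of one observation from the reference
    d = np.asarray(x, dtype=float) - mu
    return float(d @ cov_inv @ d)
```

An observation far outside the reference scatter (say, a crankpin diameter a full millimetre off) produces a squared distance orders of magnitude above those of healthy parts, which is the signal MTS uses to mark a parameter as critical.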
When the <span class="hlt">method</span> was applied to the MAN engine model, 5 out of 6 crankpins were found to be critical and required grinding, while no changes were needed for the Caterpillar engine model. Meanwhile, the cost per unit for the MAN engine model changed from MYR 1401.29 to MYR 1251.29, while the Caterpillar engine model showed no change because no parameters were found to be critical. Therefore, by integrating optimization and costing through the remanufacturing process, a better decision can be made by observing the potential profit to be gained. The significance of the output is demonstrated through promoting sustainability: reducing the re-melting of damaged parts ensures a consistent benefit from returned cores.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26903773','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26903773"><span>An integrated <span class="hlt">Taguchi</span> and response surface methodological approach for the optimization of an HPLC <span class="hlt">method</span> to determine glimepiride in a supersaturatable self-nanoemulsifying formulation.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dash, Rajendra Narayan; Mohammed, Habibuddin; Humaira, Touseef</p> <p>2016-01-01</p> <p>We studied the application of <span class="hlt">Taguchi</span> orthogonal array (TOA) design during the development of an isocratic stability-indicating HPLC <span class="hlt">method</span> for glimepiride; as per the TOA design, twenty-seven experiments were conducted by varying six chromatographic factors. The percentage of organic phase had the most significant (p < 0.001) effect on retention time, while buffer pH had the most significant (p < 0.001) effect on tailing factor and theoretical plates.
The TOA design has a shortcoming: it identifies only linear effects, while ignoring quadratic and interaction effects. Hence, a response surface model for each response was created, including the linear, quadratic and interaction terms. The developed model for each response was found to be well predictive, bearing an acceptable adjusted correlation coefficient (0.9152 for retention time, 0.8985 for tailing factor and 0.8679 for theoretical plates). The models were found to be significant (p < 0.001), having a high F value for each response (15.76 for retention time, 13.12 for tailing factor and 9.99 for theoretical plates). The optimal chromatographic condition uses acetonitrile - potassium dihydrogen phosphate (pH 4.0; 30 mM) (50:50, v/v) as the mobile phase. The temperature, flow rate and injection volume were selected as 35 ± 2 °C, 1.0 mL min(-1) and 20 μL respectively. The <span class="hlt">method</span> was validated as per ICH guidelines and was found to be specific for analyzing glimepiride from a novel supersaturatable self-nanoemulsifying formulation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28442004','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28442004"><span>Multi-Response Optimization of Process Parameters for Imidacloprid Removal by Reverse Osmosis Using <span class="hlt">Taguchi</span> Design.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Genç, Nevim; Doğan, Esra Can; Narcı, Ali Oğuzhan; Bican, Emine</p> <p>2017-05-01</p> <p>In this study, a multi-response optimization <span class="hlt">method</span> using <span class="hlt">Taguchi</span>'s robust design approach is proposed for imidacloprid removal by reverse osmosis. Tests were conducted with different membrane types (BW30, LFC-3, CPA-3), transmembrane pressures (TMP = 20, 25, 30 bar), volume reduction factors (VRF = 2, 3, 4), and pH values (3, 7, 11).
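The response-surface step described above, augmenting the linear TOA model with quadratic and interaction terms, amounts to ordinary least squares on an expanded design matrix. A sketch on synthetic two-factor data (the coefficients are invented purely for illustration):

```python
import numpy as np

# Synthetic two-factor response with known linear, quadratic and
# interaction coefficients: y = 2 + 1.5*x1 - 0.8*x2 + 0.6*x1^2 + 0.3*x1*x2
rng = np.random.default_rng(0)
x1 = rng.uniform(-1.0, 1.0, 30)
x2 = rng.uniform(-1.0, 1.0, 30)
y = 2.0 + 1.5 * x1 - 0.8 * x2 + 0.6 * x1 ** 2 + 0.3 * x1 * x2

# Expanded design matrix: intercept, linear, quadratic, interaction terms.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x1 * x2])

# Ordinary least squares recovers the response-surface coefficients.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With noiseless data the fit recovers the generating coefficients exactly; with real chromatographic responses the same machinery yields the adjusted R² and F statistics quoted in the abstract.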
The quality and quantity of permeate are optimized with the multi-response characteristics of the total dissolved solids (TDS), conductivity, imidacloprid, and total organic carbon (TOC) rejection ratios and the permeate flux. The optimized conditions were determined as a membrane type of BW30, a TMP of 30 bar, a VRF of 3, and a pH of 11. Under these conditions, the TDS, conductivity, imidacloprid, and TOC rejections and the permeate flux were 97.50, 97.41, 97.80, and 98.00%, and 30.60 L/m²·h, respectively. Membrane type was found to be the most effective factor; its contribution is 64%. The difference between the predicted and observed values of the multi-response signal/noise (MRSN) is within the confidence interval.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JARS...10c5023I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JARS...10c5023I"><span>Assessing the transferability of a hybrid <span class="hlt">Taguchi</span>-objective function <span class="hlt">method</span> to optimize image segmentation for detecting and counting cave roosting birds using terrestrial laser scanning data</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Idrees, Mohammed Oludare; Pradhan, Biswajeet; Buchroithner, Manfred F.; Shafri, Helmi Zulhaidi Mohd; Khairunniza Bejo, Siti</p> <p>2016-07-01</p> <p>As far back as the early 15th century, during the reign of the Ming Dynasty (1368 to 1634 AD), Gomantong cave in Sabah (Malaysia) has been known as one of the largest roosting sites for wrinkle-lipped bats (Chaerephon plicata) and swiftlet birds (Aerodramus maximus and Aerodramus fuciphagus) in very large colonies. Until recently, no study has been done to quantify or estimate the colony sizes of these inhabitants in spite of the grave danger posed to this avifauna by human activities and potential habitat loss to postspeleogenetic processes.
This paper evaluates the transferability of a hybrid optimization image analysis-based <span class="hlt">method</span> developed to detect and count cave roosting birds. The <span class="hlt">method</span> utilizes a high-resolution terrestrial laser scanning intensity image. First, segmentation parameters were optimized by integrating objective function and statistical <span class="hlt">Taguchi</span> <span class="hlt">methods</span>. Thereafter, the optimized parameters were used as input into the segmentation and classification processes using two images selected from Simud Hitam (lower cave) and Simud Putih (upper cave) of the Gomantong cave. The result shows that the <span class="hlt">method</span> is capable of detecting birds (and bats) from the image for accurate population censusing. A total of 9998 swiftlet birds were counted from the first image, while 1132 individuals, comprising both bats and birds, were obtained from the second image. Furthermore, the transferability evaluation yielded overall accuracies of 0.93 and 0.94 (area under receiver operating characteristic curve) for the first and second image, respectively, with a p value of <0.0001 at the 95% confidence level. The findings indicate that the <span class="hlt">method</span> is not only efficient for the detection and counting of the cave birds for which it was developed but is also useful for counting bats; thus, it can be adopted in any cave.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..184a2018M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..184a2018M"><span>Abrasive wear response of TIG-melted TiC composite coating: <span class="hlt">Taguchi</span> approach</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Maleque, M. A.; Bello, K. A.; Adebisi, A.
A.; Dube, A.</p> <p>2017-03-01</p> <p>In this study, the <span class="hlt">Taguchi</span> design-of-experiments approach has been <span class="hlt">applied</span> to assess the wear behaviour of TiC composite coatings deposited on AISI 4340 steel substrates by novel powder preplacement and TIG torch melting processes. To study the abrasive wear behaviour of these coatings against an alumina ball at 600 °C, a Taguchi orthogonal array is used to acquire the wear test data for determining optimal parameters that lead to the minimization of the wear rate. Composite coatings are developed based on Taguchi’s L-16 orthogonal array experiment with four process parameters (welding current, welding speed, welding voltage and shielding gas flow rate) at four levels. In this technique, the mean response and signal-to-noise ratio are used to evaluate the influence of the TIG process parameters on the wear rate performance of the composite coated surfaces. The results reveal that welding voltage is the most significant control parameter for minimizing the wear rate, while the current presents the least contribution to the wear rate reduction. The study also shows that the best condition is arrived at with A3 (90 A), B4 (2.5 mm/s), C3 (30 V) and D3 (20 L/min), which gives the minimum wear rate in TiC embedded coatings. Finally, a confirmatory experiment has been conducted to verify the optimized result and shows that the error between the predicted values and the experimental observation at the optimal condition lies within the limit of 4.7 %.
Thus, the validity of the optimum condition for the coatings is established.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015OptLE..67...94R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015OptLE..67...94R"><span>Parameters optimization of laser brazing in crimping butt using <span class="hlt">Taguchi</span> and BPNN-GA</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rong, Youmin; Zhang, Zhen; Zhang, Guojun; Yue, Chen; Gu, Yafei; Huang, Yu; Wang, Chunming; Shao, Xinyu</p> <p>2015-04-01</p> <p>Laser brazing (LB) is widely used in the automotive industry due to the advantages of high speed, small heat affected zone, high quality of the welding seam, and low heat input. Welding parameters play a significant role in determining the bead geometry and hence the quality of the weld joint. This paper addresses the optimization of the seam shape in the LB process for a welded crimping butt joint of 0.8 mm thickness using a back propagation neural network (BPNN) and a genetic algorithm (GA). A 3-factor, 5-level welding experiment is conducted using a <span class="hlt">Taguchi</span> L25 orthogonal array through the statistical design <span class="hlt">method</span>. The input parameters are welding speed, wire feed rate, and gap, each at 5 levels. The output results are the efficient connection lengths of the left and right sides, and the top width (WT) and bottom width (WB) of the weld bead. The experimental results are fed into the BPNN to establish the relationship between the input and output variables. The predictions of the BPNN are then fed to the GA, which optimizes the process parameters subject to the objectives. Then, the effects of welding speed (WS), wire feed rate (WF), and gap (GAP) on the bead geometry are discussed.
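The BPNN-GA coupling described above can be imitated with a toy surrogate standing in for the trained network: the GA searches the welding-parameter box for the settings the surrogate scores best. Everything below (the surrogate function, its optimum, the bounds, and the GA settings) is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the trained BPNN: maps (welding speed, wire feed rate, gap)
# to a weld-quality score. The quadratic form and its optimum are invented.
def surrogate(x):
    target = np.array([4.0, 3.0, 0.2])   # assumed best parameter set
    return -np.sum((np.asarray(x) - target) ** 2, axis=-1)

lo = np.array([2.0, 1.0, 0.0])           # assumed lower parameter bounds
hi = np.array([6.0, 5.0, 0.5])           # assumed upper parameter bounds

# Minimal real-coded GA: tournament selection, blend crossover, mutation.
pop = rng.uniform(lo, hi, size=(40, 3))
for _ in range(60):
    fit = surrogate(pop)
    # tournament selection between random pairs of individuals
    i, j = rng.integers(0, len(pop), size=(2, len(pop)))
    parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
    # blend crossover between consecutive parents
    alpha = rng.random((len(pop), 1))
    children = alpha * parents + (1.0 - alpha) * np.roll(parents, 1, axis=0)
    # Gaussian mutation, clipped back to the parameter box
    children += rng.normal(0.0, 0.05, children.shape)
    pop = np.clip(children, lo, hi)

best = pop[np.argmax(surrogate(pop))]    # GA-recommended parameter set
```

In the paper's workflow the surrogate would be the BPNN trained on the L25 experiments, so each GA evaluation is a cheap network forward pass rather than a new weld.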
Eventually, confirmation experiments are carried out to demonstrate that the optimal values are effective and reliable. On the whole, the proposed hybrid <span class="hlt">method</span>, BPNN-GA, can be used to guide the actual work and improve the efficiency and stability of the LB process.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JIEIC..98..541S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JIEIC..98..541S"><span>An Approach to Maximize Weld Penetration During TIG Welding of P91 Steel Plates by Utilizing Image Processing and <span class="hlt">Taguchi</span> Orthogonal Array</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Singh, Akhilesh Kumar; Debnath, Tapas; Dey, Vidyut; Rai, Ram Naresh</p> <p>2017-10-01</p> <p>P-91 is modified 9Cr-1Mo steel. Fabricated structures and components of P-91 have many applications in the power and chemical industries owing to excellent properties such as high-temperature stress corrosion resistance and low susceptibility to thermal fatigue at high operating temperatures. The weld quality and surface finish of fabricated P91 structures are very good when welded by Tungsten Inert Gas (TIG) welding. However, the process has its limitation regarding weld penetration. The success of a welding process lies in fabricating with such a combination of parameters that gives maximum weld penetration and minimum weld width. To carry out an investigation on the effect of the autogenous TIG welding parameters on weld penetration and weld width, bead-on-plate welds were carried out on P91 plates of thickness 6 mm in accordance with a <span class="hlt">Taguchi</span> L9 design. Welding current, welding speed and gas flow rate were the three control variables in the investigation.
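The Taguchi L9 design mentioned above is a 9-run orthogonal array with four three-level columns; for the three welding variables it replaces the 27 runs of a full 3^3 factorial. One standard construction over GF(3), sketched here for illustration:

```python
import itertools
import numpy as np

# Standard L9(3^4) orthogonal array: for level indices a, b in {0, 1, 2},
# the four columns are (a, b, a+b mod 3, a+2b mod 3). Three columns would
# carry current, speed and gas flow rate; the fourth is spare.
rows = [[a, b, (a + b) % 3, (a + 2 * b) % 3]
        for a, b in itertools.product(range(3), repeat=2)]
L9 = np.array(rows)

def is_orthogonal(arr):
    # Orthogonality: every pair of columns contains each of the 9 possible
    # level pairs exactly once (9 rows, 9 distinct pairs).
    n_cols = arr.shape[1]
    for c1 in range(n_cols):
        for c2 in range(c1 + 1, n_cols):
            pairs = {(int(p), int(q)) for p, q in zip(arr[:, c1], arr[:, c2])}
            if len(pairs) != 9:
                return False
    return True
```

Balance across column pairs is what lets the main effect of each factor be estimated independently from only nine welds.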
After autogenous TIG welding, the dimensions of the weld width, weld penetration and weld area were successfully measured by an image analysis technique developed for the study. The maximum error in the measured dimensions of the weld width, penetration and area with the developed image analysis technique was only 2% compared to the measurements of the Leica-Q-Win-V3 software installed in an optical microscope. The measurements with the developed software, unlike measurements under a microscope, required minimal human intervention. An Analysis of Variance (ANOVA) confirms the significance of the selected parameters. Thereafter, <span class="hlt">Taguchi</span>'s <span class="hlt">method</span> was successfully used to trade off between maximum penetration and minimum weld width while keeping the weld area at a minimum.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014PhDT.......344M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014PhDT.......344M"><span>Estudio numerico y experimental del proceso de soldeo MIG sobre la aleacion 6063--T5 utilizando el metodo de <span class="hlt">Taguchi</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Meseguer Valdenebro, Jose Luis</p> <p></p> <p>Electric arc welding processes are among the most widely used techniques for manufacturing mechanical components in modern industry. They have been adapted to current needs, becoming a flexible and versatile way to manufacture. Numerical results of the welding process are validated experimentally. The three numerical <span class="hlt">methods</span> most commonly used today are the finite difference <span class="hlt">method</span>, the finite element <span class="hlt">method</span> and the finite volume <span class="hlt">method</span>.
The most widely used numerical <span class="hlt">method</span> for modeling welded joints is the finite element <span class="hlt">method</span>, because it adapts well to the geometric and boundary conditions and because a variety of commercial programs use the finite element <span class="hlt">method</span> as their calculation basis. This thesis presents an experimental study of a welded joint in aluminum alloy 6063-T5 produced by the MIG welding process. The numerical model is validated experimentally by <span class="hlt">applying</span> the finite element <span class="hlt">method</span> through the calculation program ANSYS. The experimental results in this work are the cooling curves, the critical cooling time t4/3, the weld bead geometry, the microhardness obtained in the welded joint, the heat-affected zone of the base metal, the process dilution, and the critical areas intersected by the cooling curves and the TTP curve. The numerical results obtained in this thesis are the thermal cycle curves, which represent both the heating to maximum temperature and the subsequent cooling. The critical cooling time t4/3 and the thermal efficiency of the process are calculated, and the experimentally obtained bead geometry is represented. The heat-affected zone is obtained by differentiating the zones found at different temperatures, together with the critical areas intersected by the cooling curves and the TTP curve.
To conclude this doctoral thesis, an optimization of the welding parameters has been conducted by means of the <span class="hlt">Taguchi</span> <span class="hlt">method</span> in order to obtain an</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ResPh...9..987G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ResPh...9..987G"><span>Effect of injection parameters on mechanical and physical properties of super ultra-thin wall propylene packaging by <span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ginghtong, Thatchanok; Nakpathomkun, Natthapon; Pechyen, Chiravoot</p> <p>2018-06-01</p> <p>The parameters of the plastic injection molding process have been investigated for the manufacture of a 64 oz. ultra-thin polypropylene bucket. Three main parameters, namely injection speed, melt temperature and holding pressure, were investigated to study their effect on the physical appearance and compressive strength. A <span class="hlt">Taguchi</span> L9 (3³) orthogonal array was used to carry out the experimental plan. The physical properties were measured and the compressive strength was determined using linear regression analysis. Differential scanning calorimetry (DSC) was used to analyze the crystalline structure of the product. The optimization results show that the proposed approach can help engineers identify optimal process parameters and achieve competitive advantages in energy consumption and product quality. The injection molding settings of 24 mm shot stroke, 1.47 mm transfer position, 268 rpm screw speed, 100 mm/s injection speed, 172 ton clamping force, 800 kgf holding pressure, 0.9 s holding time and 1.4 s cooling time give satisfactory product shape and proportions.
The percentage contributions of the parameters are injection speed 71.07%, melt temperature 23.31% and holding pressure 5.62%. The product was able to withstand a compressive load of up to 839 N before deforming plastically. The low melting temperature was caused by the superior crystalline structure of the super-ultra-thin wall product, which leads to a lower compressive strength.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AIPC.1315..993G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AIPC.1315..993G"><span>Application of <span class="hlt">Taguchi</span> <span class="hlt">Method</span> for Analyzing Factors Affecting the Performance of Coated Carbide Tool When Turning FCD700 in Dry Cutting Condition</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ghani, Jaharah A.; Mohd Rodzi, Mohd Nor Azmi; Zaki Nuawi, Mohd; Othman, Kamal; Rahman, Mohd. Nizam Ab.; Haron, Che Hassan Che; Deros, Baba Md</p> <p>2011-01-01</p> <p>Machining is one of the most important manufacturing processes in modern industry, especially for finishing automotive components after primary manufacturing processes such as casting and forging. In this study, the turning parameters of cutting environment (without air, normal air and chilled air), cutting speed, and feed rate are evaluated using a <span class="hlt">Taguchi</span> optimization methodology. An L27 (3¹³) orthogonal array, signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are employed to analyze the effect of these turning parameters on the performance of a coated carbide tool. The results show that tool life is affected by the cutting speed, feed rate and cutting environment with contributions of 38%, 32% and 27% respectively.
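The signal-to-noise ratios that such Taguchi analyses maximize come in standard forms: larger-the-better for responses like tool life and smaller-the-better for responses like surface roughness. A sketch with made-up replicate measurements (not the paper's data):

```python
import numpy as np

def sn_larger_better(y):
    # S/N = -10*log10(mean(1/y^2)); maximized for responses like tool life.
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller_better(y):
    # S/N = -10*log10(mean(y^2)); maximized for responses like roughness.
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y ** 2))

# Illustrative replicate measurements for one trial condition:
tool_life_min = [22.0, 25.0, 24.0]
roughness_um = [0.82, 0.95, 0.88]

print(round(sn_larger_better(tool_life_min), 2))
print(round(sn_smaller_better(roughness_um), 2))
```

In either form, a higher S/N value is better, so the same "pick the level with the highest mean S/N" rule applies to both responses.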
For the surface roughness, the feed rate significantly controlled the machined surface produced, with a contribution of 77%, followed by the cutting environment at 19%. The cutting speed was found to be insignificant in controlling the machined surface produced. The study shows that the cutting environment factor should be considered in dry machining in order to produce longer tool life as well as to obtain a good machined surface.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15812798','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15812798"><span>Anaerobic treatment of complex chemical wastewater in a sequencing batch biofilm reactor: process optimization and evaluation of factor interactions using the <span class="hlt">Taguchi</span> dynamic DOE methodology.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Venkata Mohan, S; Chandrasekhara Rao, N; Krishna Prasad, K; Murali Krishna, P; Sreenivas Rao, R; Sarma, P N</p> <p>2005-06-20</p> <p>The <span class="hlt">Taguchi</span> robust experimental design (DOE) methodology has been <span class="hlt">applied</span> to a dynamic anaerobic process treating complex wastewater in an anaerobic sequencing batch biofilm reactor (AnSBBR). To optimize the process as well as to evaluate the influence of different factors on it, the uncontrollable (noise) factors have been considered. The <span class="hlt">Taguchi</span> methodology adopting a dynamic approach is the first of its kind for anaerobic process evaluation and optimization. The designed experimental methodology consisted of four phases (planning, conducting, analysis, and validation) connected in sequence to achieve the overall optimization.
In the experimental design, five controllable factors, i.e., organic loading rate (OLR), inlet pH, biodegradability (BOD/COD ratio), temperature, and sulfate concentration, along with two uncontrollable (noise) factors, volatile fatty acids (VFA) and alkalinity, each at two levels, were considered for optimization of the anaerobic system. Thirty-two anaerobic experiments were conducted with different combinations of factors, and the results obtained in terms of substrate degradation rates were processed in Qualitek-4 software to study the main effects of individual factors, the interactions between factors, and the signal-to-noise (S/N) ratio. Attempts were also made to achieve optimum conditions. Studies on the influence of individual factors on process performance revealed the strong effect of OLR. In multiple-factor interaction studies, the interactions of biodegradability with other factors, such as temperature, pH, and sulfate, showed the greatest influence on process performance. The optimum conditions for efficient performance of the anaerobic system in treating complex wastewater, considering the dynamic (noise) factors, are a higher organic loading rate of 3.5 kg COD/m3 day, neutral pH with high biodegradability (BOD/COD ratio of 0.5), a mesophilic temperature (40 degrees C), and</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014RJPCA..88.1241G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014RJPCA..88.1241G"><span>Preparation of photocatalytic ZnO nanoparticles and application in photochemical degradation of betamethasone sodium phosphate using <span class="hlt">taguchi</span> approach</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Giahi, M.; Farajpour, G.; Taghavi, H.; Shokri, S.</p> <p>2014-07-01</p> <p>In this study, ZnO nanoparticles were prepared by a sol-gel <span
class="hlt">method</span> for the first time. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was used to identify the factors that may affect the degradation percentage of betamethasone sodium phosphate in wastewater in a UV/K2S2O8/nano-ZnO system. Our experimental design tested five factors, i.e., dosage of K2S2O8, concentration of betamethasone sodium phosphate, amount of ZnO, irradiation time and initial pH, with four levels of each factor. The optimum parameters were found to be: irradiation time, 180 min; pH 9.0; betamethasone sodium phosphate, 30 mg/L; amount of ZnO, 13 mg; K2S2O8, 1 mM. The percentage contribution of each factor was determined by analysis of variance (ANOVA). The results showed that irradiation time, pH, amount of ZnO, drug concentration and dosage of K2S2O8 contributed 46.73, 28.56, 11.56, 6.70, and 6.44%, respectively. Finally, the kinetics were studied and the photodegradation rate of betamethasone sodium phosphate was found to obey a pseudo-first-order kinetic equation represented by the Langmuir-Hinshelwood model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1855b0015S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1855b0015S"><span>Application of <span class="hlt">Taguchi</span>-grey <span class="hlt">method</span> to optimize drilling of EMS 45 steel using minimum quantity lubrication (MQL) with multiple performance characteristics</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Soepangkat, Bobby O.
P.; Suhardjono; Pramujati, Bambang</p> <p>2017-06-01</p> <p>Machining under minimum quantity lubrication (MQL) has drawn the attention of researchers as an alternative to the traditionally used wet and dry machining conditions, with the purpose of minimizing cooling and lubricating costs as well as reducing cutting zone temperature, tool wear, and hole surface roughness. Drilling is one of the important operations in assembling machine components. The objective of this study was to optimize the effects of drilling parameters such as cutting feed, cutting speed, drill type and drill point angle on the thrust force, torque, hole surface roughness and tool flank wear in drilling EMS 45 tool steel using MQL. In this study, experiments were carried out as per the <span class="hlt">Taguchi</span> design of experiments, with an L18 orthogonal array used to study the influence of various combinations of drilling parameters and tool geometries on the thrust force, torque, hole surface roughness and tool flank wear. The optimum drilling parameters were determined using the grey relational grade obtained from grey relational analysis of the multiple performance characteristics. The drilling experiments were carried out using twist drills and a CNC machining center.
This work is useful for selecting optimum values of various drilling parameters and tool geometries that not only minimize the thrust force and torque, but also reduce hole surface roughness and tool flank wear.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=applied+AND+research&pg=3&id=EJ963391','ERIC'); return false;" href="https://eric.ed.gov/?q=applied+AND+research&pg=3&id=EJ963391"><span>Reflections on Mixing <span class="hlt">Methods</span> in <span class="hlt">Applied</span> Linguistics Research</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Hashemi, Mohammad R.</p> <p>2012-01-01</p> <p>This commentary advocates the use of mixed <span class="hlt">methods</span> research--that is, the integration of qualitative and quantitative <span class="hlt">methods</span> in a single study--in <span class="hlt">applied</span> linguistics.
Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing <span class="hlt">methods</span> as a new trend in <span class="hlt">applied</span> linguistics are put forward.…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28898905','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28898905"><span><span class="hlt">Taguchi</span> Experimental Design for Optimization of Recombinant Human Growth Hormone Production in CHO Cell Lines and Comparing its Biological Activity with Prokaryotic Growth Hormone.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Aghili, Zahra Sadat; Zarkesh-Esfahani, Sayyed Hamid</p> <p>2018-02-01</p> <p>Growth hormone deficiency results in growth retardation in children and the GH deficiency syndrome in adults; patients need to receive recombinant GH in order to rectify the GH deficiency symptoms. Mammalian cells have become the favored system for production of recombinant proteins for clinical application compared to prokaryotic systems because of their capability for appropriate protein folding, assembly, post-translational modification and proper signaling. However, the production level in mammalian cells is generally low compared to prokaryotic hosts. <span class="hlt">Taguchi</span> established orthogonal arrays to describe a large number of experimental situations, mainly to reduce experimental errors and to enhance the efficiency and reproducibility of laboratory experiments. In the present study, rhGH was produced in CHO cells and its production was assessed using dot blotting, western blotting and ELISA. For optimization of rhGH production in CHO cells using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, an M16 orthogonal experimental design was used to investigate four different culture components.
The biological activity of rhGH was assessed using the LHRE-TK-Luciferase reporter gene system in HEK-293 cells and compared to the biological activity of prokaryotic rhGH. A maximal productivity of rhGH was reached under the conditions of 1% DMSO, 1% glycerol, 25 µM ZnSO4 and 0 mM NaBu. Our findings indicate that control of culture conditions, such as the addition of chemical components, helps to develop an efficient large-scale and industrial process for the production of rhGH in CHO cells. Results of the bioassay indicated that rhGH produced by CHO cells is able to induce GH-mediated intracellular signaling and showed higher bioactivity than prokaryotic GH at the same concentrations. © Georg Thieme Verlag KG Stuttgart · New York.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_6");'>6</a></li> <li><a href="#" onclick='return showDiv("page_7");'>7</a></li> <li class="active"><span>8</span></li> <li><a href="#" onclick='return showDiv("page_9");'>9</a></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_8 --> <div id="page_9" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_7");'>7</a></li> <li><a href="#" onclick='return showDiv("page_8");'>8</a></li> <li class="active"><span>9</span></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li><a href="#" onclick='return showDiv("page_11");'>11</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="161"> <li> <p><a target="_blank"
onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1943b0063V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1943b0063V"><span>Comparative study of coated and uncoated tool inserts with dry machining of EN47 steel using <span class="hlt">Taguchi</span> L9 optimization technique</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vasu, M.; Shivananda, Nayaka H.</p> <p>2018-04-01</p> <p>EN47 steel samples are machined on a self-centered lathe using Chemical Vapor Deposition coated TiCN/Al2O3/TiN and uncoated tungsten carbide tool inserts with nose radius 0.8 mm. The results are compared with each other and optimized using a statistical tool. The input (cutting) parameters considered in this work are feed rate (f), cutting speed (Vc), and depth of cut (ap); the optimization criteria are based on the <span class="hlt">Taguchi</span> (L9) orthogonal array. The ANOVA <span class="hlt">method</span> is adopted to evaluate the statistical significance and the percentage contribution of each model. Multiple response characteristics, namely cutting force (Fz), tool tip temperature (T) and surface roughness (Ra), are evaluated. The results reveal that the coated tool insert (TiCN/Al2O3/TiN) performs 1.27 and 1.29 times better than the uncoated tool insert in tool tip temperature and surface roughness, respectively.
A slight increase in cutting force was observed for coated tools.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27411334','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27411334"><span>Improved production of tannase by Klebsiella pneumoniae using Indian gooseberry leaves under submerged fermentation using <span class="hlt">Taguchi</span> approach.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kumar, Mukesh; Singh, Amrinder; Beniwal, Vikas; Salar, Raj Kumar</p> <p>2016-12-01</p> <p>Tannase (tannin acyl hydrolase E.C 3.1.1.20) is an inducible, largely extracellular enzyme that causes the hydrolysis of ester and depside bonds present in various substrates. Large scale industrial application of this enzyme is very limited owing to its high production costs. In the present study, cost effective production of tannase by Klebsiella pneumoniae KP715242 was studied under submerged fermentation using different tannin rich agro-residues like Indian gooseberry leaves (Phyllanthus emblica), Black plum leaves (Syzygium cumini), Eucalyptus leaves (Eucalyptus glogus) and Babul leaves (Acacia nilotica). Among all agro-residues, Indian gooseberry leaves were found to be the best substrate for tannase production under submerged fermentation. Sequential optimization approach using <span class="hlt">Taguchi</span> orthogonal array screening and response surface methodology was adopted to optimize the fermentation variables in order to enhance the enzyme production. Eleven medium components were screened primarily by <span class="hlt">Taguchi</span> orthogonal array design to identify the most contributing factors towards the enzyme production. The four most significant contributing variables affecting tannase production were found to be pH (23.62 %), tannin extract (20.70 %), temperature (20.33 %) and incubation time (14.99 %). 
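Percentage contributions like those reported here (pH 23.62%, tannin extract 20.70%, and so on) are typically obtained by expressing each factor's sum of squares as a share of the total sum of squares. A minimal sketch with toy data; the design, responses, and resulting shares are illustrative, not the study's:

```python
import numpy as np

def percent_contribution(levels, y):
    """ANOVA-style percent contribution of each factor.

    levels: (runs, factors) array of level codes; y: (runs,) responses.
    Returns one percentage per factor; any remainder is error/interaction.
    """
    levels = np.asarray(levels)
    y = np.asarray(y, dtype=float)
    ss_total = np.sum((y - y.mean()) ** 2)
    shares = []
    for f in range(levels.shape[1]):
        ss_f = 0.0
        for lv in np.unique(levels[:, f]):
            sel = y[levels[:, f] == lv]
            # Between-level sum of squares for this factor.
            ss_f += len(sel) * (sel.mean() - y.mean()) ** 2
        shares.append(100 * ss_f / ss_total)
    return shares

# Toy 4-run, 2-factor, 2-level design (two columns of an L4 array):
L4 = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [10.0, 12.0, 20.0, 26.0]
print([round(s, 1) for s in percent_contribution(L4, y)])  # → [87.8, 9.8]
```

Note that the shares need not sum to 100%; the residual (here about 2.4%) is attributed to error and unmodeled interactions, which is how screening studies like this one rank factors before a finer optimization.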
These factors were further optimized with a central composite design using response surface methodology. Maximum tannase production was observed at pH 5.52, 39.72 °C, 91.82 h of incubation and 2.17% tannin content. The enzyme activity was enhanced 1.26-fold under these optimized conditions. The present study emphasizes the use of agro-residues as a potential substrate with the aim of lowering the input costs of tannase production so that the enzyme can be used proficiently for commercial purposes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26964963','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26964963"><span>Ultrasonically assisted hydrothermal synthesis of activated carbon-HKUST-1-MOF hybrid for efficient simultaneous ultrasound-assisted removal of ternary organic dyes and antibacterial investigation: <span class="hlt">Taguchi</span> optimization.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Azad, F Nasiri; Ghaedi, M; Dashtian, K; Hajati, S; Pezeshkpour, V</p> <p>2016-07-01</p> <p>An activated carbon (AC) composite with the HKUST-1 metal organic framework (AC-HKUST-1 MOF) was prepared by an ultrasonically assisted hydrothermal <span class="hlt">method</span>, characterized by FTIR, SEM and XRD analysis, and later <span class="hlt">applied</span> for the simultaneous ultrasound-assisted removal of crystal violet (CV), disulfine blue (DSB) and quinoline yellow (QY) dyes in their ternary solution. In addition, this material was screened in vitro for antibacterial activity against Methicillin-resistant Staphylococcus aureus (MRSA) and Pseudomonas aeruginosa (PAO1) bacteria. In the dye removal process, the effects of important variables such as initial dye concentration, adsorbent mass, pH and sonication time on the adsorption process were optimized by the <span class="hlt">Taguchi</span> approach.
Optimum values of 4, 0.02 g, 4 min and 10 mg L(-1) were obtained for pH, AC-HKUST-1 MOF mass, sonication time and the concentration of each dye, respectively. At the optimized conditions, the removal percentages of CV, DSB and QY were found to be 99.76%, 91.10%, and 90.75%, respectively, with a desirability of 0.989. The kinetics of the adsorption processes follow a pseudo-second-order model. The Langmuir model best represented the experimental data; the maximum monolayer adsorption capacities for CV, DSB and QY on AC-HKUST-1 were estimated to be 133.33, 129.87 and 65.37 mg g(-1), significantly higher than those of HKUST-1 as the sole material (59.45, 57.14 and 38.80 mg g(-1), respectively). Copyright © 2016 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018HMT...tmp...87C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018HMT...tmp...87C"><span>Thermal design, rating and second law analysis of shell and tube condensers based on <span class="hlt">Taguchi</span> optimization for waste heat recovery based thermal desalination plants</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chandrakanth, Balaji; Venkatesan, G; Prakash Kumar, L. S. S; Jalihal, Purnima; Iniyan, S</p> <p>2018-03-01</p> <p>The present work discusses the design and selection of a shell and tube condenser used in Low Temperature Thermal Desalination (LTTD). To optimize the key geometrical and process parameters of the condenser, with multiple parameters and levels, a design of experiments approach using the <span class="hlt">Taguchi</span> <span class="hlt">method</span> was chosen. An orthogonal array (OA) of 25 designs was selected for this study.
The condenser was designed and analysed using HTRI software, which was also used to compute the heat transfer area and the corresponding tube-side pressure drop; these two objective functions determine the capital and running costs of the condenser. There was a complex trade-off between the heat transfer area and the pressure drop in the analysis, so a second law analysis was carried out to determine the optimal heat transfer area versus pressure drop for condensing the required heat load.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23155599','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23155599"><span>[Montessori <span class="hlt">method</span> <span class="hlt">applied</span> to dementia - literature review].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Brandão, Daniela Filipa Soares; Martín, José Ignacio</p> <p>2012-06-01</p> <p>The Montessori <span class="hlt">method</span> was initially <span class="hlt">applied</span> to children, but it has now also been <span class="hlt">applied</span> to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this <span class="hlt">method</span> using the Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori <span class="hlt">method</span>. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reductions in negative affect and passive engagement. Nevertheless, systematic reviews of this non-pharmacological intervention in dementia rate this <span class="hlt">method</span> as weak in terms of effectiveness.
This apparent discrepancy can be explained either because the Montessori <span class="hlt">method</span> may in fact have only a small influence on dimensions such as behavioral problems, or because there is no research on this <span class="hlt">method</span> with high levels of control, such as the presence of several control groups or a double-blind design.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1986mupz.book.....G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1986mupz.book.....G"><span>The averaging <span class="hlt">method</span> in <span class="hlt">applied</span> problems</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Grebenikov, E. A.</p> <p>1986-04-01</p> <p>The book presents the family of techniques for studying complicated nonlinear oscillating systems known in the literature as the averaging <span class="hlt">method</span>. The author describes the constructive part of this <span class="hlt">method</span>, that is, its concrete form and corresponding algorithms, on mathematical models that are sufficiently general but built from concrete problems. The book is written so that a reader interested in the techniques and algorithms of the asymptotic theory of ordinary differential equations can solve such problems independently.
It is intended for specialists in the area of <span class="hlt">applied</span> mathematics and mechanics.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003SPIE.5020..215R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003SPIE.5020..215R"><span>Optimized selection of benchmark test parameters for image watermark algorithms based on <span class="hlt">Taguchi</span> <span class="hlt">methods</span> and corresponding influence on design decisions for real-world applications</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rodriguez, Tony F.; Cushman, David A.</p> <p>2003-06-01</p> <p>With the growing commercialization of watermarking techniques in various application scenarios it has become increasingly important to quantify the performance of watermarking products. The quantification of the relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans and methodologies to ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion on the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating product performance, if they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design of experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group.
A <span class="hlt">Taguchi</span> Loss Function is proposed for the application, and orthogonal arrays are used to isolate optimal levels for a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/4019980','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/biblio/4019980"><span><span class="hlt">METHOD</span> OF <span class="hlt">APPLYING</span> METALLIC COATINGS</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Robinson, J.W.; Eubank, L.D.</p> <p>1961-08-01</p> <p>A <span class="hlt">method</span> for <span class="hlt">applying</span> a protective coating to a uranium rod is described. The steps include preheating the uranium rod to the coating temperature, placing the rod between two rotating rollers, pouring a coating metal such as aluminum-silicon in molten form between one of the rotating rollers and the uranium rod, and rotating the rollers continually until the coating is built up to the desired thickness. (AEC)</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010PhDT.......471S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010PhDT.......471S"><span>Observation-Driven Configuration of Complex Software Systems</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sage, Aled</p> <p>2010-06-01</p> <p>The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune due to emergent properties and non-deterministic behaviour.
Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding on the configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on <span class="hlt">Taguchi</span> <span class="hlt">Methods</span>. These statistical <span class="hlt">methods</span> have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study, <span class="hlt">applying</span> the combination of ACT and <span class="hlt">Taguchi</span> <span class="hlt">Methods</span> to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of <span class="hlt">Taguchi</span> <span class="hlt">Methods</span> for configuring complex software systems. 
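The core idea behind using an orthogonal array to decide which configurations to measure, as in the ACT case study above, can be sketched in a few lines. The four tuning factors and their levels below are hypothetical stand-ins for directory-server knobs, not DC-Directory's actual parameters:

```python
# Sketch: a Taguchi L9(3^4) orthogonal array selects 9 configurations to
# measure instead of all 3^4 = 81 combinations, while still exercising every
# pair of factor levels a balanced number of times. Factors are hypothetical.

L9 = [  # standard L9 orthogonal array; entries are level indices 0..2
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

factors = {
    "cache_mb":       [64, 256, 1024],
    "worker_threads": [4, 16, 64],
    "sync_policy":    ["none", "lazy", "strict"],
    "batch_size":     [10, 100, 1000],
}

names = list(factors)
configs = [{n: factors[n][row[i]] for i, n in enumerate(names)} for row in L9]
for c in configs:
    print(c)   # each dict is one experiment to run against the system
```

Each column of the array contains every level exactly three times, which is what lets per-factor effects be estimated from only nine runs.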
<span class="hlt">Taguchi</span> <span class="hlt">Methods</span> were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25942836','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25942836"><span>Influence of process parameters on the content of biomimetic calcium phosphate coating on titanium: a <span class="hlt">Taguchi</span> analysis.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Thammarakcharoen, Faungchat; Suvannapruk, Waraporn; Suwanprateeb, Jintamai</p> <p>2014-10-01</p> <p>In this study, a statistical design of experiments methodology based on <span class="hlt">Taguchi</span> orthogonal design has been used to study the effect of various processing parameters on the amount of calcium phosphate coating produced by such a technique. Seven control factors with three levels each, including sodium hydroxide concentration, pretreatment temperature, pretreatment time, cleaning <span class="hlt">method</span>, coating time, coating temperature and surface area to solution volume ratio, were studied. X-ray diffraction revealed that all the coatings consisted of a mixture of octacalcium phosphate (OCP) and hydroxyapatite (HA), and that the presence of each phase depended on the process conditions used. Depending on the process conditions employed, either isolated spheroid particles of varying content and size (~1-100 μm) with nanosized plate-like morphology deposited on the titanium surface, or a continuous layer of plate-like nanocrystals with plate thickness in the range of ~100-300 nm and plate width in the range of 3-8 μm, was formed. 
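The step from orthogonal-array responses to a predicted optimum condition, as in the coating study above, is essentially per-level averaging (a main-effects analysis). A minimal sketch with a hypothetical L9 design and made-up responses, assuming a larger-is-better characteristic such as coating amount:

```python
# Sketch: main-effects analysis for an orthogonal experiment. Given one
# response per run, average the responses at each level of each factor;
# the level with the best mean is the predicted optimum setting.
# The design and responses below are hypothetical (an L9 with 4 factors).

L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]
responses = [12.1, 15.3, 14.2, 16.8, 13.9, 15.0, 14.4, 16.1, 15.7]  # e.g. coating mass

n_factors = len(L9[0])
best = []
for f in range(n_factors):
    level_means = []
    for lvl in range(3):
        vals = [responses[r] for r in range(len(L9)) if L9[r][f] == lvl]
        level_means.append(sum(vals) / len(vals))
    best.append(max(range(3), key=lambda l: level_means[l]))  # larger-is-better
print("predicted optimum levels per factor:", best)
```

The spread between a factor's level means is also the raw material for the contribution percentages reported in such studies: a factor whose level means barely differ contributes little.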
The optimum condition for producing the greatest amount of coating on the titanium surface (sodium hydroxide concentration of 1 M, pretreatment temperature of 70 degrees C, pretreatment time of 24 h, ultrasonic cleaning, coating time of 6 h, coating temperature of 50 degrees C and surface area to solution volume ratio of 32.74) was predicted and validated. In addition, coating temperature was found to be the dominant factor with the greatest contribution to coating formation, while coating time and cleaning <span class="hlt">method</span> were significant factors. Other factors had negligible effects on the coating performance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS.944a2072K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS.944a2072K"><span>Electronic-projecting Moire <span class="hlt">method</span> <span class="hlt">applying</span> CBR-technology</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.</p> <p>2018-01-01</p> <p>An electronic-projecting <span class="hlt">method</span> based on the Moire effect for examining surface topology is suggested. The conditions for forming Moire fringes, and the dependence of their parameters on the reference parameters of the object and virtual grids, are analyzed. The control system structure and decision-making subsystem are elaborated. The subsystem employs CBR technology, based on <span class="hlt">applying</span> a case base. 
An approach is <span class="hlt">applied</span> in which a decision is analysed and formed for each separate local area, with subsequent formation of a common topology map.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=Example+AND+methodological+AND+research&pg=4&id=EJ1090654','ERIC'); return false;" href="https://eric.ed.gov/?q=Example+AND+methodological+AND+research&pg=4&id=EJ1090654"><span>Building "<span class="hlt">Applied</span> Linguistic Historiography": Rationale, Scope, and <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Smith, Richard</p> <p>2016-01-01</p> <p>In this article I argue for the establishment of "<span class="hlt">Applied</span> Linguistic Historiography" (ALH), that is, a new domain of enquiry within <span class="hlt">applied</span> linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. 
Considering issues of rationale, scope, and <span class="hlt">methods</span> in turn, I provide reasons why ALH is needed and…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=mm&pg=5&id=EJ942900','ERIC'); return false;" href="https://eric.ed.gov/?q=mm&pg=5&id=EJ942900"><span>Single-Case Designs and Qualitative <span class="hlt">Methods</span>: <span class="hlt">Applying</span> a Mixed <span class="hlt">Methods</span> Research Perspective</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith</p> <p>2010-01-01</p> <p>The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative <span class="hlt">methods</span>, hereafter referred to as a single-case mixed <span class="hlt">methods</span> design (SCD-MM). Minimal attention has been given to the topic of <span class="hlt">applying</span> qualitative <span class="hlt">methods</span> to SCD work in the literature. These two…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AIPC.1790o0004S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AIPC.1790o0004S"><span><span class="hlt">Applying</span> an analytical <span class="hlt">method</span> to study neutron behavior for dosimetry</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shirazi, S. A. Mousavi</p> <p>2016-12-01</p> <p>In this investigation, a new dosimetry process is studied by <span class="hlt">applying</span> an analytical <span class="hlt">method</span>. This novel process is associated with human liver tissue, which has a composition including water, glycogen, and other compounds. 
In this study, the organic compound materials of the liver are decomposed into their constituent elements based upon the mass percentage and density of every element. The absorbed doses are computed by the analytical <span class="hlt">method</span> for all constituent elements of liver tissue. This analytical <span class="hlt">method</span> is introduced by <span class="hlt">applying</span> mathematical equations based on neutron behavior and neutron collision rules. The results show that the absorbed doses converge for neutron energies below 15 MeV. This <span class="hlt">method</span> can be <span class="hlt">applied</span> to study the interaction of neutrons in other tissues and to estimate the absorbed dose for a wide range of neutron energies.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28390015','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28390015"><span>Approaches towards the enhanced production of Rapamycin by Streptomyces hygroscopicus MTCC 4003 through mutagenesis and optimization of process parameters by <span class="hlt">Taguchi</span> orthogonal array methodology.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dutta, Subhasish; Basak, Bikram; Bhunia, Biswanath; Sinha, Ankan; Dey, Apurba</p> <p>2017-05-01</p> <p>The present research was conducted to define the approaches for enhanced production of rapamycin (Rap) by Streptomyces hygroscopicus microbial type culture collection (MTCC) 4003. Both physical mutagenesis by ultraviolet ray (UV) and chemical mutagenesis by N-methyl-N-nitro-N-nitrosoguanidine (NTG) have been <span class="hlt">applied</span> successfully for the improvement of Rap production. Enhancing Rap yield by a novel sequential UV mutagenesis technique followed by fermentation makes a significant difference in obtaining an economically scalable amount of this industrially important macrolide compound. 
The mutant obtained through NTG mutagenesis (NTG-30-27) was found to be superior to the others, as it initially produced 67% more Rap than the wild type. Statistical optimization of nutritional and physiochemical parameters was carried out to identify the factors most responsible for the enhanced Rap yield of NTG-30-27, using the <span class="hlt">Taguchi</span> orthogonal array approach. Around 72% enhanced production was achieved with the nutritional factors at their assigned levels at 23 °C, 120 rpm and pH 7.6. Results were analysed in triplicate, with validation and purification carried out using high performance liquid chromatography. The stability and potency of the extracted Rap were supported by a turbidimetric assay with Candida albicans MTCC 227 as the test organism.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPCS..110..409K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPCS..110..409K"><span>Tribological behaviour predictions of r-GO reinforced Mg composite using ANN coupled <span class="hlt">Taguchi</span> approach</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kavimani, V.; Prakash, K. Soorya</p> <p>2017-11-01</p> <p>This paper deals with the fabrication of reduced graphene oxide (r-GO) reinforced Magnesium Metal Matrix Composite (MMC) through a novel solvent based powder metallurgy route. Investigations of the basic and functional properties of the developed MMC reveal that the addition of r-GO improves the microhardness up to 64 HV, although a decrease in specific wear rate is also noted. Visualization of the worn surfaces through SEM images clearly shows the occurrence of plastic deformation and the presence of wear debris caused by ploughing. 
A <span class="hlt">Taguchi</span> coupled Artificial Neural Network (ANN) technique is adopted to arrive at optimal values of the input parameters (load, reinforcement weight percentage, sliding distance and sliding velocity) and thereby minimise the target output, the specific wear rate. ANOVA of the influence of each input parameter on specific wear rate reveals that the load acting on the pin has the greatest influence (38.85%), followed by r-GO wt.% (25.82%). The ANN model developed to predict the specific wear rate from variations in the input parameters offers better predictability (R-value of 98.4%) than the regression model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ApSS..422..787M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ApSS..422..787M"><span>Fabrication of flower-like micro/nano dual scale structured copper oxide surfaces: Optimization of self-cleaning properties via <span class="hlt">Taguchi</span> design</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Moosavi, Saeideh Sadat; Norouzbeigi, Reza; Velayi, Elmira</p> <p>2017-11-01</p> <p>In the present work, a copper oxide superhydrophobic surface is fabricated on a copper foil via the chemical bath deposition (CBD) <span class="hlt">method</span>. The effects of influential factors such as the initial concentrations of Cu (II) ions and the surface energy modifier, the solution pH, and the reaction and modification step times on the wettability of the copper oxide surface were evaluated using a <span class="hlt">Taguchi</span> L16 experimental design. Results showed that the initial concentration of Cu (II) has the most significant impact on the water contact angle and wettability characteristics. The XRD, SEM, AFM and FTIR analyses were used to characterize the copper oxide surfaces. 
The water contact angle (WCA) and contact angle hysteresis (CAH) were also measured. The SEM results indicated the formation of a flower-like micro/nano dual-scale structure of copper oxide on the substrate. This structure is composed of numerous nano-petals with a thickness of about 50 nm. As a result, a copper oxide hierarchical surface with a WCA of 168.4° ± 3.5° and a CAH of 2.73° exhibited the best superhydrophobicity under the proposed optimum conditions. This result was obtained with just a 10 min hydrolysis reaction. In addition, this surface showed good stability under acidic and saline conditions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1997AIPC..387.1151Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1997AIPC..387.1151Y"><span>Aircraft operability <span class="hlt">methods</span> <span class="hlt">applied</span> to space launch vehicles</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Young, Douglas</p> <p>1997-01-01</p> <p>The commercial space launch market requirement for low vehicle operations costs necessitates the application of <span class="hlt">methods</span> and technologies developed and proven for complex aircraft systems. The ``building in'' of reliability and maintainability, which is <span class="hlt">applied</span> extensively in the aircraft industry, has yet to be <span class="hlt">applied</span> to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design <span class="hlt">methods</span> derived from aircraft applications supports the goal of achieving low cost launch vehicle operations. Transforming these operability techniques to space applications, where diagnostic effectiveness has significantly different metrics, is critical to the success of future launch systems. 
These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these <span class="hlt">methods</span> will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5054730','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5054730"><span>The crowding factor <span class="hlt">method</span> <span class="hlt">applied</span> to parafoveal vision</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ghahghaei, Saeideh; Walker, Laura</p> <p>2016-01-01</p> <p>Crowding increases with eccentricity and is most readily observed in the periphery. During natural, active vision, however, central vision plays an important role. Measures of critical distance to estimate crowding are difficult in central vision, as these distances are small. Any overlap of flankers with the target may create an overlay masking confound. The crowding factor <span class="hlt">method</span> avoids this issue by simultaneously modulating target size and flanker distance and using a ratio to compare crowded to uncrowded conditions. This <span class="hlt">method</span> was developed and <span class="hlt">applied</span> in the periphery (Petrov & Meleshkevich, 2011b). In this work, we <span class="hlt">apply</span> the <span class="hlt">method</span> to characterize crowding in parafoveal vision (<3.5 visual degrees) with spatial uncertainty. We find that eccentricity and hemifield have less impact on crowding than in the periphery, yet radial/tangential asymmetries are clearly preserved. There are considerable idiosyncratic differences observed between participants. 
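The ratio at the heart of the crowding factor method described above can be sketched very simply; the threshold values here are hypothetical illustrations, not data from the study:

```python
# Sketch: the "crowding factor" as a ratio of thresholds measured with and
# without flankers, under co-modulation of target size and flanker distance.
# Threshold values are hypothetical (e.g. size thresholds in degrees of
# visual angle at one parafoveal location).

def crowding_factor(threshold_crowded, threshold_uncrowded):
    """A value near 1 means no crowding; values above 1 mean flankers impaired recognition."""
    return threshold_crowded / threshold_uncrowded

print(crowding_factor(0.9, 0.3))  # hypothetical: flankers roughly tripled the threshold
```

Because both conditions are measured with the same co-modulated stimulus geometry, the ratio cancels out overall acuity differences between observers and locations.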
The crowding factor <span class="hlt">method</span> provides a powerful tool for examining crowding in central and peripheral vision, which will be useful in future studies that seek to understand visual processing under natural, active viewing conditions. PMID:27690170</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017OptLT..89..214M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017OptLT..89..214M"><span>Determination of laser cutting process conditions using the preference selection index <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Madić, Miloš; Antucheviciene, Jurgita; Radovanović, Miroslav; Petković, Dušan</p> <p>2017-03-01</p> <p>Determination of adequate parameter settings for simultaneous improvement of multiple quality and productivity characteristics is of great practical importance in laser cutting. This paper discusses the application of the preference selection index (PSI) <span class="hlt">method</span> for discrete optimization of the CO2 laser cutting of stainless steel. The main motivation for application of the PSI <span class="hlt">method</span> is that it represents an almost unexplored multi-criteria decision making (MCDM) <span class="hlt">method</span>, and moreover, this <span class="hlt">method</span> does not require assessment of the relative significances of the considered criteria. After reviewing and comparing the existing approaches for determination of laser cutting parameter settings, the application of the PSI <span class="hlt">method</span> is explained in detail. The experiments were conducted using <span class="hlt">Taguchi</span>'s L27 orthogonal array. Roughness of the cut surface, heat affected zone (HAZ), kerf width and material removal rate (MRR) were considered as optimization criteria. 
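A sketch of the PSI calculation itself, under its standard formulation (normalize the decision matrix, derive criterion weights from the preference variation, then score each alternative). The decision matrix below is hypothetical, not the paper's experimental data:

```python
# Sketch of the preference selection index (PSI) method with a hypothetical
# decision matrix for laser-cutting settings. Criteria marked True are
# "larger is better" (e.g. MRR); False are "smaller is better" (e.g.
# roughness, HAZ). No user-supplied criterion weights are needed.

def psi_rank(matrix, beneficial):
    n, m = len(matrix), len(matrix[0])
    cols = list(zip(*matrix))
    # 1. normalize each criterion to [0, 1]
    R = [[(matrix[i][j] / max(cols[j])) if beneficial[j]
          else (min(cols[j]) / matrix[i][j]) for j in range(m)]
         for i in range(n)]
    # 2. preference variation per criterion (spread around the column mean)
    means = [sum(R[i][j] for i in range(n)) / n for j in range(m)]
    pv = [sum((R[i][j] - means[j]) ** 2 for i in range(n)) for j in range(m)]
    # 3. criterion weights derived from the deviations
    phi = [1 - p for p in pv]
    weights = [f / sum(phi) for f in phi]
    # 4. overall preference index per alternative (higher is better)
    return [sum(R[i][j] * weights[j] for j in range(m)) for i in range(n)]

# hypothetical alternatives x criteria: [roughness um, HAZ um, MRR mm3/min]
matrix = [[2.1, 110.0, 30.0],
          [1.6, 140.0, 24.0],
          [2.8,  95.0, 36.0]]
theta = psi_rank(matrix, beneficial=[False, False, True])
best = max(range(len(theta)), key=lambda i: theta[i])
print("PSI values:", theta, "-> best alternative:", best)
```

Because the weights come from the data's own variation, criteria on which the alternatives barely differ contribute little to the ranking, which is exactly the property the abstract highlights.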
The proposed methodology is found to be very useful in a real manufacturing environment since it involves simple calculations which are easy to understand and implement. However, while <span class="hlt">applying</span> the PSI <span class="hlt">method</span> it was observed that it cannot be useful in situations where a large number of alternatives have attribute values (performances) very close to those which are preferred.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3589721','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3589721"><span>Laccase production by Coriolopsis caperata RCK2011: Optimization under 
solid state fermentation by <span class="hlt">Taguchi</span> DOE methodology</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Nandal, Preeti; Ravella, Sreenivas Rao; Kuhad, Ramesh Chander</p> <p>2013-01-01</p> <p>Laccase production by Coriolopsis caperata RCK2011 under solid state fermentation was optimized following <span class="hlt">Taguchi</span> design of experiments. An orthogonal array layout of L18 (2¹ × 3⁷) was constructed using Qualitek-4 software with the eight factors most influential on laccase production. At the individual level pH contributed the highest influence, whereas corn steep liquor (CSL) accounted for more than 50% of the severity index, with biotin and KH2PO4, at the interactive level. The optimum conditions derived were: temperature 30°C, pH 5.0, wheat bran 5.0 g, inoculum size 0.5 ml (fungal cell mass = 0.015 g dry wt.), biotin 0.5% w/v, KH2PO4 0.013% w/v, CSL 0.1% v/v and 0.5 mM xylidine as an inducer. The validation experiments using the optimized conditions confirmed an improvement in enzyme production of 58.01%. The laccase production level of 1623.55 U gds⁻¹ indicates that the fungus C. caperata RCK2011 has commercial potential for laccase production. PMID:23463372</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3960012','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3960012"><span><span class="hlt">Applying</span> Quantitative Genetic <span class="hlt">Methods</span> to Primate Social Behavior</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Brent, Lauren J. 
N.</p> <p>2013-01-01</p> <p>Increasingly, behavioral ecologists have <span class="hlt">applied</span> quantitative genetic <span class="hlt">methods</span> to investigate the evolution of behaviors in wild animal populations. The promise of quantitative genetics in unmanaged populations opens the door for simultaneous analysis of inheritance, phenotypic plasticity, and patterns of selection on behavioral phenotypes all within the same study. In this article, we describe how quantitative genetic techniques provide studies of the evolution of behavior with information that is unique and valuable. We outline technical obstacles for <span class="hlt">applying</span> quantitative genetic techniques that are of particular relevance to studies of behavior in primates, especially those living in noncaptive populations (e.g., the need for pedigree information and the handling of non-Gaussian phenotypes), and demonstrate how many of these barriers are now surmountable. We illustrate this by <span class="hlt">applying</span> recent quantitative genetic <span class="hlt">methods</span> to spatial proximity data, a simple and widely collected primate social behavior, from adult rhesus macaques on Cayo Santiago. Our analysis shows that proximity measures are consistent across repeated measurements on individuals (repeatable) and that kin have similar mean measurements (heritable). Quantitative genetics may hold lessons of considerable importance for studies of primate behavior, even those without a specific genetic focus. 
PMID:24659839</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/4284353','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/biblio/4284353"><span>PLURAL METALLIC COATINGS ON URANIUM AND <span class="hlt">METHOD</span> OF <span class="hlt">APPLYING</span> SAME</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Gray, A.G.</p> <p>1958-09-16</p> <p>A <span class="hlt">method</span> is described of <span class="hlt">applying</span> protective coatings to uranium articles. It consists of <span class="hlt">applying</span> chromium plating to such uranium articles by electrolysis in a chromic acid bath and subsequently <span class="hlt">applying</span>, to this chromium plate, an aluminum-containing alloy. This aluminum-containing alloy (for example, one of aluminum and silicon) may then be used as a bonding alloy between the chromized surface and an aluminum can.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1178736','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1178736"><span>Near-infrared radiation curable multilayer coating systems and <span class="hlt">methods</span> for <span class="hlt">applying</span> same</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Bowman, Mark P; Verdun, Shelley D; Post, Gordon L</p> <p>2015-04-28</p> <p>Multilayer coating systems, <span class="hlt">methods</span> of <span class="hlt">applying</span> and related substrates are disclosed. The coating system may comprise a first coating comprising a near-IR absorber, and a second coating deposited on at least a portion of the first coating. 
<span class="hlt">Methods</span> of <span class="hlt">applying</span> a multilayer coating composition to a substrate may comprise <span class="hlt">applying</span> a first coating comprising a near-IR absorber, <span class="hlt">applying</span> a second coating over at least a portion of the first coating and curing the coating with near infrared radiation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16164750','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16164750"><span>Cluster detection <span class="hlt">methods</span> <span class="hlt">applied</span> to the Upper Cape Cod cancer data.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ozonoff, Al; Webster, Thomas; Vieira, Veronica; Weinberg, Janice; Ozonoff, David; Aschengrau, Ann</p> <p>2005-09-15</p> <p>A variety of statistical <span class="hlt">methods</span> have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different <span class="hlt">methods</span>. Most of the available comparative studies rely on simulated data rather than real data sets. We have chosen three <span class="hlt">methods</span> currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) <span class="hlt">method</span> as <span class="hlt">applied</span> by Webster; and Kulldorff's spatial scan statistic. We <span class="hlt">apply</span> these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. The three different latency assumptions produced three different spatial patterns of cases and controls. For 20 year latency, all three <span class="hlt">methods</span> generally concur. 
However, for 15 year latency and no latency assumptions, the <span class="hlt">methods</span> produce different results when testing for global clustering. The comparative analyses of real data sets by different statistical <span class="hlt">methods</span> provide insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical <span class="hlt">methods</span> <span class="hlt">applied</span> to epidemiological data with a spatial component.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29594783','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29594783"><span>Enhanced Molecular Dynamics <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Drug Design Projects.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ziada, Sonia; Braka, Abdennour; Diharce, Julien; Aci-Sèche, Samia; Bonnet, Pascal</p> <p>2018-01-01</p> <p>Nobel Laureate Richard P. Feynman stated: "[…] everything that living things do can be understood in terms of jiggling and wiggling of atoms […]." The importance of computer simulations of macromolecules, which use classical mechanics principles to describe atom behavior, is widely acknowledged and nowadays they are <span class="hlt">applied</span> in many fields such as materials science and drug discovery. With the increase of computing power, molecular dynamics simulations can be <span class="hlt">applied</span> to understand biological mechanisms at realistic timescales. 
In this chapter, we share our computational experience, providing a global view of two widely used enhanced molecular dynamics <span class="hlt">methods</span> for studying protein structure and dynamics through a description of their characteristics and limits, and we provide some examples of their applications in drug design. We also discuss the appropriate choice of software and hardware. In a detailed practical procedure, we describe how to set up, run, and analyze two main molecular dynamics <span class="hlt">methods</span>, the umbrella sampling (US) and the accelerated molecular dynamics (aMD) <span class="hlt">methods</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29045443','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29045443"><span>A Lagrangian meshfree <span class="hlt">method</span> <span class="hlt">applied</span> to linear and nonlinear elasticity.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Walker, Wade A</p> <p>2017-01-01</p> <p>The repeated replacement <span class="hlt">method</span> (RRM) is a Lagrangian meshfree <span class="hlt">method</span> which we have previously <span class="hlt">applied</span> to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we <span class="hlt">apply</span> the enhanced <span class="hlt">method</span> to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical <span class="hlt">methods</span> such as numerical derivatives, equation system solvers, or Riemann solvers. 
We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other <span class="hlt">methods</span> to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5646830','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5646830"><span>A Lagrangian meshfree <span class="hlt">method</span> <span class="hlt">applied</span> to linear and nonlinear elasticity</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2017-01-01</p> <p>The repeated replacement <span class="hlt">method</span> (RRM) is a Lagrangian meshfree <span class="hlt">method</span> which we have previously <span class="hlt">applied</span> to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we <span class="hlt">apply</span> the enhanced <span class="hlt">method</span> to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical <span class="hlt">methods</span> such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other <span class="hlt">methods</span> to highlight its strengths and weaknesses. 
And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code. PMID:29045443</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28416879','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28416879"><span>Supercritical CO2 extraction of candlenut oil: process optimization using <span class="hlt">Taguchi</span> orthogonal array and physicochemical properties of the oil.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Subroto, Erna; Widjojokusumo, Edward; Veriansyah, Bambang; Tjandrawinata, Raymond R</p> <p>2017-04-01</p> <p>A series of experiments was conducted to determine optimum conditions for supercritical carbon dioxide extraction of candlenut oil. A <span class="hlt">Taguchi</span> experimental design with L 9 orthogonal array (four factors in three levels) was employed to evaluate the effects of pressure of 25-35 MPa, temperature of 40-60 °C, CO 2 flow rate of 10-20 g/min and particle size of 0.3-0.8 mm on oil solubility. The obtained results showed that increase in particle size, pressure and temperature improved the oil solubility. The supercritical carbon dioxide extraction at optimized parameters resulted in oil yield extraction of 61.4% at solubility of 9.6 g oil/kg CO 2 . The obtained candlenut oil from supercritical carbon dioxide extraction has better oil quality than oil which was extracted by Soxhlet extraction using n-hexane. 
The oil is high in unsaturated fatty acids (linoleic acid and linolenic acid), which have many beneficial effects on human health.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016IJTJE..33..275R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016IJTJE..33..275R"><span><span class="hlt">Taguchi</span> Based Regression Analysis of End-Wall Film Cooling in a Gas Turbine Cascade with Single Row of Holes</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ravi, D.; Parammasivam, K. M.</p> <p>2016-09-01</p> <p>Numerical investigations were conducted on a turbine cascade, with end-wall cooling by a single row of cylindrical holes, inclined at 30°. The mainstream fluid was hot air and the coolant was CO2 gas. Based on the Reynolds number, the flow was turbulent at the inlet. The film hole row position, its pitch and the blowing ratio were each varied over five different values. The <span class="hlt">Taguchi</span> approach was used in designing an L25 orthogonal array (OA) for these parameters. The end-wall averaged film cooling effectiveness (bar η) was chosen as the quality characteristic. CFD analyses were carried out using Ansys Fluent on computational domains designed with inputs from the OA. Experiments were conducted for one chosen OA configuration and the computational results were found to correlate well with experimental measurements.
The responses from the CFD analyses were fed to the statistical tool to develop a correlation for bar η using regression analysis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20806254','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20806254"><span>Optimization of laccase production by Pleurotus ostreatus IMI 395545 using the <span class="hlt">Taguchi</span> DOE methodology.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Periasamy, Rathinasamy; Palvannan, Thayumanavan</p> <p>2010-12-01</p> <p>Production of laccase using a submerged culture of Pleurotus ostreatus IMI 395545 was optimized by the <span class="hlt">Taguchi</span> orthogonal array (OA) design of experiments (DOE) methodology. This approach facilitates the study of the interactions of a large number of variables spanned by factors and their settings, with a small number of experiments, leading to considerable savings in time and cost for process optimization. This methodology optimizes the number of impact factors and enables their interactions in the production of industrial enzymes to be calculated. Eight factors, viz. glucose, yeast extract, malt extract, inoculum, mineral solution, inducer (1 mM CuSO₄) and amino acid (l-asparagine) at three levels and pH at two levels, with an OA layout of L18 (2¹ × 3⁷) were selected for the proposed experimental design. The laccase yield obtained from the 18 sets of fermentation experiments performed with the selected factors and levels was further processed with Qualitek-4 software. The optimized conditions showed an enhanced laccase expression of 86.8% (from 485.0 to 906.3 U). The combination of factors was further validated for laccase production and reactive blue 221 decolorization. The results revealed an enhanced laccase yield of 32.6% and dye decolorization up to 84.6%.
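Several of the records above select optimum factor levels from an orthogonal-array experiment via a signal-to-noise (S/N) ratio. As a minimal, generic sketch of the larger-is-better statistic (the replicate responses below are invented for illustration, not taken from any of these papers):

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-is-better S/N ratio: -10 * log10(mean(1 / y^2))."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in values) / len(values))

# Invented replicate responses (e.g. enzyme yield in U) at two factor settings
setting_a = [485.0, 470.0, 500.0]
setting_b = [906.3, 890.0, 920.0]

sn_a = sn_larger_is_better(setting_a)
sn_b = sn_larger_is_better(setting_b)

# The factor setting with the higher S/N ratio is preferred; here setting_b wins.
print(f"S/N A = {sn_a:.2f} dB, S/N B = {sn_b:.2f} dB")
```

For a smaller-is-better response (e.g. surface roughness), the analogous statistic is -10 * log10(mean(y^2)); the level with the highest S/N is chosen in either case.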
This methodology allows the complete evaluation of main and interaction factors. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29291582','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29291582"><span>Bioremediation of chlorpyrifos contaminated soil by two phase bioslurry reactor: Processes evaluation and optimization by <span class="hlt">Taguchi</span>'s design of experimental (DOE) methodology.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pant, Apourv; Rai, J P N</p> <p>2018-04-15</p> <p>A two-phase bioreactor was constructed, designed and developed to evaluate chlorpyrifos remediation. Six biotic and abiotic factors (substrate-loading rate, slurry phase pH, slurry phase dissolved oxygen (DO), soil water ratio, temperature and soil microflora load) were evaluated by a design of experiments (DOE) methodology employing <span class="hlt">Taguchi</span>'s orthogonal array (OA). The selected six factors were considered at two levels in an L-8 array (2^7, 15 experiments) in the experimental design. The optimum operating conditions obtained from the methodology enhanced chlorpyrifos degradation from 283.86 µg/g to 955.364 µg/g, an overall enhancement of 70.34%. In the present study, with the help of a few well-defined experimental parameters, a mathematical model was constructed to understand the complex bioremediation process and optimize the approximate parameters to great accuracy. Copyright © 2017 Elsevier Inc.
All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.H13A1480H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.H13A1480H"><span>Advancing MODFLOW <span class="hlt">Applying</span> the Derived Vector Space <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.</p> <p>2015-12-01</p> <p>The most effective domain decomposition <span class="hlt">methods</span> (DDM) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization <span class="hlt">method</span> that uses a non-overlapping system of nodes (the derived-nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e. the solution of global problems is obtained by resolution of local problems exclusively) has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization <span class="hlt">method</span> is available and then software with a high degree of parallelization can be constructed. In a parallel talk, in this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary values problems of interest in Geophysics. Numerical examples for a single-equation, for the cases of symmetric, non-symmetric and indefinite problems were demonstrated before [1,2]. For these problems DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. 
In view of these results, our research group is <span class="hlt">applying</span> the DVS <span class="hlt">method</span> to a widely used simulator for the first time; here we present progress in <span class="hlt">applying</span> this <span class="hlt">method</span> to parallelize MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina. Non overlapping discretization <span class="hlt">methods</span> for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras Iván "An Innovative Tool for Effectively <span class="hlt">Applying</span> Highly Parallelized Software To Problems of Elasticity". Geofísica Internacional, 2015 (In press)</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12861612','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12861612"><span>Analytical <span class="hlt">method</span> for promoting process capability of shock absorption steel.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sung, Wen-Pei; Shih, Ming-Hsiang; Chen, Kuen-Suan</p> <p>2003-01-01</p> <p>Mechanical properties and low cycle fatigue are two factors that must be considered in developing a new type of steel for shock absorption. Process capability and process control are significant factors in achieving the purpose of research and development programs. Often-used evaluation <span class="hlt">methods</span> fail to measure process yield and process centering; so this paper uses the <span class="hlt">Taguchi</span> loss function as a basis to establish an evaluation <span class="hlt">method</span> and the steps for assessing the quality of mechanical properties and process control of an iron and steel manufacturer.
The establishment of this <span class="hlt">method</span> can serve the research, development, and manufacturing industries and lay a foundation for enhancing process control, making it possible to select manufacturing processes that are more reliable than those chosen by the other commonly used decision-making <span class="hlt">methods</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/11686277','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/11686277"><span>Linear algebraic <span class="hlt">methods</span> <span class="hlt">applied</span> to intensity modulated radiation therapy.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Crooks, S M; Xing, L</p> <p>2001-10-01</p> <p><span class="hlt">Methods</span> of linear algebra are <span class="hlt">applied</span> to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as <span class="hlt">applied</span> to IMRT is examined.
Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA098895','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA098895"><span>An Error Analysis for the Finite Element <span class="hlt">Method</span> <span class="hlt">Applied</span> to Convection Diffusion Problems.</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1981-03-01</p> <p>Technical Note BN-962: An Error Analysis for the Finite Element <span class="hlt">Method</span> <span class="hlt">Applied</span> to Convection Diffusion Problems, by I. Babuška and W. G. Szymczak, March 1981. Institute for Physical Science and Technology, University of Maryland, College Park (accession AD-A098 895).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JIEIC.tmp...12S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JIEIC.tmp...12S"><span>Investigation and <span class="hlt">Taguchi</span> Optimization of Microbial Fuel Cell Salt Bridge Dimensional Parameters</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sarma, Dhrupad; Barua, Parimal Bakul; Dey, Nabendu; Nath, Sumitro; Thakuria, Mrinmay; Mallick, Synthia</p> <p>2018-01-01</p> <p>One major problem of two chamber salt bridge microbial fuel cells (MFCs) is the high resistance offered by the salt bridge to anion flow.
Many researchers who have studied and optimized various parameters related to salt bridge MFCs have not shed much light on the effect of salt bridge dimensional parameters on MFC performance. Therefore, the main objective of this research is to investigate the effect of the length and cross sectional area of the salt bridge, and the effect of solar radiation and atmospheric temperature, on MFC current output. An experiment has been designed using a <span class="hlt">Taguchi</span> L9 orthogonal array, taking the length and cross sectional area of the salt bridge as factors having three levels. Nine MFCs were fabricated as per the nine trial conditions. Trials were conducted for 3 days, and the output current of each of the MFCs, along with solar insolation and atmospheric temperature, was recorded. Analysis of variance shows that salt bridge length has a significant effect both on the mean (with 53.90% contribution at 95% CL) and the variance (with 56.46% contribution at 87% CL), whereas the effect of the cross sectional area of the salt bridge and the interaction of these two factors is significant on the mean only (with 95% CL). The optimum combination was found at 260 mm salt bridge length and 506.7 mm2 cross sectional area, with 4.75 mA of mean output current.
The temperature and solar insolation data, when correlated with each MFC's average output current, revealed that both external factors have a significant impact on MFC current output, but the correlation coefficient varies from MFC to MFC depending on the salt bridge dimensional parameters.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..330a2117L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..330a2117L"><span>Analysis of concrete beams using <span class="hlt">applied</span> element <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen</p> <p>2018-03-01</p> <p>The <span class="hlt">Applied</span> Element <span class="hlt">Method</span> (AEM) is a displacement based <span class="hlt">method</span> of structural analysis. Some of its features are similar to those of the Finite Element <span class="hlt">Method</span> (FEM). In AEM, the structure is analysed by dividing it into several elements, similar to FEM. But in AEM, elements are connected by springs instead of nodes as in the case of FEM. In this paper, the background to AEM is discussed and the necessary equations are derived. To illustrate the application of AEM, it has been used to analyse a plain concrete beam with fixed support conditions. The analysis is limited to 2-dimensional structures. It was found that the number of springs has little influence on the results.
AEM could predict deflection and reactions with a reasonable degree of accuracy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4385678','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4385678"><span>Accurate Simulation of MPPT <span class="hlt">Methods</span> Performance When <span class="hlt">Applied</span> to Commercial Photovoltaic Panels</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2015-01-01</p> <p>A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The <span class="hlt">method</span> takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT <span class="hlt">methods</span> comparison or their performance prediction when <span class="hlt">applied</span> to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT <span class="hlt">methods</span> <span class="hlt">applied</span> to a commercial solar panel, within a day, and under realistic ambient conditions.
PMID:25874262</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25874262','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25874262"><span>Accurate simulation of MPPT <span class="hlt">methods</span> performance when <span class="hlt">applied</span> to commercial photovoltaic panels.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel</p> <p>2015-01-01</p> <p>A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The <span class="hlt">method</span> takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT <span class="hlt">methods</span> comparison or their performance prediction when <span class="hlt">applied</span> to a particular solar panel. 
The feasibility of the described methodology is checked with four different MPPT <span class="hlt">methods</span> <span class="hlt">applied</span> to a commercial solar panel, within a day, and under realistic ambient conditions.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_10 --> <div id="page_11" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="201"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930004191','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930004191"><span>Weight optimization of an aerobrake structural concept for a lunar transfer vehicle</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bush, Lance B.; Unal, Resit; Rowell, Lawrence F.; Rehder, John J.</p> <p>1992-01-01</p> <p>An aerobrake structural concept for a lunar transfer vehicle was
weight optimized through the use of the <span class="hlt">Taguchi</span> design <span class="hlt">method</span>, finite element analyses, and element sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration. The design parameters included honeycomb core thickness, diameter-depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The minimum-weight aerobrake structural configuration weighed 44 percent less than the average of all the remaining satisfactory experimental configurations. In addition, the results of this study have served to bolster the advocacy of the <span class="hlt">Taguchi</span> <span class="hlt">method</span> for aerospace vehicle design. Both reduced analysis time and an optimized design demonstrated the applicability of the <span class="hlt">Taguchi</span> <span class="hlt">method</span> to aerospace vehicle design.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28431349','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28431349"><span>Impacts of environmental factors on arsenate biotransformation and release in Microcystis aeruginosa using the <span class="hlt">Taguchi</span> experimental design approach.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Zhenhong; Luo, Zhuanxi; Yan, Changzhou; Xing, Baoshan</p> <p>2017-07-01</p> <p>Very limited information is available on how and to what extent environmental factors influence arsenic (As) biotransformation and release in freshwater algae. These factors include concentrations of arsenate (As(V)), dissolved inorganic nitrogen (N), phosphate (P), and ambient pH.
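Several records above assign three-level factors to a standard Taguchi orthogonal array. A minimal sketch of the L9(3^4) array and a factor-to-column mapping (the factor names and level values are illustrative assumptions, loosely echoing the ranges quoted in these records, not any one study's design):

```python
# Standard Taguchi L9(3^4) orthogonal array: 9 runs, up to 4 factors at
# 3 levels each. Every pair of columns contains each of the 9 possible
# level combinations exactly once (the balance property).
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Hypothetical factors and level values (illustrative only)
factors = {
    "pressure_MPa": (25, 30, 35),
    "temperature_C": (40, 50, 60),
    "flow_g_per_min": (10, 15, 20),
    "particle_size_mm": (0.3, 0.55, 0.8),
}

# Map each array row to a concrete trial: column c picks the level
# index for factor c.
names = list(factors)
trials = [{names[c]: factors[names[c]][row[c]] for c in range(4)} for row in L9]

for i, trial in enumerate(trials, 1):
    print(f"run {i}: {trial}")
```

Nine runs cover four three-level factors; a full factorial would need 3^4 = 81 runs, which is the economy these abstracts refer to.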
This study conducted a series of experiments using <span class="hlt">Taguchi</span> <span class="hlt">methods</span> to determine optimum conditions for As biotransformation. We assessed principal effective factors of As(V), N, P, and pH and determined that As biotransformation and release actuate at 10.0 μM As(V) in dead alga cells, the As efflux ratio and organic As efflux content actuate at 1.0 mg/L P, algal growth and intracellular arsenite (As(III)) content actuate at 10.0 mg/L N, and the total sum of As(III) efflux from dead alga cells actuates at a pH level of 10. Moreover, N is the critical component for As(V) biotransformation in M. aeruginosa, specifically for As(III) transformation, because N can accelerate algal growth, subsequently improving As(III) accumulation and its efflux, which results in an As(V) to As(III) reduction. Furthermore, low P concentrations in combination with high N concentrations promote As accumulation. Following As(V), P was the primary impacting factor for As accumulation. In addition, small amounts of As accumulation under low concentrations of As and high P were securely stored in living algal cells and were easily released after cell death. Results from this study will help to assess practical applications and the overall control of key environmental factors, particularly those associated with algal bioremediation in As polluted water. Copyright © 2017 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..319a2035H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..319a2035H"><span>Optimization of Surface Roughness and Wall Thickness in Dieless Incremental Forming Of Aluminum Sheet Using <span class="hlt">Taguchi</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir</p> <p>2018-03-01</p> <p>Incremental sheet forming is a versatile sheet metal forming process where a sheet metal is formed into its final shape by a series of localized deformation without a specialised die. However, it still has many shortcomings that need to be overcome such as geometric accuracy, surface roughness, formability, forming speed, and so on. This project focus on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. Besides, the effect of wall angle, feed rate, and step size to the surface roughness and thickness uniformity of aluminium sheet was investigated in this project. From the results, it was observed that surface roughness and thickness uniformity were inversely varied due to the formation of surface waviness. Increase in feed rate and decrease in step size will produce a lower surface roughness, while uniform thickness reduction was obtained by reducing the wall angle and step size. By using <span class="hlt">Taguchi</span> analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of aluminium sheet were determined. 
The findings of this project help to reduce the time needed to optimise surface roughness and thickness uniformity in incremental sheet forming.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25373790','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25373790"><span>Experimental design <span class="hlt">methods</span> for bioengineering applications.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri</p> <p>2016-01-01</p> <p>Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design <span class="hlt">methods</span> that can be used to investigate various factors relating to bioengineering processes. The experimental <span class="hlt">methods</span> generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, <span class="hlt">Taguchi</span> design, Box-Behnken design and central composite design.
These design <span class="hlt">methods</span> are briefly introduced, and then the application of these design <span class="hlt">methods</span> to study different bioengineering processes is analyzed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19780007837','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19780007837"><span>The <span class="hlt">method</span> of averages <span class="hlt">applied</span> to the KS differential equations</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Graf, O. F., Jr.; Mueller, A. C.; Starke, S. E.</p> <p>1977-01-01</p> <p>A new approach for the solution of artificial satellite trajectory problems is proposed. The basic idea is to <span class="hlt">apply</span> an analytical solution <span class="hlt">method</span> (the <span class="hlt">method</span> of averages) to an appropriate formulation of the orbital mechanics equations of motion (the KS-element differential equations). The result is a set of transformed equations of motion that are more amenable to numerical solution.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..295a2011Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..295a2011Z"><span>Experimental Research and Mathematical Modeling of Parameters Effecting on Cutting Force and SurfaceRoughness in CNC Turning Process</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zeqiri, F.; Alkan, M.; Kaya, B.; Toros, S.</p> <p>2018-01-01</p> <p>In this paper, the effects of cutting parameters on cutting forces and surface roughness based on <span class="hlt">Taguchi</span> experimental design <span class="hlt">method</span> are determined. 
A <span class="hlt">Taguchi</span> L9 orthogonal array is used to investigate the effects of machining parameters. Optimal cutting conditions are determined using the signal-to-noise (S/N) ratio, which is calculated from the average surface roughness and cutting force. Using the results of the analysis, the effects of the parameters on both average surface roughness and cutting forces are calculated in Minitab 17 using the ANOVA <span class="hlt">method</span>. The material investigated is Inconel 625 steel for two cases, with and without heat treatment. The predicted values and the measured values are very close to each other. A confirmation test showed that the <span class="hlt">Taguchi</span> <span class="hlt">method</span> was very successful in the optimization of machining parameters for surface roughness and cutting forces in the CNC turning process.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20030003828','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20030003828"><span>Probabilistic <span class="hlt">Methods</span> for Uncertainty Propagation <span class="hlt">Applied</span> to Aircraft Design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.</p> <p>2002-01-01</p> <p>Three <span class="hlt">methods</span> of probabilistic uncertainty propagation and quantification (the <span class="hlt">method</span> of moments, Monte Carlo simulation, and a nongradient simulation search <span class="hlt">method</span>) are <span class="hlt">applied</span> to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular <span class="hlt">methods</span> of uncertainty propagation and quantification.
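The Monte Carlo style of uncertainty propagation mentioned in the aircraft-design record can be sketched with a toy stand-in for the sizing analysis (the weight function, its coefficients, and the input uncertainties below are invented for illustration, not taken from the paper):

```python
import random
import statistics

def vehicle_weight(wing_area_m2, thrust_N):
    """Hypothetical linear stand-in for an aircraft sizing analysis."""
    return 1000.0 + 25.0 * wing_area_m2 + 0.1 * thrust_N

random.seed(42)

# Propagate Gaussian input uncertainty by sampling the inputs and
# re-running the analysis for each sample.
samples = [
    vehicle_weight(random.gauss(30.0, 0.5), random.gauss(5000.0, 100.0))
    for _ in range(20000)
]

mc_mean = statistics.fmean(samples)
mc_sd = statistics.stdev(samples)

# For this linear model the method of moments is exact:
# sd = sqrt((25 * 0.5)**2 + (0.1 * 100)**2) ~= 16.0 kg
print(f"weight ~= {mc_mean:.1f} +/- {mc_sd:.1f}")
```

For linear models the two approaches agree; Monte Carlo earns its cost on the discontinuous design spaces the abstract describes, where moment expansions break down.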
However, specific implementation features of the first and third <span class="hlt">methods</span> chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=quantitative+AND+research&pg=3&id=EJ970705','ERIC'); return false;" href="https://eric.ed.gov/?q=quantitative+AND+research&pg=3&id=EJ970705"><span><span class="hlt">Applying</span> Mixed <span class="hlt">Methods</span> Research at the Synthesis Level: An Overview</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Heyvaert, Mieke; Maes, Bea; Onghena, Patrick</p> <p>2011-01-01</p> <p>Historically, qualitative and quantitative approaches have been <span class="hlt">applied</span> relatively separately in synthesizing qualitative and quantitative evidence, respectively, in several research domains. 
However, mixed <span class="hlt">methods</span> approaches are becoming increasingly popular, and practices of combining qualitative and quantitative research components at…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22751850','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22751850"><span>Neutralization of red mud with pickling waste liquor using <span class="hlt">Taguchi</span>'s design of experimental methodology.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rai, Suchita; Wasewar, Kailas L; Lataye, Dilip H; Mishra, Rajshekhar S; Puttewar, Suresh P; Chaddha, Mukesh J; Mahindiran, P; Mukhopadhyay, Jyoti</p> <p>2012-09-01</p> <p>'Red mud' or 'bauxite residue', a waste generated by alumina refineries, is highly alkaline, with a pH of 10.5-12.5. Red mud poses serious environmental problems such as alkali seepage into groundwater and alkaline dust generation. One of the options to make red mud less hazardous and environmentally benign is its neutralization with acid or an acidic waste. Hence, in the present study, neutralization of alkaline red mud was carried out using a highly acidic waste (pickling waste liquor). Pickling waste liquor is a mixture of strong acids used for descaling or cleaning surfaces in the steel-making industry. The aim of the study was to examine the feasibility of the neutralization process for the two wastes using <span class="hlt">Taguchi</span>'s design of experimental methodology. This would make both wastes less hazardous and safe for disposal. The effects of slurry solids, volume of pickling liquor, stirring time, and temperature on the neutralization process were investigated. The analysis of variance (ANOVA) shows that the volume of the pickling liquor is the most significant parameter, followed by the quantity of red mud, with contributions of 69.18% and 18.48%, respectively.
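The percent contributions quoted for the red mud study (69.18% for pickling-liquor volume, 18.48% for red-mud quantity) follow from the ANOVA identity %C_i = SS_i / SS_total × 100. A sketch with hypothetical sums of squares, chosen only so the resulting percentages match those quoted:

```python
# ANOVA percent contribution: %C_i = SS_i / SS_total * 100.
# The sums of squares below are hypothetical placeholders, picked so the
# percentages reproduce the 69.18% / 18.48% contributions in the abstract.
ss = {
    "pickling liquor volume": 34.59,
    "red mud quantity":        9.24,
    "stirring time":           4.10,
    "temperature":             2.07,
}
ss_total = sum(ss.values())                      # = 50.00 here
contribution = {k: 100.0 * v / ss_total for k, v in ss.items()}

for factor, pct in sorted(contribution.items(), key=lambda kv: -kv[1]):
    print(f"{factor:>24s}: {pct:6.2f} %")
```

The ranking of factors by percent contribution is what identifies pickling-liquor volume as the dominant parameter.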
Under the optimized parameters, a pH value of 7 can be achieved by mixing the two wastes. About 25-30% of the total soda in the red mud is neutralized, and alkalinity is reduced by 80-85%. Mineralogy and morphology of the neutralized red mud have also been studied. The data presented will be useful in view of the environmental concerns surrounding red mud disposal.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19163288','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19163288"><span>The equivalent magnetizing <span class="hlt">method</span> <span class="hlt">applied</span> to the design of gradient coils for MRI.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lopez, Hector Sanchez; Liu, Feng; Crozier, Stuart</p> <p>2008-01-01</p> <p>This paper presents a new <span class="hlt">method</span> for the design of gradient coils for Magnetic Resonance Imaging systems. The <span class="hlt">method</span> is based on the equivalence between a magnetized volume surrounded by a conducting surface and its equivalent representation in surface current/charge density. We demonstrate that the curl of the vertical magnetization induces a surface current density whose stream line defines the coil current pattern. This <span class="hlt">method</span> can be <span class="hlt">applied</span> to coils wound on arbitrarily shaped surfaces. A single layer unshielded transverse gradient coil is designed and compared with designs obtained using two conventional <span class="hlt">methods</span>.
Through the presented example we demonstrate that the unconventional current patterns obtained using the magnetizing current <span class="hlt">method</span> produce superior gradient coil performance compared with coils designed by <span class="hlt">applying</span> conventional <span class="hlt">methods</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=Journal+AND+Applied+AND+Physiology&id=EJ950702','ERIC'); return false;" href="https://eric.ed.gov/?q=Journal+AND+Applied+AND+Physiology&id=EJ950702"><span>An Aural Learning Project: Assimilating Jazz Education <span class="hlt">Methods</span> for Traditional <span class="hlt">Applied</span> Pedagogy</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Gamso, Nancy M.</p> <p>2011-01-01</p> <p>The Aural Learning Project (ALP) was developed to incorporate jazz <span class="hlt">method</span> components into the author's classical practice and her <span class="hlt">applied</span> woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical <span class="hlt">applied</span> curriculum.
The components of the…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..362a2027J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..362a2027J"><span>Experimental Study in <span class="hlt">Taguchi</span> <span class="hlt">Method</span> on Surface Quality Prediction of HSM</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ji, Yan; Li, Yueen</p> <p>2018-05-01</p> <p>Based on the study of the ball-milling mechanism and the machined-surface formation mechanism, the formation of a high-speed ball-end-milled surface is a time-varying, cumulative thermo-mechanical coupling process. The nature of this problem is that uneven stress and temperature fields affect the machined surface, with the processing parameters interacting to produce elastic recovery and plastic deformation in the elastic-plastic material. The machined surface quality is therefore characterized by a multivariable nonlinear system.
Experiments thus remain an indispensable and effective <span class="hlt">method</span> for studying the surface quality of high-speed ball milling.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4202981','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4202981"><span>The Role of <span class="hlt">Applied</span> Epidemiology <span class="hlt">Methods</span> in the Disaster Management Cycle</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Heumann, Michael; Perrotta, Dennis; Wolkin, Amy F.; Schnall, Amy H.; Podgornik, Michelle N.; Cruz, Miguel A.; Horney, Jennifer A.; Zane, David; Roisman, Rachel; Greenspan, Joel R.; Thoroughman, Doug; Anderson, Henry A.; Wells, Eden V.; Simms, Erin F.</p> <p>2014-01-01</p> <p>Disaster epidemiology (i.e., <span class="hlt">applied</span> epidemiology in disaster settings) presents a source of reliable and actionable information for decision-makers and stakeholders in the disaster management cycle. However, epidemiological <span class="hlt">methods</span> have yet to be routinely integrated into disaster response and fully communicated to response leaders. We present a framework consisting of rapid needs assessments, health surveillance, tracking and registries, and epidemiological investigations, including risk factor and health outcome studies and evaluation of interventions, which can be practiced throughout the cycle. <span class="hlt">Applying</span> each <span class="hlt">method</span> can result in actionable information for planners and decision-makers responsible for preparedness, response, and recovery. Disaster epidemiology, once integrated into the disaster management cycle, can provide the evidence base to inform and enhance response capability within the public health infrastructure.
PMID:25211748</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=congruence&pg=7&id=EJ905374','ERIC'); return false;" href="https://eric.ed.gov/?q=congruence&pg=7&id=EJ905374"><span>Further Insight and Additional Inference <span class="hlt">Methods</span> for Polynomial Regression <span class="hlt">Applied</span> to the Analysis of Congruence</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti</p> <p>2010-01-01</p> <p>In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to <span class="hlt">applying</span> difference score in the study of congruence. Although this <span class="hlt">method</span> is increasingly <span class="hlt">applied</span> in congruence research, its complexity relative to other <span class="hlt">methods</span> for assessing congruence (e.g., difference score <span class="hlt">methods</span>) was one of the…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014BGeo...11.2721H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014BGeo...11.2721H"><span>Non-invasive imaging <span class="hlt">methods</span> <span class="hlt">applied</span> to neo- and paleo-ontological cephalopod research</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.</p> <p>2014-05-01</p> <p>Several non-invasive <span class="hlt">methods</span> are common practice in natural sciences today. Here we present how they can be <span class="hlt">applied</span> and contribute to current topics in cephalopod (paleo-) biology. 
Different <span class="hlt">methods</span> will be compared in terms of time necessary to acquire the data, amount of data, accuracy/resolution, minimum/maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the <span class="hlt">methods</span> is seen in morphometry and volumetry of cephalopod shells. In particular we present a <span class="hlt">method</span> for precise buoyancy calculation. Therefore, cephalopod shells were scanned together with different reference bodies, an approach developed in medical sciences. It is necessary to know the volume of the reference bodies, which should have similar absorption properties to the object of interest. Exact volumes can be obtained from surface scanning. Depending on the dimensions of the study object, different computed tomography techniques were <span class="hlt">applied</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4897259','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4897259"><span>Multiresponse Optimization of Process Parameters in Turning of GFRP Using TOPSIS <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Parida, Arun Kumar; Routara, Bharat Chandra</p> <p>2014-01-01</p> <p><span class="hlt">Taguchi</span>'s design of experiment is utilized to optimize the process parameters in a turning operation under a dry environment. Three parameters, cutting speed (v), feed (f), and depth of cut (d), with three different levels, are considered for responses such as material removal rate (MRR) and surface roughness (Ra).
The machining is conducted with a <span class="hlt">Taguchi</span> L9 orthogonal array, and based on the S/N analysis, the optimal process parameters for surface roughness and MRR are calculated separately. Considering the larger-the-better approach, the optimal process parameters for material removal rate are cutting speed at level 3, feed at level 2, and depth of cut at level 3, that is, v3-f2-d3. Similarly for surface roughness, considering the smaller-the-better approach, the optimal process parameters are cutting speed at level 1, feed at level 1, and depth of cut at level 3, that is, v1-f1-d3. Results of the main effects plot indicate that depth of cut is the most influential parameter for MRR, but cutting speed is the most influential parameter for surface roughness; feed is found to be the least influential parameter for both responses. The confirmation test is conducted for both MRR and surface roughness separately. Finally, an attempt has been made to optimize the multiresponses using the technique for order preference by similarity to ideal solution (TOPSIS) with the <span class="hlt">Taguchi</span> approach. PMID:27437503</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..225a2186S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..225a2186S"><span><span class="hlt">Taguchi</span> Optimization of Cutting Parameters in Turning AISI 1020 MS with M2 HSS Tool</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sonowal, Dharindom; Sarma, Dhrupad; Bakul Barua, Parimal; Nath, Thuleswar</p> <p>2017-08-01</p> <p>In this paper the effect of three cutting parameters viz. Spindle speed, Feed and Depth of Cut on surface roughness of AISI 1020 mild steel bar in turning was investigated and optimized to obtain minimum surface roughness.
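The TOPSIS step used for the multiresponse (MRR versus Ra) optimization above can be sketched as follows; the decision matrix, equal weights, and the three runs are hypothetical placeholders, not the paper's data:

```python
import numpy as np

# Minimal TOPSIS sketch: rank runs by closeness to an ideal solution.
X = np.array([            # rows: candidate runs; cols: [MRR, Ra] (invented)
    [120.0, 2.1],
    [150.0, 2.8],
    [ 90.0, 1.6],
])
weights = np.array([0.5, 0.5])        # assumed equal criterion weights
benefit = np.array([True, False])     # MRR larger-better, Ra smaller-better

# 1) vector-normalize each criterion, 2) apply weights
V = weights * X / np.linalg.norm(X, axis=0)

# 3) ideal and negative-ideal solutions per criterion
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 4) distances to both, then the closeness coefficient (higher = better)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - nadir, axis=1)
closeness = d_neg / (d_pos + d_neg)

print("closeness:", np.round(closeness, 3), "-> best run:", int(np.argmax(closeness)))
```

The closeness coefficient collapses both responses into a single score, which is what lets a Taguchi-style level analysis be run on one quantity.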
All the experiments are conducted on an HMT LB25 lathe using an M2 HSS cutting tool. Ranges of the parameters of interest were decided through preliminary experimentation (one-factor-at-a-time experiments). Finally, a combined experiment was carried out using Taguchi’s L27 orthogonal array (OA) to study the main and interaction effects of all three parameters. The experimental results were analyzed with ANOVA (analysis of variance) on both the raw data and the S/N (signal-to-noise) ratio data. Results show that spindle speed, feed, and depth of cut have significant effects on both the mean and the variation of surface roughness in turning AISI 1020 mild steel. Mild two-factor interactions are observed among these factors, with significant effects only on the mean of the output variable. From the <span class="hlt">Taguchi</span> parameter optimization, the optimum factor combination is found to be 630 rpm spindle speed, 0.05 mm/rev feed, and 1.25 mm depth of cut, with an estimated surface roughness of 2.358 ± 0.970 µm. A confirmatory experiment was conducted with the optimum factor combination to verify the results.
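The estimated roughness at the optimum combination above comes from the usual additive Taguchi prediction: the grand mean plus the deviation of each selected factor level's mean from it, with the confirmation run checked against an interval around that estimate. A sketch with hypothetical level means (not the study's measured values):

```python
# Additive-model prediction at the optimum: eta_opt = T + sum_i (m_i - T),
# where T is the grand mean and m_i the mean response at the chosen level
# of factor i. All numbers below are hypothetical stand-ins (um).
grand_mean = 3.10
best_level_means = {
    "spindle speed = 630 rpm": 2.70,
    "feed = 0.05 mm/rev":      2.55,
    "depth of cut = 1.25 mm":  3.05,
}

predicted = grand_mean + sum(m - grand_mean for m in best_level_means.values())
print(f"predicted roughness at optimum = {predicted:.2f} um")

# Confirmation check against an interval around the estimate
# (hypothetical +/- 1.0 um half-width, in place of a proper CI).
lo, hi = predicted - 1.0, predicted + 1.0
confirmation_mean = 2.20          # hypothetical confirmation result
assert lo <= confirmation_mean <= hi
```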
In the confirmatory experiment the average value of surface roughness is found to be 2.408 µm which is well within the range (0.418 µm to 4.299 µm) predicted for confirmatory experiment.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3268514','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3268514"><span><span class="hlt">Applying</span> Propensity Score <span class="hlt">Methods</span> in Medical Research: Pitfalls and Prospects</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Luo, Zhehui; Gardiner, Joseph C.; Bradley, Cathy J.</p> <p>2012-01-01</p> <p>The authors review experimental and nonexperimental causal inference <span class="hlt">methods</span>, focusing on assumptions for the validity of instrumental variables and propensity score (PS) <span class="hlt">methods</span>. They provide guidance in four areas for the analysis and reporting of PS <span class="hlt">methods</span> in medical research and selectively evaluate mainstream medical journal articles from 2000 to 2005 in the four areas, namely, examination of balance, overlapping support description, use of estimated PS for evaluation of treatment effect, and sensitivity analyses. In spite of the many pitfalls, when appropriately evaluated and <span class="hlt">applied</span>, PS <span class="hlt">methods</span> can be powerful tools in assessing average treatment effects in observational studies. Appropriate PS applications can create experimental conditions using observational data when randomized controlled trials are not feasible and, thus, lead researchers to an efficient estimator of the average treatment effect. 
PMID:20442340</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/963875','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/963875"><span><span class="hlt">Method</span> of <span class="hlt">applying</span> a cerium diffusion coating to a metallic alloy</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Jablonski, Paul D [Salem, OR; Alman, David E [Benton, OR</p> <p>2009-06-30</p> <p>A <span class="hlt">method</span> of <span class="hlt">applying</span> a cerium diffusion coating to a preferred nickel base alloy substrate has been discovered. A cerium oxide paste containing a halide activator is <span class="hlt">applied</span> to the polished substrate and then dried. The workpiece is heated in a non-oxidizing atmosphere to diffuse cerium into the substrate. After cooling, any remaining cerium oxide is removed. The resulting cerium diffusion coating on the nickel base substrate demonstrates improved resistance to oxidation. Cerium coated alloys are particularly useful as components in a solid oxide fuel cell (SOFC).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..330a2128L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..330a2128L"><span>Analysis of Brick Masonry Wall using <span class="hlt">Applied</span> Element <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen</p> <p>2018-03-01</p> <p>The <span class="hlt">Applied</span> Element <span class="hlt">Method</span> (AEM) is a versatile tool for structural analysis. Analysis is done by discretising the structure as in the case of Finite Element <span class="hlt">Method</span> (FEM). 
In AEM, elements are connected by a set of normal and shear springs instead of nodes. AEM is extensively used for the analysis of brittle materials. A brick masonry wall can be effectively analyzed in the framework of AEM. The composite nature of the masonry wall can be easily modelled using springs. The brick springs and mortar springs are assumed to be connected in series. The brick masonry wall is analyzed and the failure load is determined for different loading cases. The results were used to find the aspect ratio of brick that best strengthens the masonry wall.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_11 --> <div id="page_12" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="221"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PASJ...68..100M','NASAADS'); return false;"
href="http://adsabs.harvard.edu/abs/2016PASJ...68..100M"><span>Matched-filtering line search <span class="hlt">methods</span> <span class="hlt">applied</span> to Suzaku data</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Miyazaki, Naoto; Yamada, Shin'ya; Enoto, Teruaki; Axelsson, Magnus; Ohashi, Takaya</p> <p>2016-12-01</p> <p>A detailed search for emission and absorption lines and an assessment of their upper limits are performed for Suzaku data. The <span class="hlt">method</span> utilizes a matched-filtering approach to maximize the signal-to-noise ratio for a given energy resolution, which could be applicable to many types of line search. We first <span class="hlt">applied</span> it to well-known active galactic nuclei spectra that have been reported to have ultra-fast outflows, and found that our results are consistent with previous findings at the ˜3σ level. We proceeded to search for emission and absorption features in two bright magnetars, 4U 0142+61 and 1RXS J1708-4009, <span class="hlt">applying</span> the filtering <span class="hlt">method</span> to Suzaku data. We found that neither source showed any significant indication of line features, even using long-term Suzaku observations or dividing their spectra into spin phases. The upper limits on the equivalent width of emission/absorption lines are constrained to be a few eV at ˜1 keV and a few hundred eV at ˜10 keV.
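The matched-filtering idea above, correlating the spectrum with a template whose width matches the detector energy resolution to maximize line S/N, can be illustrated on synthetic data (the spectrum, line energy, and resolution below are invented, not Suzaku measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
energy = np.linspace(0.5, 10.0, 2000)   # energy grid (keV), synthetic
sigma = 0.06                            # assumed detector resolution (keV)

def gaussian(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

# Synthetic continuum-subtracted spectrum: white noise plus a weak line at 6.4 keV.
spectrum = rng.normal(0.0, 1.0, energy.size) + 5.0 * gaussian(energy, 6.4, sigma)

# Matched filter: correlate with a unit-norm template of the resolution width;
# this maximizes S/N for a feature of that shape.
dx = energy[1] - energy[0]
half = int(4 * sigma / dx)
kernel = gaussian(np.arange(-half, half + 1) * dx, 0.0, sigma)
kernel /= np.linalg.norm(kernel)

filtered = np.correlate(spectrum, kernel, mode="same")
noise = np.std(filtered)                # rough significance scale
peak = energy[np.argmax(filtered)]
print(f"strongest feature near {peak:.2f} keV at {filtered.max() / noise:.1f} sigma")
```

Scanning such a filtered statistic over trial energies, and calibrating its null distribution, is the essence of a matched-filter line search; upper limits follow from the largest line amplitude consistent with the filtered residuals.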
This strengthens previous reports that persistently bright magnetars do not show proton cyclotron absorption features in soft X-rays and that, even if such features exist, they would be broadened or much weaker, below the detection limit of X-ray CCDs.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ1075726.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ1075726.pdf"><span>An Empirical Study of <span class="hlt">Applying</span> Associative <span class="hlt">Method</span> in College English Vocabulary Learning</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Zhang, Min</p> <p>2014-01-01</p> <p>Vocabulary is the basis of any language learning. To many Chinese non-English majors it is difficult to memorize English words. This paper <span class="hlt">applied</span> the associative <span class="hlt">method</span> in presenting new words to them. It is found that the associative <span class="hlt">method</span> produced better results in both short-term and long-term retention of English words.
Compared with the…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29797204','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29797204"><span>Experimental analysis of performance and emission on DI diesel engine fueled with diesel-palm kernel methyl ester-triacetin blends: a <span class="hlt">Taguchi</span> fuzzy-based optimization.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Panda, Jibitesh Kumar; Sastry, Gadepalli Ravi Kiran; Rai, Ram Naresh</p> <p>2018-05-25</p> <p>The energy situation and concerns about global warming have ignited research interest in non-conventional and alternative fuel resources to decrease emissions and the continued dependency on fossil fuels, particularly in sectors such as power generation, transportation, and agriculture. In the present work, the research focuses on evaluating the performance, emission characteristics, and combustion of a biodiesel, palm kernel methyl ester, with the diesel additive "triacetin" added to it. A timed manifold injection (TMI) system was used to examine the influence of injection duration for several blends on the emission and performance characteristics compared with normal diesel operation. This experimental study shows better performance and lower emissions compared with mineral diesel, indicating that high performance with low emissions is promising in PKME-triacetin fuel operation.
This analysis also attempts to describe the application of the fuzzy logic-based <span class="hlt">Taguchi</span> analysis to optimize the emission and performance parameters.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013BGD....1018803H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013BGD....1018803H"><span>Non-invasive imaging <span class="hlt">methods</span> <span class="hlt">applied</span> to neo- and paleontological cephalopod research</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.</p> <p>2013-11-01</p> <p>Several non-invasive <span class="hlt">methods</span> are common practice in natural sciences today. Here we present how they can be <span class="hlt">applied</span> and contribute to current topics in cephalopod (paleo-) biology. Different <span class="hlt">methods</span> will be compared in terms of time necessary to acquire the data, amount of data, accuracy/resolution, minimum and maximum size of objects that can be studied, the degree of post-processing needed, and availability.
The main application of the <span class="hlt">methods</span> is seen in morphometry and volumetry of cephalopod shells, in order to improve our understanding of the diversity and disparity, functional morphology, and biology of extinct and extant cephalopods.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/956520','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/956520"><span>An <span class="hlt">applied</span> study using systems engineering <span class="hlt">methods</span> to prioritize green systems options</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Lee, Sonya M; Macdonald, John M</p> <p>2009-01-01</p> <p>For many years, there have been questions about the effectiveness of <span class="hlt">applying</span> different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform the best over time? All this has to be considered within the cost and schedule of the project. The amount of information available on the topic can be overwhelming. We seek to examine whether Systems Engineering <span class="hlt">methods</span> can be used to help people choose and prioritize technologies that fit within their project and budget. Several <span class="hlt">methods</span> are used to gain perspective into how to select the green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe. In our study, subjects <span class="hlt">applied</span> these <span class="hlt">methods</span> to analyze cost, schedule, and trade-offs.
Results will document whether the experimental approach is applicable to defining system priorities for green technologies.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21622636','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21622636"><span>A quantitative <span class="hlt">method</span> for measuring forces <span class="hlt">applied</span> by nail braces.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Erdogan, Fatma G</p> <p>2011-01-01</p> <p>Nail bracing is a conservative <span class="hlt">method</span> used for ingrown nails; however, a lack of objective measurements limits its use for various nails. Double-string nail braces with extra metal springs were <span class="hlt">applied</span> to 12 patients with 21 chronic, thick, and overcurved ingrown nails. Force was measured with a force gauge meter. Treatment was stopped once patients stood on their tiptoes and walked in shoes pain free without braces. A force gauge meter was also used on a model nail to show the forces <span class="hlt">applied</span> by various nail braces and to compare their pulling forces. After 6 to 10 months of treatment, all of the patients were pain free; 600 to 1,000 centinewtons (cN) of force were <span class="hlt">applied</span> to the nails. As the width of the nail increased, so did the force. Braces exert more force on larger nails, which may shorten treatment durations.
By measuring forces, it may be possible to standardize force and duration of treatment according to variables such as nail thickness, nail width, angle of ingrown nail, and duration of symptoms.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940031986','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940031986"><span>Lessons learned <span class="hlt">applying</span> CASE <span class="hlt">methods</span>/tools to Ada software development projects</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Blumberg, Maurice H.; Randall, Richard L.</p> <p>1993-01-01</p> <p>This paper describes the lessons learned from introducing CASE <span class="hlt">methods</span>/tools into organizations and <span class="hlt">applying</span> them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/<span class="hlt">methods</span>. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. 
They reflect the front end efforts by those projects to understand the tools/<span class="hlt">methods</span>, initial experiences in their introduction and use, and later experiences in the use of specific tools/<span class="hlt">methods</span> and the introduction of new ones.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010SPIE.7698E..0AC','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010SPIE.7698E..0AC"><span>Optimization of a chemical identification algorithm</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren</p> <p>2010-04-01</p> <p>A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be <span class="hlt">applied</span>. The limitations of <span class="hlt">applying</span> this framework to chemical detection problems are discussed along with means to mitigate them. 
Algorithmic performance is optimized globally using robust Design of Experiments and <span class="hlt">Taguchi</span> techniques. These <span class="hlt">methods</span> require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the <span class="hlt">Taguchi</span> Signal-to-Noise Ratio, are compared. Following the optimization of the global parameters that govern algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29888286','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29888286"><span>Anatomical Thin Titanium Mesh Plate Structural Optimization for Zygomatic-Maxillary Complex Fracture under Fatigue Testing.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Yu-Tzu; Huang, Shao-Fu; Fang, Yu-Ting; Huang, Shou-Chieh; Cheng, Hwei-Fang; Chen, Chih-Hao; Wang, Po-Fang; Lin, Chun-Li</p> <p>2018-01-01</p> <p>This study performs a structural optimization of an anatomical thin titanium mesh (ATTM) plate; the optimally designed ATTM plate was fabricated using additive manufacturing (AM) to verify its stabilization under fatigue testing. Finite element (FE) analysis was used to simulate the structural bending resistance of a regular ATTM plate. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was employed to identify the significance of each design factor in controlling the deflection and to determine an optimal combination of design factors. The optimally designed ATTM plate with patient-matched facial contour was fabricated using AM and <span class="hlt">applied</span> to a ZMC comminuted fracture to evaluate the resting maxillary micromotion/strain under fatigue testing. 
The <span class="hlt">Taguchi</span> analysis found that the ATTM plate design required an internal hole distance of 0.9 mm, an internal hole diameter of 1 mm, a plate thickness of 0.8 mm, and a plate height of 10 mm. The plate thickness factor primarily dominated the bending resistance, accounting for up to 78% of the importance. The averaged micromotion (displacement) and strain of the maxillary bone were significantly higher for ZMC fracture fixation using the miniplate than for fixation using the AM optimally designed ATTM plate. This study concluded that an optimally designed ATTM plate with enough strength to resist the bending effect can be obtained by combining FE and <span class="hlt">Taguchi</span> analyses. The optimally designed ATTM plate with patient-matched facial contour fabricated using AM provides superior stabilization for ZMC comminuted fractured bone segments.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28525784','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28525784"><span>Sodium hypochlorite as an alternative to hydrogen peroxide in Fenton process for industrial scale.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Behin, Jamshid; Akbari, Abbas; Mahmoudi, Mohsen; Khajeh, Mehdi</p> <p>2017-09-15</p> <p>In the present work, the treatment of aromatic compounds in simulated wastewater was performed by Fenton and NaOCl/Fe 2+ processes. The model solution was prepared based on the wastewater composition of the Diisocyanate unit of Karoon Petrochemical Company/Iran, containing Diamino-toluenes, Nitro-phenol, Mononitro-toluene, Nitro-cresol, and Dinitro-toluene. Experiments were conducted in batch mode to examine the effects of operating variables such as pH, oxidant dosage, ferrous ion concentration, and number of feedings on COD removal. 
<span class="hlt">Taguchi</span> experimental design was used to determine the optimum conditions. The COD removal efficiency under the optimum conditions (suggested by the <span class="hlt">Taguchi</span> design) was 88.7% in the Fenton process and 83.4% in the NaOCl/Fe 2+ process. The highest contribution factor in the Fenton process belongs to pH (47.47%), and in the NaOCl/Fe 2+ process to the NaOCl/pollutants ratio (50.26%). The high regression coefficient (R 2 : 0.98) obtained for the <span class="hlt">Taguchi</span> <span class="hlt">method</span> indicates that the models are statistically significant and in good agreement with each other. The NaOCl/Fe 2+ process, utilizing a conventional oxidant in comparison to hydrogen peroxide, is an efficient, cost-effective process for COD removal from real wastewater. Although its removal efficiency is not as high as that of the Fenton process, it is a suitable replacement for the Fenton process at industrial scale for wastewater containing aromatic compounds with high COD. This process was successfully <span class="hlt">applied</span> at Karoon Petrochemical Company/Iran. Copyright © 2017 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008JJSEE..566.170O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008JJSEE..566.170O"><span>Engineering Design Education Program for Graduate School</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ohbuchi, Yoshifumi; Iida, Haruhiko</p> <p></p> <p>The new educational <span class="hlt">methods</span> of engineering design have attempted to improve mechanical engineering education for graduate students through collaboration between engineers and designers in education. The education program is based on lectures and practical exercises concerning product design, and has engineering themes and design process themes, i.e. 
project management, QFD, TRIZ, robust design (<span class="hlt">Taguchi</span> <span class="hlt">method</span>), ergonomics, usability, marketing, conception, etc. In the final exercise, all students were able to design a new product related to their own research theme by <span class="hlt">applying</span> the learned knowledge and techniques. Through this <span class="hlt">method</span> of engineering design education, we have confirmed that graduate students are able to experience technological and creative interest.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20020091875','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20020091875"><span>The Fractional Step <span class="hlt">Method</span> <span class="hlt">Applied</span> to Simulations of Natural Convective Flows</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)</p> <p>2002-01-01</p> <p>This paper describes research done to <span class="hlt">apply</span> the Fractional Step <span class="hlt">Method</span> to finite-element simulations of natural convective flows in pure liquids, permeable media, and in a directionally solidified metal alloy casting. The Fractional Step <span class="hlt">Method</span> has commonly been <span class="hlt">applied</span> to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step <span class="hlt">Method</span> offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. The Fractional Step <span class="hlt">Method</span> has particular benefits for predicting flows in a directionally solidified alloy, since other <span class="hlt">methods</span> presently employed are not very efficient. 
Previously, the most suitable <span class="hlt">method</span> for predicting flows in a directionally solidified binary alloy was the penalty <span class="hlt">method</span>, which requires direct matrix solvers due to the penalty term. The Fractional Step <span class="hlt">Method</span> allows iterative solution of the finite element stiffness matrices, thereby allowing more efficient solution of the matrices. The Fractional Step <span class="hlt">Method</span> also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the finite-element simulations predict the existence of 'channels' within the mushy zone during processing and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained. The</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA229488','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA229488"><span>Manufacturing Research: Self-Directed Control</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1991-01-01</p> <p>reduce this sensitivity. SDO is performing <span class="hlt">Taguchi</span>’s parameter design. Statistical Process Control (SPC) techniques will be used to monitor the process... Florida, R.E. Krieger Pub. Co., 1988. Dehnad, Khosrow, Quality Control, Robust Design, and the <span class="hlt">Taguchi</span> <span class="hlt">Method</span>, Pacific Grove, California, Wadsworth... control system. This turns out to be a non-trivial exercise. A human operator can see an event occur (such as the vessel pressurizing above its setpoint
We provide an illustrative case integrating the Rasch model and cognitive interviews <span class="hlt">applied</span> to the development of the Transformative…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=pollution+AND+metals+AND+heavy&pg=2&id=ED038844','ERIC'); return false;" href="https://eric.ed.gov/?q=pollution+AND+metals+AND+heavy&pg=2&id=ED038844"><span>A <span class="hlt">Method</span> of Measuring the Costs and Benefits of <span class="hlt">Applied</span> Research.</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Sprague, John W.</p> <p></p> <p>The Bureau of Mines studied the application of the concepts and <span class="hlt">methods</span> of cost-benefit analysis to the problem of ranking alternative <span class="hlt">applied</span> research projects. Procedures for measuring the different classes of project costs and benefits, both private and public, are outlined, and cost-benefit calculations are presented, based on the criteria of…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JIEI....9....1B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JIEI....9....1B"><span>A neuro-data envelopment analysis approach for optimization of uncorrelated multiple response problems with smaller the better type controllable factors</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bashiri, Mahdi; Farshbaf-Geranmayeh, Amir; Mogouie, Hamed</p> <p>2013-11-01</p> <p>In this paper, a new <span class="hlt">method</span> is proposed to solve a multi-response optimization problem based on the <span class="hlt">Taguchi</span> <span class="hlt">method</span> for processes where the controllable factors are smaller-the-better (STB)-type variables 
and the analyst desires to find an optimal solution with a smaller amount of controllable factors. In such processes, the overall output quality of the product should be maximized while the usage of the process inputs, the controllable factors, should be minimized. Since all possible combinations of factor levels are not considered in the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, the response values of the possible unpracticed treatments are estimated using an artificial neural network (ANN). The neural network is tuned by the central composite design (CCD) and the genetic algorithm (GA). Then data envelopment analysis (DEA) is <span class="hlt">applied</span> to determine the efficiency of each treatment. Although the central issue in implementing DEA is its philosophy, the maximization of outputs versus the minimization of inputs, this issue has been neglected in previous similar studies of multi-response problems. Finally, the most efficient treatment is determined using the maximin weight model approach. The performance of the proposed <span class="hlt">method</span> is verified in a plastic molding process. Moreover, a sensitivity analysis has been performed using an efficiency estimator neural network. 
The results show the efficiency of the proposed approach.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=%22Research+Methods+in+Applied+Linguistics%22&id=EJ613100','ERIC'); return false;" href="https://eric.ed.gov/?q=%22Research+Methods+in+Applied+Linguistics%22&id=EJ613100"><span>Trends in Research <span class="hlt">Methods</span> in <span class="hlt">Applied</span> Linguistics: China and the West.</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Yihong, Gao; Lichun, Li; Jun, Lu</p> <p>2001-01-01</p> <p>Examines and compares current trends in <span class="hlt">applied</span> linguistics (AL) research <span class="hlt">methods</span> in China and the West. Reviews AL articles in four Chinese journals, from 1978 to 1997, and four English journals from 1985 to 1997. Articles are categorized and subcategorized. Results show that in China, AL research is heading from non-empirical toward empirical, with…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016IJTJE..33..395H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016IJTJE..33..395H"><span>An Integrated Optimization Design <span class="hlt">Method</span> Based on Surrogate Modeling <span class="hlt">Applied</span> to Diverging Duct Design</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hanan, Lu; Qiushi, Li; Shaobin, Li</p> <p>2016-12-01</p> <p>This paper presents an integrated optimization design <span class="hlt">method</span> in which uniform design, response surface methodology and genetic algorithm are used in combination. 
In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective and the design variables. Subsequently, a genetic algorithm is adopted and <span class="hlt">applied</span> to the surrogate model to acquire the optimal solution subject to some constraints. The <span class="hlt">method</span> has been <span class="hlt">applied</span> to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The <span class="hlt">method</span> of modeling and optimization design performs well in improving the duct aerodynamic performance, can also be <span class="hlt">applied</span> to wider fields of mechanical design, and can serve as a useful tool for engineering designers by reducing design time and computational cost.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19740022929','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19740022929"><span>A study of two statistical <span class="hlt">methods</span> as <span class="hlt">applied</span> to shuttle solid rocket booster expenditures</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Perlmutter, M.; Huang, Y.; Graves, M.</p> <p>1974-01-01</p> <p>The state probability technique and the Monte Carlo technique are <span class="hlt">applied</span> to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. 
Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical <span class="hlt">methods</span> were <span class="hlt">applied</span> in the analysis: (1) the state probability <span class="hlt">method</span>, which consists of defining an appropriate state space for the outcomes of the random trials, and (2) the model simulation <span class="hlt">method</span>, or the Monte Carlo technique. It was found that the model simulation <span class="hlt">method</span> was easier to formulate, while the state probability <span class="hlt">method</span> required less computing time and was more accurate.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26430454','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26430454"><span>Formulation and optimization of solid lipid nanoparticle formulation for pulmonary delivery of budesonide using <span class="hlt">Taguchi</span> and Box-Behnken design.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Emami, J; Mohiti, H; Hamishehkar, H; Varshosaz, J</p> <p>2015-01-01</p> <p>Budesonide is a potent non-halogenated corticosteroid with high anti-inflammatory effects. The lungs are an attractive route for non-invasive drug delivery with advantages for both systemic and local applications. The aim of the present study was to develop, characterize and optimize a solid lipid nanoparticle system to deliver budesonide to the lungs. Budesonide-loaded solid lipid nanoparticles were prepared by the emulsification-solvent diffusion <span class="hlt">method</span>. 
The impact of various processing variables, including surfactant type and concentration, lipid content, organic and aqueous volume, and sonication time, was assessed on particle size, zeta potential, entrapment efficiency, loading percent and mean dissolution time. A <span class="hlt">Taguchi</span> design with 12 formulations along with a Box-Behnken design with 17 formulations was developed. The impact of each factor upon the eventual responses was evaluated, and the optimized formulation was finally selected. The size and morphology of the prepared nanoparticles were studied using a scanning electron microscope. Based on the optimization made by Design Expert 7® software, a formulation made of glycerol monostearate, 1.2 % polyvinyl alcohol (PVA), weight ratio of lipid/drug of 10 and sonication time of 90 s was selected. Particle size, zeta potential, entrapment efficiency, loading percent, and mean dissolution time of the adopted formulation were predicted and confirmed to be 218.2 ± 6.6 nm, -26.7 ± 1.9 mV, 92.5 ± 0.52 %, 5.8 ± 0.3 %, and 10.4 ± 0.29 h, respectively. Since the preparation and evaluation of the selected formulation within the laboratory yielded acceptable results with low error percent, the modeling and optimization was justified. The optimized formulation co-spray dried with lactose (hybrid microparticles) displayed a desirable fine particle fraction, mass median aerodynamic diameter (MMAD), and geometric standard deviation of 49.5%, 2.06 μm, and 2.98, respectively. 
Our results provide fundamental data for the</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li><a href="#" onclick='return showDiv("page_11");'>11</a></li> <li class="active"><span>12</span></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_12 --> <div id="page_13" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_11");'>11</a></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li class="active"><span>13</span></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="241"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4578209','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4578209"><span>Formulation and optimization of solid lipid nanoparticle formulation for pulmonary delivery of budesonide using <span class="hlt">Taguchi</span> and Box-Behnken design</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Emami, J.; Mohiti, H.; Hamishehkar, H.; Varshosaz, J.</p> <p>2015-01-01</p> <p>Budesonide is a potent non-halogenated corticosteroid with high anti-inflammatory effects. 
The lungs are an attractive route for non-invasive drug delivery with advantages for both systemic and local applications. The aim of the present study was to develop, characterize and optimize a solid lipid nanoparticle system to deliver budesonide to the lungs. Budesonide-loaded solid lipid nanoparticles were prepared by the emulsification-solvent diffusion <span class="hlt">method</span>. The impact of various processing variables, including surfactant type and concentration, lipid content, organic and aqueous volume, and sonication time, was assessed on particle size, zeta potential, entrapment efficiency, loading percent and mean dissolution time. A <span class="hlt">Taguchi</span> design with 12 formulations along with a Box-Behnken design with 17 formulations was developed. The impact of each factor upon the eventual responses was evaluated, and the optimized formulation was finally selected. The size and morphology of the prepared nanoparticles were studied using a scanning electron microscope. Based on the optimization made by Design Expert 7® software, a formulation made of glycerol monostearate, 1.2 % polyvinyl alcohol (PVA), weight ratio of lipid/drug of 10 and sonication time of 90 s was selected. Particle size, zeta potential, entrapment efficiency, loading percent, and mean dissolution time of the adopted formulation were predicted and confirmed to be 218.2 ± 6.6 nm, -26.7 ± 1.9 mV, 92.5 ± 0.52 %, 5.8 ± 0.3 %, and 10.4 ± 0.29 h, respectively. Since the preparation and evaluation of the selected formulation within the laboratory yielded acceptable results with low error percent, the modeling and optimization was justified. The optimized formulation co-spray dried with lactose (hybrid microparticles) displayed a desirable fine particle fraction, mass median aerodynamic diameter (MMAD), and geometric standard deviation of 49.5%, 2.06 μm, and 2.98, respectively. 
Our results provide fundamental data for the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19910015989','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19910015989"><span><span class="hlt">Methods</span> of <span class="hlt">applied</span> dynamics</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rheinfurth, M. H.; Wilson, H. B.</p> <p>1991-01-01</p> <p>The monograph was prepared to give the practicing engineer a clear understanding of dynamics with special consideration given to the dynamic analysis of aerospace systems. It is conceived to be both a desk-top reference and a refresher for aerospace engineers in government and industry. It could also be used as a supplement to standard texts for in-house training courses on the subject. Beginning with the basic concepts of kinematics and dynamics, the discussion proceeds to treat the dynamics of a system of particles. 
Both classical and modern formulations of the Lagrange equations, including constraints, are discussed and <span class="hlt">applied</span> to the dynamic modeling of aerospace structures using the modal synthesis technique.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27387139','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27387139"><span><span class="hlt">Applying</span> Mathematical Optimization <span class="hlt">Methods</span> to an ACT-R Instance-Based Learning Model.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V</p> <p>2016-01-01</p> <p>Computational models of cognition provide an interface to connect advanced mathematical tools and <span class="hlt">methods</span> to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be <span class="hlt">applied</span> to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. 
We conclude by discussing possible extensions of our approach as well as future steps towards <span class="hlt">applying</span> more powerful derivative-based optimization <span class="hlt">methods</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20070038363','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20070038363"><span>Data Mining <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Flight Operations Quality Assurance Data: A Comparison to Standard Statistical <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stolzer, Alan J.; Halford, Carl</p> <p>2007-01-01</p> <p>In a previous study, multiple regression techniques were <span class="hlt">applied</span> to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression <span class="hlt">methods</span>, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining <span class="hlt">methods</span> were more effective in predicting fuel consumption. Classification and Regression Tree <span class="hlt">methods</span> reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. 
These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014PhyA..410..609M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014PhyA..410..609M"><span>Local regression type <span class="hlt">methods</span> <span class="hlt">applied</span> to the study of geophysics and high frequency financial data</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mariani, M. C.; Basu, K.</p> <p>2014-09-01</p> <p>In this work we <span class="hlt">applied</span> locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high frequency financial data. We first analyze and <span class="hlt">apply</span> this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is very accurate, to a relative error of 0.01%. We also <span class="hlt">applied</span> the same <span class="hlt">method</span> to a high frequency data set arising in the financial sector and obtained similar satisfactory results. The application of this approach to the two different data sets demonstrates that the overall <span class="hlt">method</span> is accurate and efficient, and the Lowess approach is much more desirable than the Loess <span class="hlt">method</span>. Previous works studied time series analysis; in this paper, our local regression models perform a spatial analysis of the geophysics data, providing different information. 
For the high frequency data, our models estimate the curve of best fit where data are dependent on time.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/529558-applying-simulation-model-uniform-field-space-charge-distribution-measurements-pea-method','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/529558-applying-simulation-model-uniform-field-space-charge-distribution-measurements-pea-method"><span><span class="hlt">Applying</span> simulation model to uniform field space charge distribution measurements by the PEA <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Liu, Y.; Salama, M.M.A.</p> <p>1996-12-31</p> <p>Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) <span class="hlt">method</span> have been processed by the deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct <span class="hlt">method</span> has been proposed recently in which the deconvolution is eliminated. However, the surface charge cannot be represented well by the <span class="hlt">method</span> because the surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidths of the PEA system transfer function in order to <span class="hlt">apply</span> the direct <span class="hlt">method</span> properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and the resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is therefore proposed. This paper describes the authors' attempts to <span class="hlt">apply</span> the simulation model to obtain space charge distributions under plane-plane electrode configurations.
Owing to the page limitation for this paper, the charge distribution generated by the simulation model is compared only to that obtained by the direct <span class="hlt">method</span> with a set of simulated signals.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MRE.....4i5301K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MRE.....4i5301K"><span>Multi-response optimization of T300/epoxy prepreg tape-wound cylinder by grey relational analysis coupled with the response surface <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kang, Chao; Shi, Yaoyao; He, Xiaodong; Yu, Tao; Deng, Bo; Zhang, Hongji; Sun, Pengcheng; Zhang, Wenbin</p> <p>2017-09-01</p> <p>This study investigates the multi-objective optimization of quality characteristics for a T300/epoxy prepreg tape-wound cylinder. The <span class="hlt">method</span> integrates the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, grey relational analysis (GRA) and response surface methodology, and is adopted to improve tensile strength and reduce residual stress. In the winding process, the main process parameters involving winding tension, pressure, temperature and speed are selected to evaluate the parametric influences on tensile strength and residual stress. Experiments are conducted using the Box-Behnken design. Based on principal component analysis, the grey relational grades are properly established to convert the multiple responses into a single-objective problem. Then the response surface <span class="hlt">method</span> is used to build a second-order model of grey relational grade and predict the optimum parameters. The predictive accuracy of the developed model is proved by two test experiments with a low prediction error of less than 7%.
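The core of the grey relational analysis (GRA) step described above — converting several responses into one grey relational grade — can be sketched as follows. The response values, weights, and the distinguishing coefficient zeta = 0.5 are hypothetical illustration choices, not the paper's data:

```python
import numpy as np

# Hypothetical responses for 4 winding experiments:
# tensile strength (larger-the-better), residual stress (smaller-the-better)
strength = np.array([820.0, 860.0, 840.0, 880.0])
stress = np.array([55.0, 48.0, 60.0, 50.0])

def normalize(x, larger_is_better):
    """Scale a response to [0, 1], with 1 always the ideal value."""
    if larger_is_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

def grey_relational_grade(responses, weights, zeta=0.5):
    """responses: list of (values, larger_is_better) pairs."""
    coeffs = []
    for x, lb in responses:
        delta = np.abs(1.0 - normalize(x, lb))        # deviation from ideal
        coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        coeffs.append(coeff)
    # weighted mean of the grey relational coefficients across responses
    return np.average(np.vstack(coeffs), axis=0, weights=weights)

grade = grey_relational_grade([(strength, True), (stress, False)],
                              weights=[0.5, 0.5])
best = int(np.argmax(grade))   # experiment with the best combined quality
```

In the paper the weights come from principal component analysis and the grade is then modelled by a response surface; here equal weights stand in for that step.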
The following process parameters, namely winding tension 124.29 N, pressure 2000 N, temperature 40 °C and speed 10.65 rpm, have the highest grey relational grade and give better quality characteristics in terms of tensile strength and residual stress. The confirmation experiment shows that better results are obtained with GRA improved by the proposed <span class="hlt">method</span> than with ordinary GRA. The proposed <span class="hlt">method</span> is proved to be feasible and can be <span class="hlt">applied</span> to optimize the multi-objective problem in the filament winding process.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1996JCoPh.123..379W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1996JCoPh.123..379W"><span>Efficient Iterative <span class="hlt">Methods</span> <span class="hlt">Applied</span> to the Solution of Transonic Flows</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.</p> <p>1996-02-01</p> <p>We investigate the use of an inexact Newton's <span class="hlt">method</span> to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we <span class="hlt">apply</span> Newton's <span class="hlt">method</span> using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested; a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES <span class="hlt">method</span>. 
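The Newton iteration described in the abstract above can be sketched as follows on a toy nonlinear system. A dense direct solve stands in for the preconditioned Krylov solvers (GMRES / OSOmin) that the paper applies to the linearized transonic equations; the system and starting point are illustrative assumptions:

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for a nonlinear system F(x) = 0.

    Each step solves the linear system J(x) dx = -F(x).  Here a dense
    direct solve replaces the preconditioned conjugate gradient-like
    iterative solvers used in the paper."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        x = x + np.linalg.solve(J(x), -f)   # Newton update
    return x

# toy system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])  # Jacobian
root = newton(F, J, [2.0, 0.5])
```

An "inexact" Newton method would solve each linear system only approximately (a few Krylov iterations), trading per-step accuracy for overall cost.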
The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-Iterative <span class="hlt">method</span> on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI <span class="hlt">method</span> (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton <span class="hlt">method</span> is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/1421620-ensemble-grouping-strategies-embedded-stochastic-collocation-methods-applied-anisotropic-diffusion-problems','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1421620-ensemble-grouping-strategies-embedded-stochastic-collocation-methods-applied-anisotropic-diffusion-problems"><span>Ensemble Grouping Strategies for Embedded Stochastic Collocation <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Anisotropic Diffusion Problems</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>D'Elia, M.; Edwards, H. C.; Hu, J.</p> <p></p> <p>Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation <span class="hlt">methods</span> [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193].
However, critical to the success of this approach when <span class="hlt">applied</span> to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when <span class="hlt">applying</span> iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation <span class="hlt">methods</span> <span class="hlt">applied</span> to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1421620-ensemble-grouping-strategies-embedded-stochastic-collocation-methods-applied-anisotropic-diffusion-problems','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1421620-ensemble-grouping-strategies-embedded-stochastic-collocation-methods-applied-anisotropic-diffusion-problems"><span>Ensemble Grouping Strategies for Embedded Stochastic Collocation <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Anisotropic Diffusion Problems</span></a></p> <p><a target="_blank" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>D'Elia, M.; Edwards, H.
C.; Hu, J.; ...</p> <p>2018-01-18</p> <p>Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation <span class="hlt">methods</span> [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193]. However, critical to the success of this approach when <span class="hlt">applied</span> to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when <span class="hlt">applying</span> iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation <span class="hlt">methods</span> <span class="hlt">applied</span> to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions.
Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2004JaJAP..43.3146M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2004JaJAP..43.3146M"><span><span class="hlt">Applying</span> the Multiple Signal Classification <span class="hlt">Method</span> to Silent Object Detection Using Ambient Noise</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mori, Kazuyoshi; Yokoyama, Tomoki; Hasegawa, Akio; Matsuda, Minoru</p> <p>2004-05-01</p> <p>The revolutionary concept of using ocean ambient noise positively to detect objects, called acoustic daylight imaging, has attracted much attention. The authors attempted the detection of a silent target object using ambient noise and a wide-band beam former consisting of an array of receivers. In experimental results obtained in air, using the wide-band beam former, we successfully <span class="hlt">applied</span> the delay-sum array (DSA) <span class="hlt">method</span> to detect a silent target object in an acoustic noise field generated by a large number of transducers. This paper reports some experimental results obtained by <span class="hlt">applying</span> the multiple signal classification (MUSIC) <span class="hlt">method</span> to a wide-band beam former to detect silent targets. The ocean ambient noise was simulated by transducers decentralized to many points in air. Both MUSIC and DSA detected a spherical target object in the noise field. The relative power levels near the target obtained with MUSIC were compared with those obtained by DSA.
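The delay-sum array (DSA) beamforming that the abstract above compares against MUSIC can be sketched in a few lines. The array geometry, sampling rate, and signals are hypothetical; steering delays are rounded to whole samples for simplicity:

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Delay-and-sum beamformer: advance each sensor trace by its steering
    delay and average, reinforcing sound arriving from the look direction."""
    out = np.zeros_like(signals[0])
    for s, d in zip(signals, delays):
        out += np.roll(s, -int(round(d * fs)))
    return out / len(signals)

fs = 8000.0
t = np.arange(0, 0.1, 1 / fs)
c, spacing = 343.0, 0.1            # speed of sound (m/s), sensor spacing (m)
angle = np.deg2rad(30)             # assumed direction of arrival
delays = [i * spacing * np.sin(angle) / c for i in range(4)]  # linear array

rng = np.random.default_rng(1)
source = np.sin(2 * np.pi * 500 * t)
# each sensor sees the delayed source plus independent noise
signals = [np.roll(source, int(round(d * fs))) + 0.5 * rng.normal(size=t.size)
           for d in delays]
steered = delay_and_sum(signals, delays, fs)
```

Steering at the true arrival angle adds the source coherently while averaging down the uncorrelated noise, which is why the beamformed trace tracks the source better than any single sensor.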
Then the effectiveness of the MUSIC <span class="hlt">method</span> was evaluated according to the rate of increase in the maximum and minimum relative power levels.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/6070885-methods-evaluating-biological-impact-potentially-toxic-waste-applied-soils','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/6070885-methods-evaluating-biological-impact-potentially-toxic-waste-applied-soils"><span><span class="hlt">Methods</span> for evaluating the biological impact of potentially toxic waste <span class="hlt">applied</span> to soils</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Neuhauser, E.F.; Loehr, R.C.; Malecki, M.R.</p> <p>1985-12-01</p> <p>The study was designed to evaluate two <span class="hlt">methods</span> that can be used to estimate the biological impact of organics and inorganics that may be in wastes <span class="hlt">applied</span> to land for treatment and disposal. The two <span class="hlt">methods</span> were the contact test and the artificial soil test. The contact test is a 48 hr test using an adult worm, a small glass vial, and filter paper to which the test chemical or waste is <span class="hlt">applied</span>. The test is designed to provide close contact between the worm and a chemical similar to the situation in soils. The <span class="hlt">method</span> provides a rapid estimate of the relative toxicity of chemicals and industrial wastes. The artificial soil test uses a mixture of sand, kaolin, peat, and calcium carbonate as a representative soil. Different concentrations of the test material are added to the artificial soil, adult worms are added and worm survival is evaluated after two weeks.
These studies have shown that earthworms can distinguish between a wide variety of chemicals with a high degree of accuracy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27695757','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27695757"><span>Nutrient Runoff Losses from Liquid Dairy Manure <span class="hlt">Applied</span> with Low-Disturbance <span class="hlt">Methods</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jokela, William; Sherman, Jessica; Cavadini, Jason</p> <p>2016-09-01</p> <p>Manure <span class="hlt">applied</span> to cropland is a source of phosphorus (P) and nitrogen (N) in surface runoff and can contribute to impairment of surface waters. Tillage immediately after application incorporates manure into the soil, which may reduce nutrient loss in runoff as well as N loss via NH3 volatilization. However, tillage also incorporates crop residue, which reduces surface cover and may increase erosion potential. We <span class="hlt">applied</span> liquid dairy manure in a silage corn-cereal rye cover crop system in late October using <span class="hlt">methods</span> designed to incorporate manure with minimal soil and residue disturbance. These include strip-till injection and tine aerator-band manure application, which were compared with standard broadcast application, either incorporated with a disk or left on the surface. Runoff was generated with a portable rainfall simulator (42 mm h⁻¹ for 30 min) three separate times: (i) 2 to 5 d after the October manure application, (ii) in early spring, and (iii) after tillage and planting. In the postmanure application runoff, the highest losses of total P and dissolved reactive P were from surface-<span class="hlt">applied</span> manure.
Dissolved P loss was reduced 98% by strip-till injection; this result was not statistically different from the no-manure control. Reductions from the aerator band <span class="hlt">method</span> and disk incorporation were 53 and 80%, respectively. Total P losses followed a similar pattern, with 87% reduction from injected manure. Runoff losses of N had generally similar patterns to those of P. Losses of P and N were, in most cases, lower in the spring rain simulations with fewer significant treatment effects. Overall, results show that low-disturbance manure application <span class="hlt">methods</span> can significantly reduce nutrient runoff losses compared with surface application while maintaining residue cover better than incorporation by tillage. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/1334889-applying-scientific-method-cybersecurity-research','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1334889-applying-scientific-method-cybersecurity-research"><span><span class="hlt">Applying</span> the Scientific <span class="hlt">Method</span> of Cybersecurity Research</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Tardiff, Mark F.; Bonheyo, George T.; Cort, Katherine A.</p> <p></p> <p>The cyber environment has rapidly evolved from a curiosity to an essential component of the contemporary world. As the cyber environment has expanded and become more complex, so have the nature of adversaries and styles of attacks. Today, cyber incidents are an expected part of life. As a result, cybersecurity research emerged to address adversarial attacks interfering with or preventing normal cyber activities. 
Historical response to cybersecurity attacks is heavily skewed to tactical responses with an emphasis on rapid recovery. While threat mitigation is important and can be time critical, a knowledge gap exists with respect to developing the science of cybersecurity. Such a science will enable the development and testing of theories that lead to understanding the broad sweep of cyber threats and the ability to assess trade-offs in sustaining network missions while mitigating attacks. The Asymmetric Resilient Cybersecurity Initiative at Pacific Northwest National Laboratory is a multi-year, multi-million dollar investment to develop approaches for shifting the advantage to the defender and sustaining the operability of systems under attack. The initiative established a Science Council to focus attention on the research process for cybersecurity. The Council shares science practices, critiques research plans, and aids in documenting and reporting reproducible research results. The Council members represent ecology, economics, statistics, physics, computational chemistry, microbiology and genetics, and geochemistry. This paper reports the initial work of the Science Council to implement the scientific <span class="hlt">method</span> in cybersecurity research. The second section describes the scientific <span class="hlt">method</span>. The third section in this paper discusses scientific practices for cybersecurity research.
Section four describes initial impacts of <span class="hlt">applying</span> the science practices to cybersecurity research.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16566496','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16566496"><span>Teaching organization theory for healthcare management: three <span class="hlt">applied</span> learning <span class="hlt">methods</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Olden, Peter C</p> <p>2006-01-01</p> <p>Organization theory (OT) provides a way of seeing, describing, analyzing, understanding, and improving organizations based on patterns of organizational design and behavior (Daft 2004). It gives managers models, principles, and <span class="hlt">methods</span> with which to diagnose and fix organization structure, design, and process problems. Health care organizations (HCOs) face serious problems such as fatal medical errors, harmful treatment delays, misuse of scarce nurses, costly inefficiency, and service failures. Some of health care managers' most critical work involves designing and structuring their organizations so their missions, visions, and goals can be achieved, and in some cases so their organizations can survive. Thus, it is imperative that graduate healthcare management programs develop effective approaches for teaching OT to students who will manage HCOs. Guided by principles of education, three <span class="hlt">applied</span> teaching/learning activities/assignments were created to teach OT in a graduate healthcare management program. These educational methods develop students' competency with OT <span class="hlt">applied</span> to HCOs.
The teaching techniques in this article may be useful to faculty teaching graduate courses in organization theory and related subjects such as leadership, quality, and operations management.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA553335','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA553335"><span>An Ultrasonic Guided Wave <span class="hlt">Method</span> to Estimate <span class="hlt">Applied</span> Biaxial Loads (Preprint)</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2011-11-01</p> <p>VALIDATION A fatigue test was performed with an array of six surface-bonded PZT transducers on a 6061 aluminum plate as shown in Figure 4. The specimen...direct paths of propagation are oriented at different angles. This <span class="hlt">method</span> is <span class="hlt">applied</span> to experimental sparse array data recorded during a fatigue test...and the additional complication of the resulting fatigue cracks interfering with some of the direct arrivals is addressed via proper selection of</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/263368-efficient-iterative-methods-applied-solution-transonic-flows','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/263368-efficient-iterative-methods-applied-solution-transonic-flows"><span>Efficient iterative <span class="hlt">methods</span> <span class="hlt">applied</span> to the solution of transonic flows</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.</p> <p>1996-02-01</p> <p>We investigate the use of an inexact Newton's <span class="hlt">method</span> to solve the potential equations in the transonic regime.
As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we <span class="hlt">apply</span> Newton's <span class="hlt">method</span> using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES <span class="hlt">method</span>. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-Iterative <span class="hlt">method</span> on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI <span class="hlt">method</span> (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton <span class="hlt">method</span> is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
38 refs., 14 figs., 7 tabs.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014MatSP..32..136B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014MatSP..32..136B"><span>Optimising sulfuric acid hard coat anodising for an Al-Mg-Si wrought aluminium alloy</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bartolo, N.; Sinagra, E.; Mallia, B.</p> <p>2014-06-01</p> <p>This research evaluates the effects of sulfuric acid hard coat anodising parameters, such as acid concentration, electrolyte temperature, current density and time, on the hardness and thickness of the resultant anodised layers. A small scale anodising facility was designed and set up to enable experimental investigation of the anodising parameters. An experimental design using the <span class="hlt">Taguchi</span> <span class="hlt">method</span> to optimise the parameters within an established operating window was performed. Qualitative and quantitative <span class="hlt">methods</span> of characterisation of the resultant anodised layers were carried out. The anodised layers' thickness and morphology were determined using a light optical microscope (LOM) and field emission gun scanning electron microscope (FEG-SEM). Hardness measurements were carried out using a nano hardness tester. Correlations between the various anodising parameters and their effect on the hardness and thickness of the anodised layers were established. Careful evaluation of these effects enabled optimum parameters to be determined using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, which were verified experimentally. Anodised layers having hardness between 2.4 and 5.2 GPa and thickness between 20 and 80 μm were produced. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was shown to be applicable to anodising.
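A Taguchi-style analysis like the one in the abstract above boils down to evaluating a signal-to-noise (S/N) ratio per run of an orthogonal array and picking, for each factor, the level with the higher mean S/N. The L4 array, factor names, and hardness values below are hypothetical illustration data, not the paper's measurements:

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs, 3 two-level factors
# (e.g. acid concentration, temperature, current density; levels coded 0/1)
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# hypothetical measured hardness (GPa), two repeats per run
y = np.array([[2.6, 2.8],
              [3.9, 4.1],
              [4.8, 5.0],
              [3.4, 3.6]])

# larger-the-better signal-to-noise ratio for each run
sn = -10 * np.log10(np.mean(1.0 / y ** 2, axis=1))

# mean S/N at each level of each factor; keep the level with the higher S/N
best_levels = []
for f in range(L4.shape[1]):
    level_means = [sn[L4[:, f] == lvl].mean() for lvl in (0, 1)]
    best_levels.append(int(np.argmax(level_means)))
```

The orthogonality of the array is what lets each factor's effect be read off from simple level means despite only 4 of the 8 possible runs being performed.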
This finding could facilitate on-going and future research and development of anodising, which is attracting remarkable academic and industrial interest.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H21L..08H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H21L..08H"><span>Flood Hazard Mapping by <span class="hlt">Applying</span> Fuzzy TOPSIS <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.</p> <p>2017-12-01</p> <p>There are many technical <span class="hlt">methods</span> to integrate various factors for flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi-Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to <span class="hlt">apply</span> MCDM to assessing flood risk, maximum flood depth, maximum velocity, and maximum travel time are considered as criteria, and the <span class="hlt">applied</span> elements are considered as alternatives. A scheme that finds the alternative closest to an ideal value is an appropriate way to assess the flood risk of many element units (alternatives) based on various flood indices. Therefore, TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) have uncertainty concerning simulation results because their values vary with the flood scenario and topographical conditions. These kinds of ambiguity in the indices can cause uncertainty in the flood hazard map.
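The (non-fuzzy) TOPSIS core of the methodology described above can be sketched as follows; the grid-cell values, weights, and criterion directions are hypothetical, and the fuzzy extension in the paper would replace the crisp entries with fuzzy numbers:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Classical TOPSIS: rank alternatives by closeness to the ideal solution.

    matrix: rows = alternatives (grid cells), cols = criteria;
    benefit[j] is True if a larger value of criterion j means more hazard."""
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalization
    v = m * weights                                  # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal
    d_neg = np.linalg.norm(v - worst, axis=1)        # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                   # closeness in [0, 1]

# hypothetical cells scored on max depth (m), max velocity (m/s),
# travel time (h); depth and velocity raise hazard, longer travel time lowers it
cells = np.array([[0.5, 0.2, 6.0],
                  [2.0, 1.5, 1.0],
                  [1.0, 0.8, 3.0]])
hazard = topsis(cells, weights=np.array([0.4, 0.4, 0.2]),
                benefit=np.array([True, True, False]))
```

The closeness score can then be binned into hazard grades for mapping.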
To address the ambiguity and uncertainty of the criteria, fuzzy logic, which can handle ambiguous expressions, is introduced. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas with the highest hazard grade in the resulting integrated flood hazard map, and then compared the produced map with the existing flood risk maps. We also expect that applying the proposed methodology to the production of current flood risk maps will yield hazard maps that rank priority hazard areas and carry more varied and useful information than before. Keywords : Flood hazard map; levee break analysis; 2D analysis; MCDM; Fuzzy TOPSIS</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3481619','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3481619"><span><span class="hlt">Applying</span> Activity Based Costing (ABC) <span class="hlt">Method</span> to Calculate Cost Price in Hospital and Remedy Services</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Rajabi, A; Dabiri, A</p> <p>2012-01-01</p> <p>Background Activity Based Costing (ABC) is a costing methodology that began appearing in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC <span class="hlt">method</span> was used for calculating the cost price of remedial services in hospitals. <span class="hlt">Methods</span>: To <span class="hlt">apply</span> the ABC <span class="hlt">method</span>, Shahid Faghihi Hospital was selected.
First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis <span class="hlt">method</span>. Third, costs of administrative activity centers were allocated to the diagnostic and operational departments based on the cost driver. Finally, with regard to the usage of cost objectives from services of activity centers, the cost price of medical services was calculated. Results: The cost price from the ABC <span class="hlt">method</span> differs significantly from that of the tariff <span class="hlt">method</span>. In addition, the high proportion of indirect costs in the hospital indicates that resource capacities are not used properly. Conclusion: The cost price of remedial services is not calculated properly by the tariff <span class="hlt">method</span> when compared with the ABC <span class="hlt">method</span>. ABC calculates the cost price through suitable allocation mechanisms, whereas the tariff <span class="hlt">method</span> is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services.
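The allocation steps described above reduce to simple arithmetic: spread the support-department cost over activity centres by a cost driver, then divide each centre's total cost by its service volume. All centre names, driver shares, and figures below are hypothetical illustration data, not the hospital's:

```python
# Minimal activity-based costing sketch (hypothetical figures).
admin_cost = 90_000.0  # administrative department cost to allocate

# cost driver: share of administrative effort consumed (e.g. staff hours)
driver_share = {"radiology": 0.3, "laboratory": 0.2, "ward": 0.5}

# direct costs already traced to each activity centre
direct = {"radiology": 120_000.0, "laboratory": 80_000.0, "ward": 300_000.0}

# step 1: allocate the indirect (administrative) cost by the driver
total = {c: direct[c] + admin_cost * share for c, share in driver_share.items()}

# step 2: unit cost price = activity-centre cost / annual service volume
volume = {"radiology": 5_000, "laboratory": 16_000, "ward": 9_000}
unit_cost = {c: total[c] / volume[c] for c in total}
```

A tariff system, by contrast, fixes the unit price independently of this resource-usage calculation, which is the gap the abstract highlights.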
PMID:23113171</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_13 --> <div id="page_14" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="261"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008JJSEE..56.2.54A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008JJSEE..56.2.54A"><span>Proposal and Evaluation of Management <span class="hlt">Method</span> for College Mechatronics Education <span class="hlt">Applying</span> the Project Management</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto</p> <p></p> <p>In this research, we proposed and evaluated a management <span class="hlt">method</span> of college mechatronics education.
We <span class="hlt">applied</span> project management techniques to college mechatronics education, putting our management <span class="hlt">method</span> into practice in the seminar “Microcomputer Seminar” for third-year students of the Department of Electrical Engineering, Shibaura Institute of Technology. The 2006 Microcomputer Seminar was managed successfully, and questionnaire responses gave our management <span class="hlt">method</span> a favorable evaluation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/21255388-real-time-parameter-estimation-method-applied-mimo-process-its-comparison-offline-identification-method','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21255388-real-time-parameter-estimation-method-applied-mimo-process-its-comparison-offline-identification-method"><span>Real-Time Parameter Estimation <span class="hlt">Method</span> <span class="hlt">Applied</span> to a MIMO Process and its Comparison with an Offline Identification <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk</p> <p>2009-01-12</p> <p>An experiment-based <span class="hlt">method</span> is proposed for parameter estimation of a class of linear multivariable systems. The <span class="hlt">method</span> was <span class="hlt">applied</span> to a pressure-level control process. Experimental time-domain input/output data were utilized in a gray-box modeling approach. The form of the system transfer function matrix elements is assumed to be known a priori. Continuous-time transfer function matrix parameters were estimated in real time by the least-squares <span class="hlt">method</span>. Simulation results of the experimentally determined transfer function matrix compare very well with the experimental results. 
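The least-squares step of such a gray-box identification can be sketched as follows. The model here is a hypothetical discrete-time first-order single-input system, a stand-in for the paper's MIMO pressure-level process; the "true" parameters and input signal are invented for illustration:

```python
import numpy as np

# Sketch of least-squares gray-box identification: assume the model
# structure y[k+1] = a*y[k] + b*u[k] is known, estimate (a, b) from
# recorded input/output data. Parameters and data are hypothetical.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.2
u = rng.standard_normal(200)          # recorded input signal
y = np.zeros(201)
for k in range(200):                  # simulated "measured" output
    y[k + 1] = a_true * y[k] + b_true * u[k]

# Each regression row is [y[k], u[k]]; the target is y[k+1]
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)  # ≈ [0.9, 0.2] (exact here, since the data are noise-free)
```

In a real-time implementation the same normal equations would be updated recursively sample by sample instead of solved in one batch.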
For comparison and as an alternative to the proposed real-time estimation <span class="hlt">method</span>, we also implemented an offline identification <span class="hlt">method</span> using artificial neural networks and obtained fairly good results. The proposed <span class="hlt">methods</span> can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12398440','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12398440"><span>The complex phase gradient <span class="hlt">method</span> <span class="hlt">applied</span> to leaky Lamb waves.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lenoir, O; Conoir, J M; Izbicki, J L</p> <p>2002-10-01</p> <p>The classical phase gradient <span class="hlt">method</span> <span class="hlt">applied</span> to the characterization of the angular resonances of an immersed elastic plate, i.e., the angular poles of its reflection coefficient R, was proved to be efficient when their real parts are close to the real zeros of R and their imaginary parts are not too large compared to their real parts. This <span class="hlt">method</span> consists of plotting the partial reflection coefficient phase derivative with respect to the sine of the incidence angle, considered as real, versus incidence angle. In the vicinity of a resonance, this curve exhibits a Breit-Wigner shape, whose minimum is located at the real part of the pole and whose amplitude is the inverse of its imaginary part. However, when the imaginary part is large, this <span class="hlt">method</span> is not sufficiently accurate compared to the exact calculation of the complex angular root. 
An improvement of this <span class="hlt">method</span> consists of plotting, in 3D, in the complex angle plane and at a given frequency, the angular phase derivative with respect to the real part of the sine of the incidence angle, considered as complex. When the angular pole is reached, the 3D curve shows a clear-cut transition whose position is easily obtained.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3123264','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3123264"><span>Analytical <span class="hlt">methods</span> <span class="hlt">applied</span> to diverse types of Brazilian propolis</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2011-01-01</p> <p>Propolis is a bee product, composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as the species of bee. Brazil is an important supplier of propolis on the world market and, although green propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extraction procedures employed further affect its composition. <span class="hlt">Methods</span> used for extraction; analysis of the percentages of resins, wax and insoluble material in crude propolis; and determination of phenolic, flavonoid, amino acid and heavy metal contents are reviewed herein. 
Different chromatographic <span class="hlt">methods</span> <span class="hlt">applied</span> to the separation, identification and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct-insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune-stimulating, healing, anti-tumor, anti-inflammatory, antioxidant and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common <span class="hlt">methods</span> employed and overviews of their relative results are presented. PMID:21631940</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..310a2008M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..310a2008M"><span>Optimization of MR fluid Yield stress using <span class="hlt">Taguchi</span> <span class="hlt">Method</span> and Response Surface Methodology Techniques</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mangal, S. K.; Sharma, Vivek</p> <p>2018-02-01</p> <p>Magnetorheological (MR) fluids belong to a class of smart materials whose rheological characteristics, such as yield stress and viscosity, change in the presence of an <span class="hlt">applied</span> magnetic field. In this paper, optimization of the MR fluid constituents is carried out with on-state yield stress as the response parameter. For this, 18 samples of MR fluid are prepared using an L-18 orthogonal array. These samples are experimentally tested on an electromagnet setup developed and fabricated for this purpose. 
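The orthogonal-array analysis behind such a Taguchi optimization can be sketched with a toy example. The tiny L4(2^3) array and the response values below are hypothetical, not the paper's L-18 design or measurements:

```python
import math

# Sketch of Taguchi "larger-the-better" analysis: compute a signal-to-noise
# (S/N) ratio for each trial of an orthogonal array, average S/N per factor
# level, and pick the level with the highest mean S/N for each factor.
# The L4(2^3) array and responses are hypothetical.
L4 = [  # columns: levels (1 or 2) of factors A, B, C
    (1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1),
]
responses = [30.0, 35.0, 44.0, 48.0]  # e.g. measured yield stress, kPa

def sn_larger_better(y):
    # S/N = -10 * log10(mean(1/y^2)); one observation per trial here
    return -10.0 * math.log10(1.0 / y**2)

sn = [sn_larger_better(y) for y in responses]

best = {}
for factor in range(3):
    level_means = {}
    for level in (1, 2):
        vals = [sn[i] for i, row in enumerate(L4) if row[factor] == level]
        level_means[level] = sum(vals) / len(vals)
    best[factor] = max(level_means, key=level_means.get)
print(best)  # best level per factor: {0: 2, 1: 2, 2: 2}
```

With a real L-18 array the loop is identical; only the array, the number of levels, and the measured responses change.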
It has been found that the yield stress of the MR fluid depends mainly on the volume fraction of the iron particles and the type of carrier fluid used. The optimal combination of input parameters is found to be mineral oil as carrier fluid at 67% by volume, 300-mesh iron powder at 32% by volume, oleic acid at 0.5% by volume and tetra-methyl-ammonium-hydroxide at 0.7% by volume. This optimal combination gives a numerically predicted on-state yield stress of 48.197 kPa. An experimental confirmation test on the optimized MR fluid sample was then carried out, and the measured response matched the numerically obtained value well (less than 1% error).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015APS..APR.J6007R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015APS..APR.J6007R"><span>Where do Students Go Wrong in <span class="hlt">Applying</span> the Scientific <span class="hlt">Method</span>?</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rubbo, Louis; Moore, Christopher</p> <p>2015-04-01</p> <p>Non-science majors completing a liberal arts degree are frequently required to take a science course. Ideally, with the completion of a required science course, liberal arts students should demonstrate an improved capability in the application of the scientific <span class="hlt">method</span>. In previous work we have demonstrated that this is possible if explicit instruction is spent on the development of scientific reasoning skills. However, even with explicit instruction, students still struggle to <span class="hlt">apply</span> the scientific process. 
Counter to our expectations, the difficulty is not isolated to a single issue such as stating a testable hypothesis, designing an experiment, or arriving at a supported conclusion. Instead, students appear to struggle with every step in the process. This talk summarizes our work looking at and identifying where students struggle in the application of the scientific <span class="hlt">method</span>. This material is based upon work supported by the National Science Foundation under Grant No. 1244801.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930092280','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930092280"><span>Extrapolation techniques <span class="hlt">applied</span> to matrix <span class="hlt">methods</span> in neutron diffusion problems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Mccready, Robert R</p> <p>1956-01-01</p> <p>A general matrix <span class="hlt">method</span> is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix <span class="hlt">method</span> is <span class="hlt">applied</span> to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. 
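The idea of accelerating an iterative characteristic-value solver by extrapolating over previous iterants can be sketched with a toy power iteration plus Aitken extrapolation. The matrix, the iteration count, and the use of Aitken's delta-squared formula are illustrative assumptions, not the report's exact Gauss-Seidel scheme:

```python
import numpy as np

# Sketch: iterate toward the dominant eigenvalue, then extrapolate the last
# three estimates with Aitken's delta-squared formula to speed convergence.
# Hypothetical 2x2 matrix with eigenvalues 5 and 2.
A = np.array([[4.0, 1.0], [2.0, 3.0]])

def dominant_eigenvalue(A, iters=12):
    x = np.array([1.0, 0.0])          # starting vector mixing both eigenvectors
    estimates = []
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
        estimates.append(x @ A @ x)   # Rayleigh-quotient estimate of lambda
    # Aitken extrapolation on the last three (roughly geometric) estimates
    l0, l1, l2 = estimates[-3:]
    return l2 - (l2 - l1) ** 2 / ((l2 - l1) - (l1 - l0))

print(dominant_eigenvalue(A))  # ≈ 5.0
```

Because the iteration error decays roughly geometrically, the extrapolated value is far closer to the true eigenvalue than the last raw iterate, which is the effect the report exploits.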
Results for this example are indicated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=miyamoto&pg=3&id=EJ281444','ERIC'); return false;" href="https://eric.ed.gov/?q=miyamoto&pg=3&id=EJ281444"><span>A Technique of Two-Stage Clustering <span class="hlt">Applied</span> to Environmental and Civil Engineering and Related <span class="hlt">Methods</span> of Citation Analysis.</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Miyamoto, S.; Nakayama, K.</p> <p>1983-01-01</p> <p>A <span class="hlt">method</span> of two-stage clustering of literature based on citation frequency is <span class="hlt">applied</span> to 5,065 articles from 57 journals in environmental and civil engineering. Results of related <span class="hlt">methods</span> of citation analysis (hierarchical graph, clustering of journals, multidimensional scaling) <span class="hlt">applied</span> to the same set of articles are compared. Ten references are…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5191070','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5191070"><span>An IMU-to-Body Alignment <span class="hlt">Method</span> <span class="hlt">Applied</span> to Human Gait Analysis</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo</p> <p>2016-01-01</p> <p>This paper presents a novel calibration procedure as a simple, yet powerful, <span class="hlt">method</span> to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. 
The proposed <span class="hlt">method</span> is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration <span class="hlt">method</span> is <span class="hlt">applied</span>, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the <span class="hlt">method</span> also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27973406','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27973406"><span>An IMU-to-Body Alignment <span class="hlt">Method</span> <span class="hlt">Applied</span> to Human Gait Analysis.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo</p> <p>2016-12-10</p> <p>This paper presents a novel calibration procedure as a simple, yet powerful, <span class="hlt">method</span> to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. 
The proposed <span class="hlt">method</span> is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration <span class="hlt">method</span> is <span class="hlt">applied</span>, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the <span class="hlt">method</span> also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20010086238','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20010086238"><span>Analysis of Preconditioning and Relaxation Operators for the Discontinuous Galerkin <span class="hlt">Method</span> <span class="hlt">Applied</span> to Diffusion</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Atkins, H. L.; Shu, Chi-Wang</p> <p>2001-01-01</p> <p>The explicit stability constraint of the discontinuous Galerkin <span class="hlt">method</span> <span class="hlt">applied</span> to the diffusion operator decreases dramatically as the order of the <span class="hlt">method</span> is increased. Block Jacobi and block Gauss-Seidel preconditioner operators are examined for their effectiveness at accelerating convergence. 
A Fourier analysis for <span class="hlt">methods</span> of order 2 through 6 reveals that both preconditioner operators bound the eigenvalues of the discrete spatial operator. Additionally, in one dimension, the eigenvalues are grouped into two or three regions that are invariant with order of the <span class="hlt">method</span>. Local relaxation <span class="hlt">methods</span> are constructed that rapidly damp high frequencies for arbitrarily large time step.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008PhDT.......202D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008PhDT.......202D"><span><span class="hlt">Applying</span> multi-resolution numerical <span class="hlt">methods</span> to geodynamics</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Davies, David Rhodri</p> <p></p> <p>Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational <span class="hlt">methods</span>. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element <span class="hlt">methods</span> from the forefront of computational engineering can provide a means to address these issues. 
The problems examined achieve multi-resolution through one of two <span class="hlt">methods</span>. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such <span class="hlt">methods</span> improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is <span class="hlt">applied</span> to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such <span class="hlt">methods</span>. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/1439953-subplane-collision-probabilities-method-applied-control-rod-cusping','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1439953-subplane-collision-probabilities-method-applied-control-rod-cusping"><span>Subplane collision probabilities <span class="hlt">method</span> <span class="hlt">applied</span> to control rod cusping in 2D/1D</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Graham, Aaron M.; Collins, Benjamin S.; Stimpson, Shane G.</p> <p></p> <p>The MPACT code is being jointly developed by the University of Michigan and Oak Ridge National Laboratory. 
It uses the 2D/1D <span class="hlt">method</span> to solve neutron transport problems for reactors. The 2D/1D <span class="hlt">method</span> decomposes the problem into a stack of 2D planes, and uses a high fidelity transport <span class="hlt">method</span> to resolve all heterogeneity in each plane. These planes are then coupled axially using a lower order solver. Using this scheme, 3D solutions to the transport equation can be obtained at a much lower cost. One assumption made by the 2D/1D <span class="hlt">method</span> is that the materials are axially homogeneous for each 2D plane. Violation of this assumption requires homogenization, which can significantly reduce the accuracy of the calculation. This paper presents two new subgrid <span class="hlt">methods</span> to address this issue. The first <span class="hlt">method</span> is polynomial decusping, a simple correction used to address control rods partially inserted into a 2D plane. The second is the subplane collision probabilities <span class="hlt">method</span>, which is a more accurate, more robust subgrid <span class="hlt">method</span> that can be <span class="hlt">applied</span> to other axial heterogeneities. Each <span class="hlt">method</span> was <span class="hlt">applied</span> to a variety of problems. Results were compared to fine mesh solutions which had no axial heterogeneity and to Monte Carlo reference solutions generated using KENO-VI. It was shown that the polynomial decusping <span class="hlt">method</span> was effective in many cases, but it had some limitations, with 3D pin power errors as high as 25% compared to KENO-VI. 
In conclusion, the subplane collision probabilities <span class="hlt">method</span> performed much better, lowering the maximum pin power error to less than 5% in every calculation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1439953-subplane-collision-probabilities-method-applied-control-rod-cusping','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1439953-subplane-collision-probabilities-method-applied-control-rod-cusping"><span>Subplane collision probabilities <span class="hlt">method</span> <span class="hlt">applied</span> to control rod cusping in 2D/1D</span></a></p> <p><a target="_blank" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Graham, Aaron M.; Collins, Benjamin S.; Stimpson, Shane G.; ...</p> <p>2018-04-06</p> <p>The MPACT code is being jointly developed by the University of Michigan and Oak Ridge National Laboratory. It uses the 2D/1D <span class="hlt">method</span> to solve neutron transport problems for reactors. The 2D/1D <span class="hlt">method</span> decomposes the problem into a stack of 2D planes, and uses a high fidelity transport <span class="hlt">method</span> to resolve all heterogeneity in each plane. These planes are then coupled axially using a lower order solver. Using this scheme, 3D solutions to the transport equation can be obtained at a much lower cost. One assumption made by the 2D/1D <span class="hlt">method</span> is that the materials are axially homogeneous for each 2D plane. Violation of this assumption requires homogenization, which can significantly reduce the accuracy of the calculation. This paper presents two new subgrid <span class="hlt">methods</span> to address this issue. The first <span class="hlt">method</span> is polynomial decusping, a simple correction used to address control rods partially inserted into a 2D plane. 
The second is the subplane collision probabilities <span class="hlt">method</span>, which is a more accurate, more robust subgrid <span class="hlt">method</span> that can be <span class="hlt">applied</span> to other axial heterogeneities. Each <span class="hlt">method</span> was <span class="hlt">applied</span> to a variety of problems. Results were compared to fine mesh solutions which had no axial heterogeneity and to Monte Carlo reference solutions generated using KENO-VI. It was shown that the polynomial decusping <span class="hlt">method</span> was effective in many cases, but it had some limitations, with 3D pin power errors as high as 25% compared to KENO-VI. In conclusion, the subplane collision probabilities <span class="hlt">method</span> performed much better, lowering the maximum pin power error to less than 5% in every calculation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26950503','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26950503"><span>Different spectrophotometric <span class="hlt">methods</span> <span class="hlt">applied</span> for the analysis of binary mixture of flucloxacillin and amoxicillin: A comparative study.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed</p> <p>2016-05-15</p> <p>Three different spectrophotometric <span class="hlt">methods</span> were <span class="hlt">applied</span> for the quantitative analysis of flucloxacillin and amoxicillin in their binary mixture, namely, ratio subtraction, absorbance subtraction and amplitude modulation. A comparative study was done listing the advantages and the disadvantages of each <span class="hlt">method</span>. 
All the <span class="hlt">methods</span> were validated according to the ICH guidelines and the obtained accuracy, precision and repeatability were found to be within the acceptable limits. The selectivity of the proposed <span class="hlt">methods</span> was tested using laboratory prepared mixtures and assessed by <span class="hlt">applying</span> the standard addition technique. So, they can be used for the routine analysis of flucloxacillin and amoxicillin in their binary mixtures. Copyright © 2016 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..DFD.L2012Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..DFD.L2012Z"><span>Scalable <span class="hlt">Methods</span> for Eulerian-Lagrangian Simulation <span class="hlt">Applied</span> to Compressible Multiphase Flows</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zwick, David; Hackl, Jason; Balachandar, S.</p> <p>2017-11-01</p> <p>Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flow is the Eulerian-Lagrangian <span class="hlt">method</span>. While useful, this <span class="hlt">method</span> can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and <span class="hlt">apply</span> it to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the <span class="hlt">methods</span> presented here are viable for simulation of larger problems on modern supercomputers. 
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19810016673','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19810016673"><span><span class="hlt">Method</span> for <span class="hlt">applying</span> photographic resists to otherwise incompatible substrates</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Fuhr, W. (Inventor)</p> <p>1981-01-01</p> <p>A <span class="hlt">method</span> for <span class="hlt">applying</span> photographic resists to otherwise incompatible substrates, such as a baking enamel paint surface, is described wherein the uncured enamel paint surface is coated with a non-curing lacquer which is, in turn, coated with a partially cured lacquer. The non-curing lacquer adheres to the enamel and a photo resist material satisfactorily adheres to the partially cured lacquer. Once normal photo etching techniques are employed the lacquer coats can be easily removed from the enamel leaving the photo etched image. In the case of edge lighted instrument panels, a coat of uncured enamel is placed over the cured enamel followed by the lacquer coats and the photo resists which is exposed and developed. 
Once the etched uncured enamel is cured, the lacquer coats are removed leaving an etched panel.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JAnSc..64..333H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JAnSc..64..333H"><span>Parallel Implicit Runge-Kutta <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Coupled Orbit/Attitude Propagation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hatten, Noble; Russell, Ryan P.</p> <p>2017-12-01</p> <p>A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is <span class="hlt">applied</span> to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. 
However, linear-multistep <span class="hlt">methods</span> are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be <span class="hlt">applied</span> across multiple trajectories.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950004813','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950004813"><span>A note on the accuracy of spectral <span class="hlt">method</span> <span class="hlt">applied</span> to nonlinear conservation laws</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Shu, Chi-Wang; Wong, Peter S.</p> <p>1994-01-01</p> <p>Fourier spectral <span class="hlt">method</span> can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, Fourier spectral <span class="hlt">method</span> produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of Fourier spectral <span class="hlt">method</span> <span class="hlt">applied</span> to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. 
However, the numerical solution does contain accurate information that can be extracted by post-processing based on Gegenbauer polynomials.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005ammt.book.....M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005ammt.book.....M"><span><span class="hlt">Applied</span> Mathematical <span class="hlt">Methods</span> in Theoretical Physics</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Masujima, Michio</p> <p>2005-04-01</p> <p>All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: the first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing with a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. 
Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved by making direct use of the <span class="hlt">method</span> illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and <span class="hlt">applied</span> mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li class="active"><span>14</span></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_14 --> <div id="page_15" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li class="active"><span>15</span></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="281"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009SPIE.7298E..17V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009SPIE.7298E..17V"><span>Six Sigma <span 
class="hlt">methods</span> <span class="hlt">applied</span> to cryogenic coolers assembly line</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René</p> <p>2009-05-01</p> <p>The Six Sigma <span class="hlt">method</span> has been <span class="hlt">applied</span> to the manufacturing process of a rotary Stirling cooler, the RM2. The project is named NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project followed the DMAIC guideline in five stages: Define, Measure, Analyse, Improve, Control. The objective was set on the rate of coolers passing the performance test at first attempt, with a goal value of 95%. A team was gathered involving the people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) was <span class="hlt">applied</span> to the test bench, and gage R&R results showed that measurement was one of the root causes of variability in the RM2 process. Two more root causes were identified by the team after process mapping analysis: the regenerator filling factor and the cleaning procedure. The causes of measurement variability were identified and eradicated, as confirmed by new gage R&R results. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process was established after a new calibration process for the test bench, a new filling procedure for the regenerator, and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at first attempt was reached and maintained for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. 
The improvement in process capability has enabled the introduction of a sample testing procedure before delivery.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19830003225','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19830003225"><span>The transfer function <span class="hlt">method</span> for gear system dynamics <span class="hlt">applied</span> to conventional and minimum excitation gearing designs</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Mark, W. D.</p> <p>1982-01-01</p> <p>A transfer function <span class="hlt">method</span> for predicting the dynamic responses of gear systems with more than one gear mesh is developed and <span class="hlt">applied</span> to the NASA Lewis four-square gear fatigue test apparatus. <span class="hlt">Methods</span> for computing bearing-support force spectra and temporal histories of the total force transmitted by a gear mesh, the force transmitted by a single pair of teeth, and the maximum root stress in a single tooth are developed. Dynamic effects arising from other gear meshes in the system are included. A profile modification design <span class="hlt">method</span> to minimize the vibration excitation arising from a pair of meshing gears is reviewed and extended. Families of tooth loading functions required for such designs are developed and examined for potential excitation of individual tooth vibrations. 
The profile modification design <span class="hlt">method</span> is <span class="hlt">applied</span> to a pair of test gears.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MPLB...3240027L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MPLB...3240027L"><span>Design and operation of a bio-inspired micropump based on blood-sucking mechanism of mosquitoes</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Leu, Tzong-Shyng; Kao, Ruei-Hung</p> <p>2018-05-01</p> <p>This study develops a novel bionic micropump, mimicking the blood-sucking mechanism of mosquitoes, with a similar efficiency of 36%. The micropump is produced using micro-electro-mechanical system (MEMS) technology, with PDMS (polydimethylsiloxane) to fabricate the microchannel and an actuator membrane made of Fe-PDMS. It employs an Nd-FeB permanent magnet and a PZT element to actuate the Fe-PDMS membrane and generate flow. A lumped-model theory and the <span class="hlt">Taguchi</span> <span class="hlt">method</span> are used for numerical simulation of pulsating flow in the micropump. The size of the mosquito-like mouthpart is also varied to identify the best waveform for the transient flow processes. Based on the computational results for channel size and the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, an optimized actuation waveform is identified. The maximum pumping flow rate is 23.5 μL/min and the efficiency is 86%. The power density of the micropump is about 8 times that produced by the mosquito’s suction. 
Combining the theoretical design of the channel size with the <span class="hlt">Taguchi</span> <span class="hlt">method</span> and asymmetric actuation to find the optimized actuation waveform, the experiments show a maximum pumping flow rate of 23.5 μL/min and an efficiency of 86%; moreover, the power density of the micropump is 8 times higher than the mosquito’s.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/22290406-electrochemical-synthesis-characterization-zinc-oxalate-nanoparticles','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22290406-electrochemical-synthesis-characterization-zinc-oxalate-nanoparticles"><span>Electrochemical synthesis and characterization of zinc oxalate nanoparticles</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Shamsipur, Mojtaba, E-mail: mshamsipur@yahoo.com; Roushani, Mahmoud; Department of Chemistry, Ilam University, Ilam</p> <p>2013-03-15</p> <p>Highlights: ► Synthesis of zinc oxalate nanoparticles via electrolysis of a zinc plate anode in sodium oxalate solutions. ► Design of a <span class="hlt">Taguchi</span> orthogonal array to identify the optimal experimental conditions. ► Controlling the size and shape of particles via <span class="hlt">applied</span> voltage and oxalate concentration. ► Characterization of zinc oxalate nanoparticles by SEM, UV–vis, FT-IR and TG–DTA. - Abstract: A rapid, clean and simple electrodeposition <span class="hlt">method</span> was designed for the synthesis of zinc oxalate nanoparticles. Zinc oxalate nanoparticles in different sizes and shapes were electrodeposited by electrolysis of a zinc plate anode in sodium oxalate aqueous solutions. It was found that the size and shape of the product could be tuned by the electrolysis voltage, oxalate ion concentration, and stirring rate of the electrolyte solution. 
A <span class="hlt">Taguchi</span> orthogonal array design was employed to identify the optimal experimental conditions. The morphological characterization of the product was carried out by scanning electron microscopy. UV–vis and FT-IR spectroscopies were also used to characterize the electrodeposited nanoparticles. The TG–DTA studies of the nanoparticles indicated that the main thermal degradation occurs in two steps over a temperature range of 350–430 °C. In contrast to the existing <span class="hlt">methods</span>, the present study describes a process which can be easily scaled up for the production of nano-sized zinc oxalate powder.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/21147','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/21147"><span>A <span class="hlt">method</span> to evaluate performance reliability of individual subjects in laboratory research <span class="hlt">applied</span> to work settings.</span></a></p> <p><a target="_blank" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>1978-10-01</p> <p>This report presents a <span class="hlt">method</span> that may be used to evaluate the reliability of performance of individual subjects, particularly in <span class="hlt">applied</span> laboratory research. 
The <span class="hlt">method</span> is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29674008','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29674008"><span><span class="hlt">Applying</span> systems ergonomics <span class="hlt">methods</span> in sport: A systematic review.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hulme, Adam; Thompson, Jason; Plant, Katherine L; Read, Gemma J M; Mclean, Scott; Clacy, Amanda; Salmon, Paul M</p> <p>2018-04-16</p> <p>As sports systems become increasingly more complex, competitive, and technology-centric, there is a greater need for systems ergonomics <span class="hlt">methods</span> to consider the performance, health, and safety of athletes in context with the wider settings in which they operate. Therefore, the purpose of this systematic review was to identify and critically evaluate studies which have <span class="hlt">applied</span> a systems ergonomics research approach in the context of sports performance and injury management. Five databases (PubMed, Scopus, ScienceDirect, Web of Science, and SPORTDiscus) were searched for the dates 01 January 1990 to 01 August 2017, inclusive, for original peer-reviewed journal articles and conference papers. Reported analyses were underpinned by a recognised systems ergonomics <span class="hlt">method</span>, and study aims were related to the optimisation of sports performance (e.g. communication, playing style, technique, tactics, or equipment), and/or the management of sports injury (i.e. identification, prevention, or treatment). A total of seven articles were identified. Two articles were focussed on understanding and optimising sports performance, whereas five examined sports injury management. 
The <span class="hlt">methods</span> used were the Event Analysis of Systemic Teamwork, Cognitive Work Analysis (the Work Domain Analysis Abstraction Hierarchy), Rasmussen's Risk Management Framework, and the Systems Theoretic Accident Model and Processes <span class="hlt">method</span>. The individual sport application was distance running, whereas the team sports contexts examined were cycling, football, Australian Football League, and rugby union. The included systems ergonomics applications were highly flexible, covering both amateur and elite sports contexts. The studies were rated as valuable, providing descriptions of injury controls and causation, the factors influencing injury management, the allocation of responsibilities for injury prevention, as well as the factors and their interactions underpinning sports performance. Implications and future</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/27098','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/27098"><span>Critical path <span class="hlt">method</span> <span class="hlt">applied</span> to research project planning: Fire Economics Evaluation System (FEES)</span></a></p> <p><a target="_blank" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Earl B. Anderson; R. Stanton Hales</p> <p>1986-01-01</p> <p>The critical path <span class="hlt">method</span> (CPM) of network analysis (a) depicts precedence among the many activities in a project by a network diagram; (b) identifies critical activities by calculating their starting, finishing, and float times; and (c) displays possible schedules by constructing time charts. 
CPM was <span class="hlt">applied</span> to the development of the Forest Service's Fire...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=carbon+AND+cycle&pg=3&id=ED538566','ERIC'); return false;" href="https://eric.ed.gov/?q=carbon+AND+cycle&pg=3&id=ED538566"><span><span class="hlt">Applying</span> Item Response Theory <span class="hlt">Methods</span> to Design a Learning Progression-Based Science Assessment</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Chen, Jing</p> <p>2012-01-01</p> <p>Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study <span class="hlt">applies</span> Item Response Theory (IRT) based <span class="hlt">methods</span> to investigate how to design learning progression-based science assessments. The research questions of this study are: (1)…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29496467','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29496467"><span>Manufacturing of a novel double-function ssDNA aptamer for sensitive diagnosis and efficient neutralization of SEA.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sedighian, Hamid; Halabian, Raheleh; Amani, Jafar; Heiat, Mohammad; Taheri, Ramezan Ali; Imani Fooladi, Abbas Ali</p> <p>2018-05-01</p> <p>Staphylococcal enterotoxin A (SEA) is an enterotoxin produced mainly by Staphylococcus aureus. In recent years, it has become the most prevalent compound for staphylococcal food poisoning (SFP) around the world. 
In this study, we isolate new dual-function single-stranded DNA (ssDNA) aptamers using new <span class="hlt">methods</span>, such as the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, focusing on the detection and neutralization of the SEA enterotoxin in food and clinical samples. For the asymmetric polymerase chain reaction (PCR) optimization of each round of systematic evolution of ligands by exponential enrichment (SELEX), we use <span class="hlt">Taguchi</span> L9 orthogonal arrays, and the aptamer mobility shift assay (AMSA) is used for initial evaluation of the protein-DNA interactions on the last SELEX round. In our investigation, the dissociation constant (KD) value and the limit of detection (LOD) of the candidate aptamer were found to be 8.5 ± 0.91 nM and 5 ng/ml, respectively, using surface plasmon resonance (SPR). In the current study, the <span class="hlt">Taguchi</span> and mobility shift assay <span class="hlt">methods</span> were innovatively harnessed to improve the selection process and evaluate the protein-aptamer interactions. To the best of our knowledge, this is the first report on employing these two <span class="hlt">methods</span> in aptamer technology, especially against a bacterial toxin. Copyright © 2018 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940019946','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940019946"><span>A minimum cost tolerance allocation <span class="hlt">method</span> for rocket engines and robust rocket engine design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Gerth, Richard J.</p> <p>1993-01-01</p> <p>Rocket engine design follows three phases: systems design, parameter design, and tolerance design. 
Systems design and parameter design are most effectively conducted in a concurrent engineering (CE) environment that utilizes <span class="hlt">methods</span> such as Quality Function Deployment and <span class="hlt">Taguchi</span> <span class="hlt">methods</span>. However, tolerance allocation remains an art driven by experience, handbooks, and rules of thumb. It was desirable to develop an optimization approach to tolerancing. The case study engine was the STME gas generator cycle. The design of the major components had been completed and the functional relationship between the component tolerances and system performance had been computed using the Generic Power Balance model. The system performance nominals (thrust, MR, and Isp) and tolerances were already specified, as were an initial set of component tolerances. However, the question was whether there existed an optimal combination of tolerances that would result in the minimum cost without any degradation in system performance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26414154','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26414154"><span>Assessment of strobilurin fungicides' content in soya-based drinks by liquid micro-extraction and liquid chromatography with tandem mass spectrometry.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Campillo, Natalia; Iniesta, María Jesús; Viñas, Pilar; Hernández-Córdoba, Manuel</p> <p>2015-01-01</p> <p>Seven strobilurin fungicides were pre-concentrated from soya-based drinks using dispersive liquid-liquid micro-extraction (DLLME) with a prior protein precipitation step in acid medium. The enriched phase was analysed by liquid chromatography (LC) with dual detection, using diode array detection (DAD) and electrospray-ion trap tandem mass spectrometry (ESI-IT-MS/MS). 
After selecting 1-undecanol and methanol as the extractant and disperser solvents, respectively, for DLLME, the <span class="hlt">Taguchi</span> experimental <span class="hlt">method</span>, an orthogonal array design, was <span class="hlt">applied</span> to select the optimal solvent volumes and salt concentration in the aqueous phase. The matrix effect was evaluated and quantification was carried out using external aqueous calibration for DAD and a matrix-matched calibration <span class="hlt">method</span> for MS/MS. Detection limits in the 4-130 and 0.8-4.5 ng g(-1) ranges were obtained for DAD and MS/MS, respectively. The DLLME-LC-DAD-MS <span class="hlt">method</span> was <span class="hlt">applied</span> to the analysis of 10 different samples, none of which was found to contain residues of the studied fungicides.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA280808','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA280808"><span>Quality in the Operational Air Force: A Case of Misplaced Emphasis</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1994-05-01</p> <p>other quality advocates of the era. These men included Joseph Juran, Armand Feigenbaum, Kaoru Ishikawa, and Genichi <span class="hlt">Taguchi</span>. Juran contributed disciplined...planning theories, while Feigenbaum felt that producing quality could actually reduce production costs. In addition, Ishikawa and <span class="hlt">Taguchi</span> lent...statistically based problem solving techniques, but the more modern approaches of Ishikawa, <span class="hlt">Taguchi</span> and others. 
The operative concept of TQM is ’continuous</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/15020790','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/15020790"><span>REMARKS ON THE MAXIMUM ENTROPY <span class="hlt">METHOD</span> <span class="hlt">APPLIED</span> TO FINITE TEMPERATURE LATTICE QCD.</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>UMEDA, T.; MATSUFURU, H.</p> <p>2005-07-25</p> <p>We make remarks on the Maximum Entropy <span class="hlt">Method</span> (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests which one should examine to ensure the reliability of the results, and also <span class="hlt">apply</span> them using mock and lattice QCD data.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..314a2025D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..314a2025D"><span>Optimization of friction and wear behaviour of Al7075-Al2O3-B4C metal matrix composites using <span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dhanalakshmi, S.; Mohanasundararaju, N.; Venkatakrishnan, P. G.; Karthik, V.</p> <p>2018-02-01</p> <p>The present study deals with investigations relating to the dry sliding wear behaviour of the Al 7075 alloy, reinforced with Al2O3 and B4C. The hybrid composites are produced through the Liquid Metallurgy route - Stir casting <span class="hlt">method</span>. 
The amount of Al2O3 particles is varied as 3, 6, 9, 12 and 15 wt% and the amount of B4C is kept constant at 3 wt%. Experiments were conducted based on the plan of experiments generated through Taguchi’s technique. An L27 orthogonal array was selected for analysis of the data. The investigation aims to find the effect of <span class="hlt">applied</span> load, sliding speed and sliding distance on the wear rate and Coefficient of Friction (COF) of the hybrid Al7075-Al2O3-B4C composite and to determine the optimal parameters for obtaining the minimum wear rate. After wear testing, the samples were examined and analyzed using scanning electron microscopy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20090024221','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090024221"><span>Super-convergence of Discontinuous Galerkin <span class="hlt">Method</span> <span class="hlt">Applied</span> to the Navier-Stokes Equations</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Atkins, Harold L.</p> <p>2009-01-01</p> <p>The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin <span class="hlt">method</span> are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features that are dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin <span class="hlt">method</span> are <span class="hlt">applied</span> to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to converge at 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. 
Comparisons are made between the different discretizations and with theoretical analysis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19990087092','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19990087092"><span>Metamodels for Computer-Based Engineering Design: Survey and Recommendations</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.</p> <p>1997-01-01</p> <p>The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques, including design of experiments, response surface methodology, <span class="hlt">Taguchi</span> <span class="hlt">methods</span>, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of <span class="hlt">applying</span> traditional statistical techniques to approximate deterministic computer analysis codes. 
We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JTAM...48a..59V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JTAM...48a..59V"><span>Modified <span class="hlt">Method</span> of Simplest Equation <span class="hlt">Applied</span> to the Nonlinear Schrödinger Equation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vitanov, Nikolay K.; Dimitrova, Zlatinka I.</p> <p>2018-03-01</p> <p>We consider an extension of the methodology of the modified <span class="hlt">method</span> of simplest equation to the case of use of two simplest equations. The extended methodology is <span class="hlt">applied</span> for obtaining exact solutions of model nonlinear partial differential equations for deep water waves: the nonlinear Schrödinger equation. It is shown that the methodology works also for other equations of the nonlinear Schrödinger kind.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28388121','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28388121"><span>Kinetic energy partition <span class="hlt">method</span> <span class="hlt">applied</span> to ground state helium-like atoms.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Yu-Hsin; Chao, Sheng D</p> <p>2017-03-28</p> <p>We have used the recently developed kinetic energy partition (KEP) <span class="hlt">method</span> to solve the quantum eigenvalue problems for helium-like atoms and obtain precise ground state energies and wave-functions. 
The key to treating properly the electron-electron (repulsive) Coulomb potential energies for the KEP <span class="hlt">method</span> to be <span class="hlt">applied</span> is to introduce a "negative mass" term into the partitioned kinetic energy. A Hartree-like product wave-function from the subsystem wave-functions is used to form the initial trial function, and the variational search for the optimized adiabatic parameters leads to a precise ground state energy. This new approach sheds new light on the all-important problem of solving many-electron Schrödinger equations and hopefully opens a new way to predictive quantum chemistry. The results presented here give very promising evidence that an effective one-electron model can be used to represent a many-electron system, in the spirit of density functional theory.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4481851','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4481851"><span>A new <span class="hlt">method</span> to improve network topological similarity search: <span class="hlt">applied</span> to fold recognition</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lhota, John; Hauptman, Ruth; Hart, Thomas; Ng, Clara; Xie, Lei</p> <p>2015-01-01</p> <p>Motivation: Similarity search is the foundation of bioinformatics. It plays a key role in establishing structural, functional and evolutionary relationships between biological sequences. Although the power of the similarity search has increased steadily in recent years, a high percentage of sequences remain uncharacterized in the protein universe. Thus, new similarity search strategies are needed to efficiently and reliably infer the structure and function of new sequences. 
The existing paradigm for studying protein sequence, structure, function and evolution has been established based on the assumption that the protein universe is discrete and hierarchical. Cumulative evidence suggests that the protein universe is continuous. As a result, conventional sequence homology search <span class="hlt">methods</span> may not be able to detect novel structural, functional and evolutionary relationships between proteins from weak and noisy sequence signals. To overcome the limitations in existing similarity search <span class="hlt">methods</span>, we propose a new algorithmic framework—Enrichment of Network Topological Similarity (ENTS)—to improve the performance of large scale similarity searches in bioinformatics. Results: We <span class="hlt">apply</span> ENTS to a challenging unsolved problem: protein fold recognition. Our rigorous benchmark studies demonstrate that ENTS considerably outperforms state-of-the-art <span class="hlt">methods</span>. As the concept of ENTS can be <span class="hlt">applied</span> to any similarity metric, it may provide a general framework for similarity search on any set of biological entities, given their representation as a network.
Availability and implementation: Source code freely available upon request. Contact: lxie@iscb.org. PMID:25717198</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20000033300&hterms=design+experiments+Engineering&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Ddesign%2Bexperiments%2BEngineering','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20000033300&hterms=design+experiments+Engineering&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Ddesign%2Bexperiments%2BEngineering"><span>On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.</p> <p>1997-01-01</p> <p>Perhaps the most prevalent use of statistics in engineering design is through <span class="hlt">Taguchi</span>'s parameter and robust design -- using orthogonal arrays to compute signal-to-noise ratios in a process of design improvement. In our view, however, there is an equally exciting use of statistics in design that could become just as prevalent: it is the concept of metamodeling whereby statistical models are built to approximate detailed computer analysis codes. Although computers continue to get faster, analysis codes always seem to keep pace so that their computational time remains non-trivial. Through metamodeling, approximations of these codes are built that are orders of magnitude cheaper to run. These metamodels can then be linked to optimization routines for fast analysis, or they can serve as a bridge for integrating analysis codes across different domains.
In this paper we first review metamodeling techniques that encompass design of experiments, response surface methodology, <span class="hlt">Taguchi</span> <span class="hlt">methods</span>, neural networks, inductive learning, and kriging. We discuss their existing applications in engineering design and then address the dangers of <span class="hlt">applying</span> traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of metamodeling techniques in given situations and how common pitfalls can be avoided.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_15 --> <div id="page_16" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="301"> <li> <p><a target="_blank" 
onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20040173214&hterms=digestion&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Ddigestion','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20040173214&hterms=digestion&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Ddigestion"><span>Random-breakage mapping <span class="hlt">method</span> <span class="hlt">applied</span> to human DNA sequences</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lobrich, M.; Rydberg, B.; Cooper, P. K.; Chatterjee, A. (Principal Investigator)</p> <p>1996-01-01</p> <p>The random-breakage mapping <span class="hlt">method</span> [Game et al. (1990) Nucleic Acids Res., 18, 4453-4461] was <span class="hlt">applied</span> to DNA sequences in human fibroblasts. The methodology involves NotI restriction endonuclease digestion of DNA from irradiated cells, followed by pulsed-field gel electrophoresis, Southern blotting and hybridization with DNA probes recognizing the single copy sequences of interest. The Southern blots show a band for the unbroken restriction fragments and a smear below this band due to radiation-induced random breaks. This smear pattern contains two discontinuities in intensity at positions that correspond to the distance of the hybridization site to each end of the restriction fragment. By analyzing the positions of those discontinuities we confirmed the previously mapped position of the probe DXS1327 within a NotI fragment on the X chromosome, thus demonstrating the validity of the technique. We were also able to position the probes D21S1 and D21S15 with respect to the ends of their corresponding NotI fragments on chromosome 21. A third chromosome 21 probe, D21S11, has previously been reported to be close to D21S1, although an uncertainty about a second possible location existed. 
Since both probes D21S1 and D21S11 hybridized to a single NotI fragment and yielded a similar smear pattern, this uncertainty is removed by the random-breakage mapping <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1983PhRvC..28.1618S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1983PhRvC..28.1618S"><span>Resonating group <span class="hlt">method</span> as <span class="hlt">applied</span> to the spectroscopy of α-transfer reactions</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Subbotin, V. B.; Semjonov, V. M.; Gridnev, K. A.; Hefter, E. F.</p> <p>1983-10-01</p> <p>In the conventional approach to α-transfer reactions the finite- and/or zero-range distorted-wave Born approximation is used in liaison with a macroscopic description of the captured α particle in the residual nucleus. Here the specific example of 16O(6Li,d)20Ne reactions at different projectile energies is taken to present a microscopic resonating group <span class="hlt">method</span> analysis of the α particle in the final nucleus (for the reaction part the simple zero-range distorted-wave Born approximation is employed). In the discussion of suitable nucleon-nucleon interactions, force number one of the effective interactions presented by Volkov is shown to be most appropriate for the system considered. Application of the continuous analog of Newton's <span class="hlt">method</span> to the evaluation of the resonating group <span class="hlt">method</span> equations yields an increased accuracy with respect to traditional <span class="hlt">methods</span>. The resonating group <span class="hlt">method</span> description induces only minor changes in the structures of the angular distributions, but it does serve its purpose in yielding reliable and consistent spectroscopic information. 
NUCLEAR STRUCTURE 16O(6Li,d)20Ne; E=20 to 32 MeV; calculated B(E2), reduced widths, dσ/dΩ; extracted α-spectroscopic factors. ZRDWBA with microscopic RGM description of residual α particle in 20Ne; application of continuous analog of Newton's <span class="hlt">method</span>; tested and <span class="hlt">applied</span> Volkov force No. 1; direct mechanism.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3505841','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3505841"><span>A Review of Auditing <span class="hlt">Methods</span> <span class="hlt">Applied</span> to the Content of Controlled Biomedical Terminologies</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhu, Xinxin; Fan, Jung-Wei; Baorto, David M.; Weng, Chunhua; Cimino, James J.</p> <p>2012-01-01</p> <p>Although controlled biomedical terminologies have been with us for centuries, it is only in the last couple of decades that close attention has been paid to the quality of these terminologies. The result of this attention has been the development of auditing <span class="hlt">methods</span> that <span class="hlt">apply</span> formal <span class="hlt">methods</span> to assessing whether terminologies are complete and accurate. We have performed an extensive literature review to identify published descriptions of these <span class="hlt">methods</span> and have created a framework for characterizing them. The framework considers manual, systematic and heuristic <span class="hlt">methods</span> that use knowledge (within or external to the terminology) to measure quality factors of different aspects of the terminology content (terms, semantic classification, and semantic relationships). 
The quality factors examined included concept orientation, consistency, non-redundancy, soundness and comprehensive coverage. We reviewed 130 studies that were retrieved based on keyword search on publications in PubMed, and present our assessment of how they fit into our framework. We also identify which terminologies have been audited with the <span class="hlt">methods</span> and provide examples to illustrate each part of the framework. PMID:19285571</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19980017415','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19980017415"><span>Advanced Signal Processing <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Digital Mammography</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stauduhar, Richard P.</p> <p>1997-01-01</p> <p> without further support. Task 5: Better modeling does indeed make an improvement in the detection output. After the proposal ended, we came up with some new theoretical explanations that helps in understanding when the D4 filter should be better. This work is currently in the review process. Task 6: N/A. This no longer <span class="hlt">applies</span> in view of Tasks 4-5. Task 7: Comprehensive plans for further work have been completed. These plans are the subject of two proposals, one to NASA and one to HHS. 
These proposals represent plans for a complete evaluation of the <span class="hlt">methods</span> for identifying normal mammograms, augmented with significant further theoretical work.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2010-title24-vol4/pdf/CFR-2010-title24-vol4-sec1000-54.pdf','CFR'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2010-title24-vol4/pdf/CFR-2010-title24-vol4-sec1000-54.pdf"><span>24 CFR 1000.54 - What procedures <span class="hlt">apply</span> to complaints arising out of any of the <span class="hlt">methods</span> of providing for Indian...</span></a></p> <p><a target="_blank" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2010&page.go=Go">Code of Federal Regulations, 2010 CFR</a></p> <p></p> <p>2010-04-01</p> <p>... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false What procedures <span class="hlt">apply</span> to complaints arising out of any of the <span class="hlt">methods</span> of providing for Indian preference? 1000.54 Section 1000.54 Housing and... ACTIVITIES General § 1000.54 What procedures <span class="hlt">apply</span> to complaints arising out of any of the <span class="hlt">methods</span> of...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JMP....58i3102N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JMP....58i3102N"><span>Limitations of the background field <span class="hlt">method</span> <span class="hlt">applied</span> to Rayleigh-Bénard convection</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nobili, Camilla; Otto, Felix</p> <p>2017-09-01</p> <p>We consider Rayleigh-Bénard convection as modeled by the Boussinesq equations, in the case of infinite Prandtl numbers and with no-slip boundary condition. 
There is a broad interest in bounds of the upwards heat flux, as given by the Nusselt number Nu, in terms of the forcing via the imposed temperature difference, as given by the Rayleigh number in the turbulent regime Ra ≫ 1. In several studies, the background field <span class="hlt">method</span> <span class="hlt">applied</span> to the temperature field has been used to provide upper bounds on Nu in terms of Ra. In these applications, the background field <span class="hlt">method</span> comes in the form of a variational problem where one optimizes a stratified temperature profile subject to a certain stability condition; the <span class="hlt">method</span> is believed to capture the marginal stability of the boundary layer. The best available upper bound via this <span class="hlt">method</span> is Nu ≲ Ra^{1/3} (ln Ra)^{1/15}; it proceeds via the construction of a stable temperature background profile that increases logarithmically in the bulk. In this paper, we show that the background temperature field <span class="hlt">method</span> cannot provide a tighter upper bound in terms of the power of the logarithm. 
However, by another <span class="hlt">method</span>, one does obtain the tighter upper bound Nu ≲ Ra^{1/3} (ln ln Ra)^{1/3}, so that the result of this paper implies that the background temperature field <span class="hlt">method</span> is unphysical in the sense that it cannot provide the optimal bound.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011JPhCS.305a2108M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011JPhCS.305a2108M"><span>Coda Wave Interferometry <span class="hlt">Method</span> <span class="hlt">Applied</span> in Structural Monitoring to Assess Damage Evolution in Masonry and Concrete Structures</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Masera, D.; Bocca, P.; Grazzini, A.</p> <p>2011-07-01</p> <p>In this experimental program the main goal is to monitor the damage evolution in masonry and concrete structures by Acoustic Emission (AE) signal analysis <span class="hlt">applying</span> a well-known seismic <span class="hlt">method</span>. For this reason the concept of coda wave interferometry is <span class="hlt">applied</span> to the AE signals recorded during the tests. Acoustic Emission (AE) is a very effective non-destructive technique <span class="hlt">applied</span> to identify micro- and macro-defects and their temporal evolution in several materials. This technique permits estimating the velocity of ultrasound wave propagation and the amount of energy released during fracture propagation, giving information on the criticality of the ongoing process. By means of AE monitoring, an experimental analysis on a set of reinforced masonry walls under variable amplitude loading and strengthened reinforced concrete (RC) beams under monotonic static load has been carried out. 
In the reinforced masonry wall, cyclic fatigue stress has been <span class="hlt">applied</span> to accelerate the static creep and to forecast the corresponding creep behaviour of masonry under static long-time loading. During the tests, the evaluation of fracture growth is monitored by coda wave interferometry, which represents a novel approach in structural monitoring based on the relative velocity change of the AE coda signal. In general, the sensitivity of coda waves has been used to estimate velocity changes in fault zones, in volcanoes, in a mining environment, and in ultrasound experiments. This <span class="hlt">method</span> uses multiple scattered waves, which travelled through the material along numerous paths, to infer tiny temporal changes in the wave velocity. The <span class="hlt">applied</span> <span class="hlt">method</span> has the potential to be used as a "damage-gauge" for monitoring velocity changes as a sign of damage evolution in masonry and concrete structures.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008JApSc...8..453L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008JApSc...8..453L"><span>The Study of an Integrated Rating System for Supplier Quality Performance in the Semiconductor Industry</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lee, Yu-Cheng; Yen, Tieh-Min; Tsai, Chih-Hung</p> <p></p> <p>This study provides an integrated model of Supplier Quality Performance Assessment (SQPA) activity for the semiconductor industry through introducing the ISO 9001 management framework, Importance-Performance Analysis (IPA), Supplier Quality Performance Assessment and <span class="hlt">Taguchi's</span> Signal-to-Noise Ratio (S/N) techniques. This integrated model provides an SQPA methodology to create value for all members under mutual cooperation and trust in the supply chain. 
This <span class="hlt">method</span> helps organizations build a complete SQPA framework, linking organizational objectives and SQPA activities and optimizing rating techniques to promote supplier quality improvement. The techniques used in SQPA activities are easily understood. A case study involving a design house is presented to illustrate the model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20150018404','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20150018404"><span>De-Aliasing Through Over-Integration <span class="hlt">Applied</span> to the Flux Reconstruction and Discontinuous Galerkin <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.</p> <p>2015-01-01</p> <p>High-order <span class="hlt">methods</span> are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) <span class="hlt">method</span> presents a unifying framework for a wide class of high-order <span class="hlt">methods</span> including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based <span class="hlt">methods</span> that are derived via the differential form of the governing equations. Whereas high-order <span class="hlt">methods</span> have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when <span class="hlt">applied</span> to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG <span class="hlt">methods</span>; however, their study regarding FR <span class="hlt">methods</span> has mostly been limited to the selection of the nodal points used within each cell. 
Here, we extend some of the de-aliasing techniques used for DG <span class="hlt">methods</span>, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1863.0020V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1863.0020V"><span><span class="hlt">Applying</span> the <span class="hlt">method</span> of fundamental solutions to harmonic problems with singular boundary conditions</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Valtchev, Svilen S.; Alves, Carlos J. S.</p> <p>2017-07-01</p> <p>The <span class="hlt">method</span> of fundamental solutions (MFS) is known to produce highly accurate numerical results for elliptic boundary value problems (BVP) with smooth boundary conditions, posed in analytic domains. However, due to the analyticity of the shape functions in its approximation basis, the MFS is usually disregarded when the boundary functions possess singularities. In this work we present a modification of the classical MFS which can be <span class="hlt">applied</span> for the numerical solution of the Laplace BVP with Dirichlet boundary conditions exhibiting jump discontinuities. In particular, a set of harmonic functions with discontinuous boundary traces is added to the MFS basis. 
The accuracy of the proposed <span class="hlt">method</span> is compared with the results from the classical MFS.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1922n0003P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1922n0003P"><span>Parallel fast multipole boundary element <span class="hlt">method</span> <span class="hlt">applied</span> to computational homogenization</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ptaszny, Jacek</p> <p>2018-01-01</p> <p>In the present work, a fast multipole boundary element <span class="hlt">method</span> (FMBEM) and a parallel computer code for the 3D elasticity problem are developed and <span class="hlt">applied</span> to the computational homogenization of a solid containing spherical voids. The system of equations is solved by using the GMRES iterative solver. The boundary of the body is discretized by using quadrilateral serendipity elements with an adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized by using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. 
The achieved speedup as a function of the number of threads is examined.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013CEJE....3..497A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013CEJE....3..497A"><span>Urban drainage control <span class="hlt">applying</span> rational <span class="hlt">method</span> and geographic information technologies</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Aldalur, Beatriz; Campo, Alicia; Fernández, Sandra</p> <p>2013-09-01</p> <p>The objective of this study is to develop a <span class="hlt">method</span> of controlling urban drainages in the town of Ingeniero White motivated by the problems arising as a result of floods, waterlogging and the combination of southeasterly winds and high tides. A Rational <span class="hlt">Method</span> was <span class="hlt">applied</span> to control urban watersheds and used tools of Geographic Information Technology (GIT). A Geographic Information System was developed on the basis of 28 panchromatic aerial photographs of 2005. They were georeferenced with control points measured with Global Positioning Systems (basin: 6 km2). Flow rates of basins and sub-basins were calculated and it was verified that the existing open channels have a low slope with the presence of permanent water and generate stagnation of water favored by the presence of trash. For the outlet of the storm drains, it is proposed to use an existing channel to evacuate the flow. 
The solution proposed in this work is complemented by the placement of three pumping stations: one on a channel to drain rainwater, allowing excess water to be drained from the low-lying area where the town of Ingeniero White is located, and two others to drain the excess liquid from the port area.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JCoPh.262..344B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JCoPh.262..344B"><span>A characteristic based volume penalization <span class="hlt">method</span> for general evolution problems <span class="hlt">applied</span> to compressible viscous flows</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Brown-Dymkoski, Eric; Kasimov, Nurlybek; Vasilyev, Oleg V.</p> <p>2014-04-01</p> <p>In order to introduce solid obstacles into flows, several different <span class="hlt">methods</span> are used, including volume penalization <span class="hlt">methods</span> which prescribe appropriate boundary conditions by <span class="hlt">applying</span> local forcing to the constitutive equations. One well known <span class="hlt">method</span> is Brinkman penalization, which models solid obstacles as porous media. While it has been adapted for compressible, incompressible, viscous and inviscid flows, it is limited in the types of boundary conditions that it imposes, as are most volume penalization <span class="hlt">methods</span>. Typically, approaches are limited to Dirichlet boundary conditions. In this paper, Brinkman penalization is extended for generalized Neumann and Robin boundary conditions by introducing hyperbolic penalization terms with characteristics pointing inward on solid obstacles. 
This Characteristic-Based Volume Penalization (CBVP) <span class="hlt">method</span> is a comprehensive approach to conditions on immersed boundaries, providing for homogeneous and inhomogeneous Dirichlet, Neumann, and Robin boundary conditions on hyperbolic and parabolic equations. This CBVP <span class="hlt">method</span> can be used to impose boundary conditions for both integrated and non-integrated variables in a systematic manner that parallels the prescription of exact boundary conditions. Furthermore, the <span class="hlt">method</span> does not depend upon a physical model, as with the porous media approach used in Brinkman penalization, and is therefore flexible for various physical regimes and general evolutionary equations. Here, the <span class="hlt">method</span> is <span class="hlt">applied</span> to scalar diffusion and to direct numerical simulation of compressible, viscous flows. With the Navier-Stokes equations, both homogeneous and inhomogeneous Neumann boundary conditions are demonstrated through external flow around an adiabatic and heated cylinder. Theoretical and numerical examination shows that the error from penalized Neumann and Robin boundary conditions can be rigorously controlled through an a priori penalization parameter η. 
The error on a transient boundary is found to converge as O</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5457251','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5457251"><span>Optimizing Injection Molding Parameters of Different Halloysites Type-Reinforced Thermoplastic Polyurethane Nanocomposites via <span class="hlt">Taguchi</span> Complemented with ANOVA</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Gaaz, Tayser Sumer; Sulong, Abu Bakar; Kadhum, Abdul Amir H.; Nassir, Mohamed H.; Al-Amiery, Ahmed A.</p> <p>2016-01-01</p> <p> coordinating <span class="hlt">Taguchi</span> and ANOVA approaches. Seemingly, mHNTs has shown its very important role in the resulting product. PMID:28774069</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28774069','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28774069"><span>Optimizing Injection Molding Parameters of Different Halloysites Type-Reinforced Thermoplastic Polyurethane Nanocomposites via <span class="hlt">Taguchi</span> Complemented with ANOVA.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gaaz, Tayser Sumer; Sulong, Abu Bakar; Kadhum, Abdul Amir H; Nassir, Mohamed H; Al-Amiery, Ahmed A</p> <p>2016-11-22</p> <p> out by coordinating <span class="hlt">Taguchi</span> and ANOVA approaches. 
Seemingly, mHNTs has shown its very important role in the resulting product.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4789698','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4789698"><span>What <span class="hlt">methods</span> are used to <span class="hlt">apply</span> positive deviance within healthcare organisations? A systematic review</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Baxter, Ruth; Taylor, Natalie; Kellar, Ian; Lawton, Rebecca</p> <p>2016-01-01</p> <p>Background The positive deviance approach focuses on those who demonstrate exceptional performance, despite facing the same constraints as others. ‘Positive deviants’ are identified and hypotheses about how they succeed are generated. These hypotheses are tested and then disseminated within the wider community. The positive deviance approach is being increasingly <span class="hlt">applied</span> within healthcare organisations, although limited guidance exists and different <span class="hlt">methods</span>, of varying quality, are used. This paper systematically reviews healthcare applications of the positive deviance approach to explore how positive deviance is defined, the quality of existing applications and the <span class="hlt">methods</span> used within them, including the extent to which staff and patients are involved. <span class="hlt">Methods</span> Peer-reviewed articles, published prior to September 2014, reporting empirical research on the use of the positive deviance approach within healthcare, were identified from seven electronic databases. A previously defined four-stage process for positive deviance in healthcare was used as the basis for data extraction. 
Quality assessments were conducted using a validated tool, and a narrative synthesis approach was followed. Results 37 of 818 articles met the inclusion criteria. The positive deviance approach was most frequently <span class="hlt">applied</span> within North America, in secondary care, and to address healthcare-associated infections. Research predominantly identified positive deviants and generated hypotheses about how they succeeded. The approach and processes followed were poorly defined. Research quality was low, articles lacked detail and comparison groups were rarely included. Applications of positive deviance typically lacked staff and/or patient involvement, and the <span class="hlt">methods</span> used often required extensive resources. Conclusion Further research is required to develop high quality yet practical <span class="hlt">methods</span> which involve staff and patients in all stages of the positive deviance approach. The efficacy and efficiency</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20040115816&hterms=soft+computing&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dsoft%2Bcomputing','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20040115816&hterms=soft+computing&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dsoft%2Bcomputing"><span>Determining flexor-tendon repair techniques via soft computing</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Johnson, M.; Firoozbakhsh, K.; Moniem, M.; Jamshidi, M.</p> <p>2001-01-01</p> <p>An SC-based multi-objective decision-making <span class="hlt">method</span> for determining the optimal flexor-tendon repair technique from experimental and clinical survey data, and with variable circumstances, was presented. Results were compared with those from the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. 
Using the <span class="hlt">Taguchi</span> <span class="hlt">method</span> results in the need to perform ad-hoc decisions when the outcomes for individual objectives are contradictory to a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous straightforward computational process in which changing preferences and importance of differing objectives are easily accommodated. Also, adding more objectives is straightforward and easily accomplished. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/11838250','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/11838250"><span>Determining flexor-tendon repair techniques via soft computing.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Johnson, M; Firoozbakhsh, K; Moniem, M; Jamshidi, M</p> <p>2001-01-01</p> <p>An SC-based multi-objective decision-making <span class="hlt">method</span> for determining the optimal flexor-tendon repair technique from experimental and clinical survey data, and with variable circumstances, was presented. Results were compared with those from the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. 
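The fuzzy multi-objective scheme described in this abstract (fuzzy-set representations of each objective, aggregated with adjustable importance weights) can be sketched as follows. The objective names, membership shapes, weights, and data below are invented for illustration; they are not the authors' actual model.

```python
# Hypothetical sketch of a fuzzy weighted multi-objective ranking
# (illustrative objectives, memberships, and weights; not the authors' model).

def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy "good outcome" membership per objective (assumed ranges).
MEMBERSHIP = {
    "strength": (0.0, 100.0, 101.0),   # repair strength, N: higher is better
    "gap":      (-1.0, 0.0, 3.0),      # gap formation, mm: lower is better
}

def score(technique, weights):
    """Weighted average of the fuzzy memberships, one per objective."""
    total_w = sum(weights.values())
    s = 0.0
    for obj, w in weights.items():
        lo, peak, hi = MEMBERSHIP[obj]
        s += w * tri(technique[obj], lo, peak, hi)
    return s / total_w

weights = {"strength": 0.7, "gap": 0.3}  # preferences are easily changed
techniques = {
    "two_strand":  {"strength": 60.0, "gap": 2.0},
    "four_strand": {"strength": 85.0, "gap": 1.0},
}
best = max(techniques, key=lambda t: score(techniques[t], weights))
```

Changing the weights (or adding another objective with its own membership function) re-ranks the techniques without any ad-hoc decision step, which is the flexibility the abstract contrasts with the Taguchi analysis.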
Using the <span class="hlt">Taguchi</span> <span class="hlt">method</span> results in the need to perform ad-hoc decisions when the outcomes for individual objectives are contradictory to a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous straightforward computational process in which changing preferences and importance of differing objectives are easily accommodated. Also, adding more objectives is straightforward and easily accomplished. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMPA34A..07G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMPA34A..07G"><span>A Framework <span class="hlt">Applied</span> Three Ways: Responsive <span class="hlt">Methods</span> of Co-Developing and Implementing Community Science Solutions for Local Impact</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Goodwin, M.; Pandya, R.; Udu-gama, N.; Wilkins, S.</p> <p>2017-12-01</p> <p>While one-size-fits all may work for most hats, it rarely does for communities. Research products, <span class="hlt">methods</span> and knowledge may be usable at a local scale, but <span class="hlt">applying</span> them often presents a challenge due to issues like availability, accessibility, awareness, lack of trust, and time. However, in an environment with diminishing federal investment in issues related climate change, natural hazards, and natural resource use and management, the ability of communities to access and leverage science has never been more urgent. 
Established, yet responsive frameworks and <span class="hlt">methods</span> can help scientists and communities work together to identify and address specific challenges and leverage science to make a local impact. Through the launch of over 50 community science projects since 2013, the Thriving Earth Exchange (TEX) has created a living framework consisting of a set of milestones by which teams of scientists and community leaders navigate the challenges of working together. Central to the framework are context, trust, project planning and refinement, relationship management and community impact. We find that careful and respectful partnership management results in trust and an open exchange of information. Community science partnerships grounded in local priorities result in the development and exchange of stronger decision-relevant tools, resources and knowledge. This presentation will explore three <span class="hlt">methods</span> TEX uses to <span class="hlt">apply</span> its framework to community science partnerships: cohort-based collaboration, online dialogues, and one-on-one consultation. The choice of <span class="hlt">method</span> should be responsive to a community's needs and working style. For example, a community may require customized support, desire the input and support of peers, or require consultation with multiple experts before deciding on a course of action. 
Knowing and <span class="hlt">applying</span> the <span class="hlt">method</span> of engagement best suited to achieve the community's objectives will ensure that the science is most effectively translated and <span class="hlt">applied</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29628913','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29628913"><span>A Retrospective Review of Microbiological <span class="hlt">Methods</span> <span class="hlt">Applied</span> in Studies Following the Deepwater Horizon Oil Spill.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Shuangfei; Hu, Zhong; Wang, Hui</p> <p>2018-01-01</p> <p>The Deepwater Horizon (DWH) oil spill in the Gulf of Mexico in 2010 resulted in serious damage to local marine and coastal environments. In addition to the physical removal and chemical dispersion of spilled oil, biodegradation by indigenous microorganisms was regarded as the most effective way for cleaning up residual oil. Different microbiological <span class="hlt">methods</span> were <span class="hlt">applied</span> to investigate the changes and responses of bacterial communities after the DWH oil spills. 
By summarizing and analyzing these microbiological <span class="hlt">methods</span>, giving recommendations and proposing some <span class="hlt">methods</span> that have not been used, this review aims to provide constructive guidelines for microbiological studies after environmental disasters, especially those involving organic pollutants.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_16 --> <div id="page_17" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="321"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5876298','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5876298"><span>A Retrospective Review of Microbiological <span class="hlt">Methods</span> <span class="hlt">Applied</span> in Studies Following the Deepwater Horizon Oil
Spill</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhang, Shuangfei; Hu, Zhong; Wang, Hui</p> <p>2018-01-01</p> <p>The Deepwater Horizon (DWH) oil spill in the Gulf of Mexico in 2010 resulted in serious damage to local marine and coastal environments. In addition to the physical removal and chemical dispersion of spilled oil, biodegradation by indigenous microorganisms was regarded as the most effective way for cleaning up residual oil. Different microbiological <span class="hlt">methods</span> were <span class="hlt">applied</span> to investigate the changes and responses of bacterial communities after the DWH oil spills. By summarizing and analyzing these microbiological <span class="hlt">methods</span>, giving recommendations and proposing some <span class="hlt">methods</span> that have not been used, this review aims to provide constructive guidelines for microbiological studies after environmental disasters, especially those involving organic pollutants. PMID:29628913</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21207106','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21207106"><span>A reflective lens: <span class="hlt">applying</span> critical systems thinking and visual <span class="hlt">methods</span> to ecohealth research.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cleland, Deborah; Wyborn, Carina</p> <p>2010-12-01</p> <p>Critical systems methodology has been advocated as an effective and ethical way to engage with the uncertainty and conflicting values common to ecohealth problems. 
We use two contrasting case studies, coral reef management in the Philippines and national park management in Australia, to illustrate the value of critical systems approaches in exploring how people respond to environmental threats to their physical and spiritual well-being. In both cases, we used visual <span class="hlt">methods</span>--participatory modeling and rich picturing, respectively. The critical systems methodology, with its emphasis on reflection, guided an appraisal of the research process. A discussion of these two case studies suggests that visual <span class="hlt">methods</span> can be usefully <span class="hlt">applied</span> within a critical systems framework to offer new insights into ecohealth issues across a diverse range of socio-political contexts. With this article, we hope to open up a conversation with other practitioners to expand the use of visual <span class="hlt">methods</span> in integrated research.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..310a2108C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..310a2108C"><span>Optimization of Robotic Spray Painting process Parameters using <span class="hlt">Taguchi</span> <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chidhambara, K. V.; Latha Shankar, B.; Vijaykumar</p> <p>2018-02-01</p> <p>Automated spray painting process is gaining interest in industry and research recently due to extensive application of spray painting in automobile industries. Automating spray painting process has advantages of improved quality, productivity, reduced labor, clean environment and particularly cost effectiveness. 
This study investigates the performance characteristics of an industrial robot Fanuc 250ib for an automated painting process using statistical tool Taguchi’s Design of Experiment technique. The experiment is designed using Taguchi’s L25 orthogonal array by considering three factors and five levels for each factor. The objective of this work is to explore the major control parameters and to optimize the same for the improved quality of the paint coating measured in terms of Dry Film thickness(DFT), which also results in reduced rejection. Further Analysis of Variance (ANOVA) is performed to know the influence of individual factors on DFT. It is observed that shaping air and paint flow are the most influencing parameters. Multiple regression model is formulated for estimating predicted values of DFT. Confirmation test is then conducted and comparison results show that error is within acceptable level.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24089751','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24089751"><span><span class="hlt">Applying</span> electric field to charged and polar particles between metallic plates: extension of the Ewald <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Takae, Kyohei; Onuki, Akira</p> <p>2013-09-28</p> <p>We develop an efficient Ewald <span class="hlt">method</span> of molecular dynamics simulation for calculating the electrostatic interactions among charged and polar particles between parallel metallic plates, where we may <span class="hlt">apply</span> an electric field with an arbitrary size. We use the fact that the potential from the surface charges is equivalent to the sum of those from image charges and dipoles located outside the cell. 
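The image-charge equivalence just described can be illustrated for the simplest case, a single point charge between two grounded plates. This is a toy sketch (unit charge, unitless potential), not the paper's full Ewald treatment: summing the image series +q at z = 2nL + z0 and -q at z = 2nL - z0 drives the potential on a plate to zero.

```python
import math

def potential(r, z, z0, q=1.0, L=1.0, n_images=200):
    """Potential at radial distance r and height z due to a charge q at
    height z0 between grounded plates z=0 and z=L, built from the
    image-charge series: +q at z = 2nL + z0, -q at z = 2nL - z0."""
    phi = 0.0
    for n in range(-n_images, n_images + 1):
        for sign, zi in ((+1.0, 2 * n * L + z0), (-1.0, 2 * n * L - z0)):
            phi += sign * q / math.hypot(r, z - zi)
    return phi

# On the lower plate the image contributions cancel pairwise: phi ~ 0.
phi_plate = potential(r=0.3, z=0.0, z0=0.4)
# Just next to the charge itself, the potential is large and positive.
phi_near = potential(r=0.3, z=0.4, z0=0.4 + 1e-3)
```

The same construction is what makes the boundary condition on a metallic plate exact, which is the property the Ewald extension above exploits.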
We present simulation results on boundary effects of charged and polar fluids, formation of ionic crystals, and formation of dipole chains, where the <span class="hlt">applied</span> field and the image interaction are crucial. For polar fluids, we find a large deviation of the classical Lorentz-field relation between the local field and the <span class="hlt">applied</span> field due to pair correlations along the <span class="hlt">applied</span> field. As general aspects, we clarify the difference between the potential-fixed and the charge-fixed boundary conditions and examine the relationship between the discrete particle description and the continuum electrostatics.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3146952','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3146952"><span>Simulation <span class="hlt">methods</span> to estimate design power: an overview for <span class="hlt">applied</span> research</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2011-01-01</p> <p>Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. <span class="hlt">Methods</span> We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. 
This flexible approach arises naturally from the model used to derive conventional power equations, but extends those <span class="hlt">methods</span> to accommodate arbitrarily complex designs. The <span class="hlt">method</span> is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, <span class="hlt">applied</span> researchers. We illustrate the <span class="hlt">method</span> using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation <span class="hlt">methods</span> offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. 
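The simulation recipe the abstract describes can be sketched for its simplest case, a two-arm individually randomized design with a continuous outcome. The effect size, sample size, and large-sample z-test below are illustrative choices, not the article's examples.

```python
import math
import random

def simulated_power(n_per_arm=100, effect=0.4, sims=2000, alpha_z=1.959964,
                    seed=1):
    """Estimate power by repeatedly simulating the trial and testing the
    difference in means with a large-sample z-test (unit variance arms)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, 1.0) for _ in range(n_per_arm)]
        diff = sum(treated) / n_per_arm - sum(control) / n_per_arm
        se = math.sqrt(2.0 / n_per_arm)  # known unit variance per arm
        if abs(diff / se) > alpha_z:
            rejections += 1
    return rejections / sims

power = simulated_power()
# Analytic check for these settings: Phi(effect/se - 1.96) ~ 0.81.
```

For complex designs (clustering, unequal allocation, attrition), only the data-generating loop changes; the reject-and-count logic stays the same, which is why the approach extends so naturally.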
PMID:21689447</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=Asian+AND+theatre&id=EJ783549','ERIC'); return false;" href="https://eric.ed.gov/?q=Asian+AND+theatre&id=EJ783549"><span>The Intertextual <span class="hlt">Method</span> for Art Education <span class="hlt">Applied</span> in Japanese Paper Theatre--A Study on Discovering Intercultural Differences</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Paatela-Nieminen, Martina</p> <p>2008-01-01</p> <p>In art education we need <span class="hlt">methods</span> for studying works of art and visual culture interculturally because there are many multicultural art classes and little consensus as to how to interpret art in different cultures. In this article my central aim was to <span class="hlt">apply</span> the intertextual <span class="hlt">method</span> that I developed in my doctoral thesis for Western art education to…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27865731','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27865731"><span>Physicochemical characterization, modelling and optimization of ultrasono-assisted acid pretreatment of two Pennisetum sp. using <span class="hlt">Taguchi</span> and artificial neural networking for enhanced delignification.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mohapatra, Sonali; Dandapat, Snigdha Jyotsna; Thatoi, Hrudayanath</p> <p>2017-02-01</p> <p>Acid as well as ultrasono-assisted acid pretreatment of lignocellulosic biomass of two Pennisetum sp.; Denanath grass (DG) and Hybrid Napier grass (HNG) have been investigated for enhanced delignification and maximum exposure of cellulose for production of bioethanol. 
Screening of pretreatment with different acids such as H2SO4, HCl, H3PO4 and HNO3 was optimized for different temperatures, soaking times and acid concentrations using <span class="hlt">Taguchi</span> orthogonal array and the data obtained were statistically validated using artificial neural networking. HCl was found to be the most effective acid for pretreatment of both the Pennisetum sp. The optimized conditions of HCl pretreatment were acid concentration of 1% and 1.5%, soaking time 130 and 50 min and temperature 121 °C and 110 °C, which yielded maximum delignification of 33.0% and 33.8% for DG and HNG respectively. Further, ultrasono-assisted HCl pretreatment with a power supply of 100 W, temperature of 353 K, and duty cycle of 70% resulted in significantly higher delignification, 80.4% and 82.1% for DG and HNG respectively, than acid pretreatment alone. Investigation using SEM, FTIR and autofluorescence microscopy for both acid and ultrasono-assisted acid pretreated lignocellulosic biomass revealed conformational changes of pretreated lignocellulosic biomass with decreased lignin content and increased exposure of cellulose, with greater effectiveness in the case of the ultrasono-assisted acid pretreatment condition. Copyright © 2016.
Published by Elsevier Ltd.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008EJASP2008..114T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008EJASP2008..114T"><span>Analytical Plug-In <span class="hlt">Method</span> for Kernel Density Estimator <span class="hlt">Applied</span> to Genetic Neutrality Study</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Troudi, Molka; Alimi, Adel M.; Saoudi, Samir</p> <p>2008-12-01</p> <p>The plug-in <span class="hlt">method</span> enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in <span class="hlt">method</span> is proposed. The mean integrated square error (MISE) depends directly upon [InlineEquation not available: see fulltext.] which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of [InlineEquation not available: see fulltext.], the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. 
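As a simplified stand-in for the analytic plug-in approximation described above, the normal-reference choice for the second-derivative functional yields the familiar closed-form bandwidth. This is the textbook rule of thumb under a Gaussian reference, not the authors' exact estimator.

```python
def normal_reference_bandwidth(n, sigma):
    """Bandwidth minimizing the asymptotic MISE of a Gaussian-kernel
    density estimator when the unknown pdf's curvature functional is
    approximated by that of a normal: h = sigma * (4 / (3n)) ** (1/5)."""
    return sigma * (4.0 / (3.0 * n)) ** 0.2

h = normal_reference_bandwidth(n=100, sigma=1.0)  # ~0.42 for n=100
```

The O(n^(-1/5)) shrinkage is the same rate the plug-in method targets; the plug-in refinement replaces the normal-reference curvature with an estimate from the data itself.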
Finally, they are <span class="hlt">applied</span> to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24000628','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24000628"><span>[New <span class="hlt">methods</span> of treatment <span class="hlt">applied</span> in the hospital of Sochi during the Great Patriotic War].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Artiukhov, S A</p> <p>2013-05-01</p> <p>During the Great Patriotic War 1941-1945 Sochi was turned into the largest hospital base in the south of the USSR. All told, 335 thousand wounded and seriously ill soldiers were treated in the hospitals of Sochi. During the war physicians <span class="hlt">applied</span> many new, including previously unknown, medical <span class="hlt">methods</span> of treatment. Poor provision with medical equipment, instruments, bandages and medicines was made up for by using local resources.
Adoption of new treatment <span class="hlt">methods</span> based on the use of local medicines allowed Sochi's physicians to save many lives during the war.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3026998','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3026998"><span>Enhancement of 2,3-Butanediol Production by Klebsiella oxytoca PTCC 1402</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Anvari, Maesomeh; Safari Motlagh, Mohammad Reza</p> <p>2011-01-01</p> <p>Optimal operating parameters of 2,3-Butanediol production using Klebsiella oxytoca under submerged culture conditions are determined by using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. The effect of different factors including medium composition, pH, temperature, mixing intensity, and inoculum size on 2,3-butanediol production was analyzed using the <span class="hlt">Taguchi</span> <span class="hlt">method</span> in three levels. Based on these analyses the optimum concentrations of glucose, acetic acid, and succinic acid were found to be 6, 0.5, and 1.0 (% w/v), respectively. Furthermore, optimum values for temperature, inoculum size, pH, and the shaking speed were determined as 37°C, 8 (g/L), 6.1, and 150 rpm, respectively. The optimal combination of factors obtained from the proposed DOE methodology was further validated by conducting fermentation experiments and the obtained results revealed an enhanced 2,3-Butanediol yield of 44%.
PMID:21318172</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS1007a2031M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS1007a2031M"><span>The robust design for improving crude palm oil quality in Indonesian Mill</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Maretia Benu, Siti; Sinulingga, Sukaria; Matondang, Nazaruddin; Budiman, Irwan</p> <p>2018-04-01</p> <p>This research was conducted in palm oil mill in Sumatra Utara Province, Indonesia. Currently, the main product of this mill is Crude Palm Oil (CPO) and hasn’t met the expected standard quality. CPO is the raw material for many fat derivative products. The generally stipulated quality criteria are dirt count, free fatty acid, and moisture of CPO. The aim of this study is to obtain the optimal setting for factor’s affect the quality of CPO. The optimal setting will result in an improvement of product’s quality. In this research, Experimental Design with <span class="hlt">Taguchi</span> <span class="hlt">Method</span> is used. Steps of this <span class="hlt">method</span> are identified influence factors, select the orthogonal array, processed data using ANOVA test and signal to noise ratio, and confirmed the research using Quality Loss Function. 
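The signal-to-noise step listed above can be sketched with the standard larger-the-better S/N ratio evaluated over a small orthogonal array. The L4 array is standard; the factor assignments and response values below are invented for illustration.

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-the-better S/N ratio: -10 * log10(mean(1 / y^2))."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# L4(2^3) orthogonal array: 3 two-level factors (levels 0/1) in 4 runs.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
# Hypothetical replicated responses for each run (e.g. measured yields).
responses = [[38.0, 40.0], [42.0, 43.0], [45.0, 44.0], [50.0, 49.0]]
sn = [sn_larger_is_better(r) for r in responses]

def main_effect(factor):
    """Mean S/N at level 1 minus mean S/N at level 0 for one factor."""
    hi = [s for run, s in zip(L4, sn) if run[factor] == 1]
    lo = [s for run, s in zip(L4, sn) if run[factor] == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(f) for f in range(3)]  # pick the level with higher S/N
```

The optimum setting is read off factor by factor as the level with the higher mean S/N; ANOVA on the same S/N values then apportions each factor's contribution.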
Based on the <span class="hlt">Taguchi</span> <span class="hlt">Method</span>, this study suggests setting fruit maturity at 75.4-86.9%, digester temperature at 95°C and press at 21 Ampere to reduce quality deviation by 42.42%.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1175708','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/1175708"><span><span class="hlt">Method</span> for <span class="hlt">applying</span> a high-temperature bond coat on a metal substrate, and related compositions and articles</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Hasz, Wayne Charles; Sangeeta, D</p> <p>2006-04-18</p> <p>A <span class="hlt">method</span> for <span class="hlt">applying</span> a bond coat on a metal-based substrate is described. A slurry which contains braze material and a volatile component is deposited on the substrate. The slurry can also include bond coat material. Alternatively, the bond coat material can be <span class="hlt">applied</span> afterward, in solid form or in the form of a second slurry. The slurry and bond coat are then dried and fused to the substrate. A repair technique using this slurry is also described, along with related compositions and articles.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/874954','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/874954"><span><span class="hlt">Method</span> for <span class="hlt">applying</span> a high-temperature bond coat on a metal substrate, and related compositions and articles</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Hasz, Wayne Charles; Sangeeta, D</p> <p>2002-01-01</p> <p>A <span class="hlt">method</span> for <span class="hlt">applying</span> a bond coat on a metal-based substrate is described.
A slurry which contains braze material and a volatile component is deposited on the substrate. The slurry can also include bond coat material. Alternatively, the bond coat material can be <span class="hlt">applied</span> afterward, in solid form or in the form of a second slurry. The slurry and bond coat are then dried and fused to the substrate. A repair technique using this slurry is also described, along with related compositions and articles.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21689447','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21689447"><span>Simulation <span class="hlt">methods</span> to estimate design power: an overview for <span class="hlt">applied</span> research.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E</p> <p>2011-06-20</p> <p>Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those <span class="hlt">methods</span> to accommodate arbitrarily complex designs. 
The <span class="hlt">method</span> is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, <span class="hlt">applied</span> researchers. We illustrate the <span class="hlt">method</span> using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation <span class="hlt">methods</span> offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JGS....19..157B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JGS....19..157B"><span>Geometric <span class="hlt">methods</span> for estimating representative sidewalk widths <span class="hlt">applied</span> to Vienna's streetscape surfaces database</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Brezina, Tadej; Graser, Anita; Leth, Ulrich</p> <p>2017-04-01</p> <p>Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. 
The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel <span class="hlt">methods</span> for estimating representative sidewalk widths and <span class="hlt">applies</span> them to the official Viennese streetscape surface database. The first two <span class="hlt">methods</span> use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third <span class="hlt">method</span> utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between <span class="hlt">methods</span> as well as to previous research, existing databases and guideline requirements on sidewalk widths. 
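The maximum-inscribed-circle idea behind the first two width-estimation methods can be illustrated numerically. This is a rough sketch, not the paper's implementation: it grid-samples a polygon and takes twice the largest distance to the boundary as the representative width; the example polygon (a 10 m by 2 m sidewalk strip) and the grid step are illustrative assumptions.

```python
# Sketch: representative width of a pedestrian-area polygon via an
# approximate maximum inscribed circle (grid search, illustrative only).
import math

def point_segment_distance(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to segment (ax, ay)-(bx, by)."""
    vx, vy = bx - ax, by - ay
    wx, wy = px - ax, py - ay
    t = max(0.0, min(1.0, (wx * vx + wy * vy) / (vx * vx + vy * vy)))
    return math.hypot(px - (ax + t * vx), py - (ay + t * vy))

def point_in_polygon(px, py, poly):
    """Ray-casting point-in-polygon test."""
    inside = False
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        if (ay > py) != (by > py):
            x_int = ax + (py - ay) * (bx - ax) / (by - ay)
            if px < x_int:
                inside = not inside
    return inside

def inscribed_radius(poly, step=0.05):
    """Largest boundary distance over a grid of interior points."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    n = len(poly)
    best = 0.0
    nx = int(round((max(xs) - min(xs)) / step)) + 1
    ny = int(round((max(ys) - min(ys)) / step)) + 1
    for i in range(nx):
        px = min(xs) + i * step
        for j in range(ny):
            py = min(ys) + j * step
            if point_in_polygon(px, py, poly):
                d = min(point_segment_distance(px, py, *poly[k], *poly[(k + 1) % n])
                        for k in range(n))
                best = max(best, d)
    return best

sidewalk = [(0.0, 0.0), (10.0, 0.0), (10.0, 2.0), (0.0, 2.0)]  # 10 m x 2 m strip
width = 2.0 * inscribed_radius(sidewalk)  # representative width, ~2 m here
```

The paper's minimum circumscribing circle is the complementary bound (for this rectangle, diameter sqrt(10² + 2²)); production GIS code would use a geometry library rather than a grid search.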
Finally, we discuss possible applications of these <span class="hlt">methods</span> for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940031363','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940031363"><span><span class="hlt">Applying</span> transfer matrix <span class="hlt">method</span> to the estimation of the modal characteristics of the NASA Mini-Mass Truss</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Shen, Ji-Yao; Taylor, Lawrence W., Jr.</p> <p>1994-01-01</p> <p>It is beneficial to use a distributed parameter model for large space structures because the approach minimizes the number of model parameters. Holzer's transfer matrix <span class="hlt">method</span> provides a useful means to simplify and standardize the procedure for solving the system of partial differential equations. Any large space structure can be broken down into sub-structures with simple elastic and dynamical properties. For each single element, such as a beam, tether, or rigid body, we can derive the corresponding transfer matrix. Combining these element matrices enables the solution of the global system equations. The characteristic equation can then be formed by satisfying the appropriate boundary conditions. Natural frequencies and mode shapes can then be determined by searching for the roots of the characteristic equation at frequencies within the range of interest. This paper <span class="hlt">applies</span> this methodology, and the maximum likelihood estimation <span class="hlt">method</span>, to refine the modal characteristics of the NASA Mini-Mast Truss by successively matching the theoretical response to the test data of the truss.
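The transfer-matrix procedure this record describes (element matrices chained together, then a root search on the characteristic function) can be shown on a toy case. The sketch below treats axial vibration of a uniform fixed-free rod in normalized units (E = A = rho = 1), not the Mini-Mast model; the element count and search bracket are illustrative. For this rod the first root should land at omega = pi/2.

```python
# Sketch: Holzer-style transfer matrix chain for axial vibration of a
# uniform fixed-free rod, normalized units (wave speed c = 1, EA = 1).
import math

def residual(omega, n_elem=4, length=1.0):
    """Propagate the state [u, N] through n_elem element transfer matrices
    from the fixed end (u = 0) and return the axial force at the free end,
    which must vanish at a natural frequency."""
    beta = omega            # beta = omega / c with c = 1
    l = length / n_elem
    u, N = 0.0, 1.0         # fixed end: zero displacement, arbitrary force scale
    for _ in range(n_elem):
        c_, s_ = math.cos(beta * l), math.sin(beta * l)
        u, N = c_ * u + (s_ / beta) * N, -beta * s_ * u + c_ * N
    return N

def first_natural_frequency(lo=0.1, hi=3.0, tol=1e-10):
    """Bisection on the characteristic function over a bracket that
    contains exactly one sign change."""
    f_lo = residual(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) * f_lo > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

omega1 = first_natural_frequency()  # analytic value: pi / 2
```

Beams, tethers, and rigid bodies swap in larger state vectors and element matrices, but the chaining and root search are the same.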
The <span class="hlt">method</span> is being <span class="hlt">applied</span> to more complex configurations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19790006480','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19790006480"><span>A comparative study of Conroy and Monte Carlo <span class="hlt">methods</span> <span class="hlt">applied</span> to multiple quadratures and multiple scattering</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Deepak, A.; Fluellen, A.</p> <p>1978-01-01</p> <p>An efficient numerical <span class="hlt">method</span> of multiple quadratures, the Conroy <span class="hlt">method</span>, is <span class="hlt">applied</span> to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the <span class="hlt">method</span> is given and comparisons are drawn with the more familiar Monte Carlo <span class="hlt">method</span>. Both <span class="hlt">methods</span> are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy <span class="hlt">method</span>, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. 
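The contrast this record draws between random and systematic sample points can be sketched numerically. Conroy's closed symmetric point sets are not reproduced here; a Fibonacci lattice stands in as a simple systematic number-theoretic point set, and the integrand is an illustrative toy whose exact integral over the unit square is 1/4.

```python
# Sketch: plain Monte Carlo vs. a systematic (Fibonacci lattice) point set
# for a 2-D quadrature. The lattice is a stand-in for Conroy's point sets.
import math
import random

def f(x, y):
    return x * y  # exact integral over [0,1]^2 is 1/4

def monte_carlo(n, seed=0):
    """Randomly distributed sample points."""
    rng = random.Random(seed)
    return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

def fibonacci_lattice(n=610, gen=377):
    """Systematically distributed sample points: i/n paired with (i*gen mod n)/n,
    with consecutive Fibonacci numbers as (n, gen)."""
    return sum(f(i / n, (i * gen % n) / n) for i in range(n)) / n

mc_est = monte_carlo(500)
lattice_est = fibonacci_lattice()
```

For smooth integrands the systematic set typically lands closer to the true value for the same number of evaluations, which is the practical argument the abstract makes for the Conroy method.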
The <span class="hlt">methods</span> are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ResPh...7..813B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ResPh...7..813B"><span><span class="hlt">Applying</span> the Network Simulation <span class="hlt">Method</span> for testing chaos in a resistively and capacitively shunted Josephson junction model</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bellver, Fernando Gimeno; Garratón, Manuel Caravaca; Soto Meca, Antonio; López, Juan Antonio Vera; Guirao, Juan L. G.; Fernández-Martínez, Manuel</p> <p></p> <p>In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation <span class="hlt">Method</span>. Such a numerical approach establishes a formal equivalence among physical transport processes and electrical networks, and hence, it can be <span class="hlt">applied</span> to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence allows to <span class="hlt">apply</span> the circuit theory to several scientific and technological problems. In this work, the Fast Fourier Transform has been <span class="hlt">applied</span> for chaos detection purposes and the calculations have been carried out in PSpice, an electrical circuit software. Overall, it holds that such a numerical approach leads to quickly computationally solve Josephson differential models. 
An empirical application regarding the study of the Josephson model completes the paper.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19446483','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19446483"><span>[Work organisation improvement <span class="hlt">methods</span> <span class="hlt">applied</span> to activities of Blood Transfusion Establishments (BTE): Lean Manufacturing, VSM, 5S].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bertholey, F; Bourniquel, P; Rivery, E; Coudurier, N; Follea, G</p> <p>2009-05-01</p> <p>Continuous improvement of efficiency as well as new expectations from customers (quality and safety of blood products) and employees (working conditions) imply constant efforts in Blood Transfusion Establishments (BTE) to improve work organisation. The Lean <span class="hlt">method</span> (from "lean" meaning "thin") aims at identifying waste in the process (overproduction, waiting, over-processing, inventory, transport, motion) and then reducing it by mapping the value chain (Value Stream Mapping, VSM), i.e. by determining the added value of each step of the process from a customer perspective. Lean also standardizes operations while involving and empowering all staff. The name 5S comes from the first letters of the five operations of a Japanese management technique: sort, set in order, shine, standardize, sustain. The 5S <span class="hlt">method</span> fosters teamwork and changes the way management is performed. The Lean VSM <span class="hlt">method</span> has been <span class="hlt">applied</span> to blood processing (component laboratory) in the Pays de la Loire BTE.
The Lean 5S <span class="hlt">method</span> has been <span class="hlt">applied</span> to blood processing, quality control, purchasing, warehouse, human resources and quality assurance in the Rhône-Alpes BTE. Feedback from both BTE shows that these <span class="hlt">methods</span> improved (1) processes and working conditions from a quality perspective, (2) staff satisfaction, and (3) efficiency. These experiences, implemented in two BTE for different processes, confirm the applicability and usefulness of these <span class="hlt">methods</span> for improving work organisation in BTE.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29600446','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29600446"><span>A Simple and Useful <span class="hlt">Method</span> to <span class="hlt">Apply</span> Exogenous NO Gas to Plant Systems: Bell Pepper Fruits as a Model.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Palma, José M; Ruiz, Carmelo; Corpas, Francisco J</p> <p>2018-01-01</p> <p>Nitric oxide (NO) is involved in many physiological plant processes, including germination, growth and development of roots, flower setting and development, senescence, and fruit ripening. In the latter process, NO has been reported to play a role opposite to that of ethylene. Thus, treatment of fruits with NO may delay ripening, independently of whether they are climacteric or nonclimacteric. Different <span class="hlt">methods</span> of <span class="hlt">applying</span> NO to plant systems have been reported, involving sodium nitroprusside, NONOates, DETANO, or GSNO, to investigate physiological and molecular consequences. In this chapter a <span class="hlt">method</span> to treat plant materials with NO is provided using bell pepper fruits as a model.
This <span class="hlt">method</span> is cheap, free of side effects, and easy to <span class="hlt">apply</span> since it only requires common chemicals and tools available in any biology laboratory.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19800017889','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19800017889"><span>Cork-resin ablative insulation for complex surfaces and <span class="hlt">method</span> for <span class="hlt">applying</span> the same</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Walker, H. M.; Sharpe, M. H.; Simpson, W. G.
(Inventor)</p> <p>1980-01-01</p> <p>A <span class="hlt">method</span> of <span class="hlt">applying</span> cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20050199072','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20050199072"><span>Verification, Validation, and Solution Quality in Computational Physics: CFD <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Ice Sheet Physics</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Thompson, David E.</p> <p>2005-01-01</p> <p>Procedures and <span class="hlt">methods</span> for verification of coding algebra and for validation of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those <span class="hlt">methods</span>, and how they might be <span class="hlt">applied</span> to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these <span class="hlt">methods</span> to glacier modeling are discussed.
After establishing sources of uncertainty and <span class="hlt">methods</span> for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA589393','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA589393"><span>Application of LCR Waves to Inspect Aircraft Structures</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2013-01-01</p> <p>Mechanical Engineering (COBEM 2011). Proceedings of COBEM, 2011. Natal, RN, Brasil Analysis of the behavior of Lcr Waves propagating in Steel bars using...<span class="hlt">Taguchi</span> <span class="hlt">Method</span>. 21st International Congress of Mechanical Engineering (COBEM 2011). Proceedings of COBEM, 2011. Natal, RN, Brasil . Application</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JaJAP..57gLF08M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JaJAP..57gLF08M"><span>Accuracy improvement in measurement of arterial wall elasticity by <span class="hlt">applying</span> pulse inversion to phased-tracking <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Miyachi, Yukiya; Arakawa, Mototaka; Kanai, Hiroshi</p> <p>2018-07-01</p> <p>In our studies on ultrasonic elasticity assessment, minute change in the thickness of the arterial wall was measured by the phased-tracking <span class="hlt">method</span>.
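The pulse-inversion principle used in the elasticity record above can be demonstrated with a toy signal model (not the authors' ultrasound processing): summing the echoes of a pulse and its phase-inverted copy through a quadratic nonlinearity cancels the odd-order (fundamental) content and leaves the even-order second harmonic. The 0.3 distortion coefficient is an illustrative assumption.

```python
# Sketch: pulse inversion on a toy quadratic-nonlinearity echo model.
# Summing echoes of inverted pulses cancels the fundamental, keeps 2nd harmonic.
import math

N = 1024
ts = [2 * math.pi * i / N for i in range(N)]

def echo(sign):
    """Toy propagation: linear term plus a weak quadratic distortion."""
    return [sign * math.sin(t) + 0.3 * (sign * math.sin(t)) ** 2 for t in ts]

summed = [a + b for a, b in zip(echo(+1), echo(-1))]  # pulse-inversion sum

def amplitude(signal, k):
    """Magnitude of the k-th Fourier component via a simple DFT projection."""
    re = sum(s * math.cos(k * t) for s, t in zip(signal, ts)) * 2 / N
    im = sum(s * math.sin(k * t) for s, t in zip(signal, ts)) * 2 / N
    return math.hypot(re, im)
```

Here `amplitude(summed, 1)` is essentially zero while `amplitude(summed, 2)` carries the harmonic signal, which is why tracking on the summed echo is robust to fundamental-band multiple-reflection noise.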
However, most images in carotid artery examinations contain multiple-reflection noise, making it difficult to evaluate arterial wall elasticity precisely. In the present study, a modified phased-tracking <span class="hlt">method</span> using the pulse inversion <span class="hlt">method</span> was examined to reduce the influence of the multiple-reflection noise. Moreover, aliasing in the harmonic components was corrected by the fundamental components. The conventional and proposed <span class="hlt">methods</span> were <span class="hlt">applied</span> to a pulsated tube phantom mimicking the arterial wall. For the conventional <span class="hlt">method</span>, the elasticity was 298 kPa without multiple-reflection noise and 353 kPa with multiple-reflection noise on the posterior wall. That of the proposed <span class="hlt">method</span> was 302 kPa without multiple-reflection noise and 297 kPa with multiple-reflection noise on the posterior wall. Therefore, the proposed <span class="hlt">method</span> was very robust against multiple-reflection noise.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JPhCS.516a2023B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JPhCS.516a2023B"><span>Non-destructive research <span class="hlt">methods</span> <span class="hlt">applied</span> on materials for the new generation of nuclear reactors</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bartošová, I.; Slugeň, V.; Veterníková, J.; Sojak, S.; Petriska, M.; Bouhaddane, A.</p> <p>2014-06-01</p> <p>The paper is aimed at non-destructive experimental techniques <span class="hlt">applied</span> to materials for the new generation of nuclear reactors (GEN IV). Alongside the development of these reactors, materials also have to be developed to guarantee the high-standard properties needed for construction.
These properties are high-temperature resistance, radiation resistance and resistance to other negative effects. Nevertheless, the changes in their mechanical properties should be only minimal. Materials that fulfil these requirements are analysed in this work. The ferritic-martensitic (FM) steels and ODS steels are studied in detail. Microstructural defects, which can occur in structural materials and can also accumulate during irradiation due to neutron flux or alpha, beta and gamma radiation, were analysed using different spectroscopic <span class="hlt">methods</span>, such as positron annihilation spectroscopy and Barkhausen noise, which were <span class="hlt">applied</span> for measurements of three different FM steels (T91, P91 and E97) as well as one ODS steel (ODS Eurofer).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22889858','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22889858"><span><span class="hlt">Applying</span> sociodramatic <span class="hlt">methods</span> in teaching transition to palliative care.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Baile, Walter F; Walters, Rebecca</p> <p>2013-03-01</p> <p>We introduce the technique of sociodrama, describe its key components, and illustrate how this simulation <span class="hlt">method</span> was <span class="hlt">applied</span> in a workshop format to address the challenge of discussing transition to palliative care. We describe how warm-up exercises prepared 15 learners who provide direct clinical care to patients with cancer for a dramatic portrayal of this dilemma.
We then show how small-group brainstorming led to the creation of a challenging scenario wherein highly optimistic family members of a 20-year-old young man with terminal acute lymphocytic leukemia responded to information about the lack of further anticancer treatment with anger and blame toward the staff. We illustrate how the facilitators, using sociodramatic techniques of doubling and role reversal, helped learners to understand and articulate the hidden feelings of fear and loss behind the family's emotional reactions. By modeling effective communication skills, the facilitators demonstrated how key communication skills, such as empathic responses to anger and blame and using "wish" statements, could transform the conversation from one of conflict to one of problem solving with the family. We also describe how we set up practice dyads to give the learners an opportunity to try out new skills with each other. An evaluation of the workshop and similar workshops we conducted is presented. Copyright © 2013 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1992NIMPB..68..125P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1992NIMPB..68..125P"><span>The fundamental parameter <span class="hlt">method</span> <span class="hlt">applied</span> to X-ray fluorescence analysis with synchrotron radiation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pantenburg, F. J.; Beier, T.; Hennrich, F.; Mommsen, H.</p> <p>1992-05-01</p> <p>Quantitative X-ray fluorescence analysis <span class="hlt">applying</span> the fundamental parameter <span class="hlt">method</span> is usually restricted to monochromatic excitation sources. It is shown here, that such analyses can be performed as well with a white synchrotron radiation spectrum. 
To determine absolute elemental concentration values it is necessary to know the spectral distribution of this spectrum. A newly designed and tested experimental setup, which uses the synchrotron radiation emitted from electrons in a bending magnet of ELSA (electron stretcher accelerator of the University of Bonn), is presented. The determination of the exciting spectrum, described by the given electron beam parameters, is limited due to uncertainties in the vertical electron beam size and divergence. We describe a <span class="hlt">method</span> which allows us to determine the relative and absolute spectral distributions needed for accurate analysis. First test measurements of different alloys and standards of known composition demonstrate that it is possible to determine exact concentration values in bulk and trace element analysis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JThSc..27...89N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JThSc..27...89N"><span>Optimization of performance and emission characteristics of PPCCI engine fuelled with ethanol and diesel blends using grey-<span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Natarajan, S.; Pitchandi, K.; Mahalakshmi, N. V.</p> <p>2018-02-01</p> <p>The performance and emission characteristics of a PPCCI engine fuelled with ethanol and diesel blends were investigated on a single-cylinder, air-cooled CI engine. In order to achieve the optimal process response with a limited number of experimental cycles, multi-objective grey relational analysis was applied for solving a multiple response optimization problem.
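The grey relational grade computation underlying the grey-Taguchi approach in this record can be sketched compactly. The response table below is invented for illustration (an efficiency to maximize, an emission to minimize), not engine data; zeta = 0.5 is the customary distinguishing coefficient.

```python
# Sketch: grey relational analysis for a multi-response optimization.
# Responses are normalized, deviations from the ideal are converted to grey
# relational coefficients, and their mean per run is the grey relational grade.
def grey_relational_grades(rows, larger_better, zeta=0.5):
    """rows: one tuple of responses per experimental run."""
    cols = list(zip(*rows))
    norm = []
    for values, lb in zip(cols, larger_better):
        lo, hi = min(values), max(values)
        if lb:   # larger-the-better
            norm.append([(v - lo) / (hi - lo) for v in values])
        else:    # smaller-the-better
            norm.append([(hi - v) / (hi - lo) for v in values])
    grades = []
    for i in range(len(rows)):
        # deviation from the ideal (normalized value 1); delta_min=0, delta_max=1
        coeffs = [zeta / ((1.0 - col[i]) + zeta) for col in norm]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# illustrative responses per run: (thermal efficiency %, NOx ppm)
runs = [(30.1, 420.0), (33.5, 350.0), (31.8, 390.0)]
grades = grey_relational_grades(runs, larger_better=[True, False])
best_run = max(range(len(grades)), key=grades.__getitem__)
```

A run that is best on every response gets a grade of 1.0; in a full grey-Taguchi study the grades (or their S/N ratios) are then analysed per factor level, as in the abstract.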
Using grey relational grade and signal-to-noise ratio as a performance index, a combination of input parameters was prefigured so as to achieve optimum response characteristics. It was observed that a 20% premixed ratio of blend was most suitable for use in a PPCCI engine without significantly affecting the engine performance and emissions characteristics.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28989596','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28989596"><span>The Automated Root Exudate System (ARES): a <span class="hlt">method</span> to <span class="hlt">apply</span> solutes at regular intervals to soils in the field.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lopez-Sangil, Luis; George, Charles; Medina-Barcenas, Eduardo; Birkett, Ali J; Baxendale, Catherine; Bréchet, Laëtitia M; Estradera-Gumbau, Eduard; Sayer, Emma J</p> <p>2017-09-01</p> <p>Root exudation is a key component of nutrient and carbon dynamics in terrestrial ecosystems. Exudation rates vary widely by plant species and environmental conditions, but our understanding of how root exudates affect soil functioning is incomplete, in part because there are few viable <span class="hlt">methods</span> to manipulate root exudates in situ. To address this, we devised the Automated Root Exudate System (ARES), which simulates increased root exudation by <span class="hlt">applying</span> small amounts of labile solutes at regular intervals in the field. The ARES is a gravity-fed drip irrigation system comprising a reservoir bottle connected via a timer to a micro-hose irrigation grid covering c. 1 m²; 24 drip-tips are inserted into the soil to 4-cm depth to <span class="hlt">apply</span> solutions into the rooting zone. We installed two ARES subplots within existing litter removal and control plots in a temperate deciduous woodland.
We <span class="hlt">applied</span> either an artificial root exudate solution (RE) or a procedural control solution (CP) to each subplot for 1 min day⁻¹ during two growing seasons. To investigate the influence of root exudation on soil carbon dynamics, we measured soil respiration monthly and soil microbial biomass at the end of each growing season. The ARES <span class="hlt">applied</span> the solutions at a rate of c. 2 L m⁻² week⁻¹ without significantly increasing soil water content. The application of RE solution had a clear effect on soil carbon dynamics, but the response varied by litter treatment. Across two growing seasons, soil respiration was 25% higher in RE compared to CP subplots in the litter removal treatment, but not in the control plots. By contrast, we observed a significant increase in microbial biomass carbon (33%) and nitrogen (26%) in RE subplots in the control litter treatment. The ARES is an effective, low-cost <span class="hlt">method</span> to <span class="hlt">apply</span> experimental solutions directly into the rooting zone in the field. The installation of the systems entails minimal disturbance to the soil and little maintenance is required. Although</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26042998','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26042998"><span>Resampling <span class="hlt">method</span> for <span class="hlt">applying</span> density-dependent habitat selection theory to wildlife surveys.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel</p> <p>2015-01-01</p> <p>Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal.
Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling <span class="hlt">method</span> that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The <span class="hlt">method</span> consists in randomly placing blocks over the survey area and dividing those blocks in two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is done 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We <span class="hlt">applied</span> this <span class="hlt">method</span> to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust <span class="hlt">method</span> that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The <span class="hlt">method</span> is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. 
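The block-resampling step this record describes can be sketched with synthetic data. The survey extent, block size, and animal locations below are invented for illustration; the real method repeats the placement 100 times and then relates the paired abundances to habitat differences between the sub-blocks.

```python
# Sketch: random block placement with paired sub-block abundance counts,
# in the spirit of the resampling step described above. Synthetic data only.
import random

def resample_pairs(locations, extent, block, n_blocks=100, seed=42):
    """Randomly place square blocks inside the survey area, split each into
    two adjacent equal sub-blocks, and count locations falling in each half."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_blocks):
        x0 = rng.uniform(0.0, extent - block)
        y0 = rng.uniform(0.0, extent - block)
        left = sum(1 for x, y in locations
                   if x0 <= x < x0 + block / 2 and y0 <= y < y0 + block)
        right = sum(1 for x, y in locations
                    if x0 + block / 2 <= x < x0 + block and y0 <= y < y0 + block)
        pairs.append((left, right))
    return pairs

# synthetic animal locations on a 100 x 100 survey area
rng = random.Random(0)
animals = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(500)]
pairs = resample_pairs(animals, extent=100.0, block=20.0)
```

Each `(left, right)` pair plays the role of the two adjacent "habitats" in the isodar regression, with habitat covariates computed per sub-block in the full method.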
We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PhPro..78..357I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PhPro..78..357I"><span>Additive Manufacturing in Production: A Study Case <span class="hlt">Applying</span> Technical Requirements</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ituarte, Iñigo Flores; Coatanea, Eric; Salmi, Mika; Tuomi, Jukka; Partanen, Jouni</p> <p></p> <p>Additive manufacturing (AM) is expanding manufacturing capabilities. However, the quality of AM-produced parts depends on a number of machine, geometry and process parameters. The variability of these parameters affects manufacturing drastically, and therefore standardized processes and harmonized methodologies need to be developed to characterize the technology for end-use applications and enable the technology for manufacturing. This research proposes a composite methodology integrating <span class="hlt">Taguchi</span> Design of Experiments, multi-objective optimization and statistical process control, to optimize the manufacturing process and fulfil multiple requirements imposed on an arbitrary geometry. The proposed methodology aims to characterize AM technology depending upon manufacturing process variables as well as to perform a comparative assessment of three AM technologies (Selective Laser Sintering, Laser Stereolithography and Polyjet). Results indicate that only one machine, laser-based Stereolithography, was able to fulfil macro- and micro-level geometrical requirements simultaneously, but its mechanical properties were not at the required level.
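Since several records in this set rely on Taguchi designs, a minimal sketch of the core computation may help: an L4(2³) orthogonal array, the larger-the-better signal-to-noise ratio, and the mean S/N per factor level used to pick the best setting. The repeated response values are illustrative, not data from any of the papers.

```python
# Sketch: Taguchi-style analysis with an L4(2^3) orthogonal array and the
# larger-the-better S/N ratio. Response data are invented for illustration.
import math

# L4 orthogonal array: 3 two-level factors in 4 runs (levels coded 1 and 2)
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]

def sn_larger_better(ys):
    """Larger-the-better S/N ratio: -10 * log10(mean(1 / y^2))."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# two repeated responses per run
responses = [[10.0, 10.0], [12.0, 13.0], [9.0, 9.5], [11.0, 11.5]]
sn = [sn_larger_better(ys) for ys in responses]

def mean_sn_by_level(factor):
    """Average S/N over the runs at each level of one factor."""
    return {level: sum(s for row, s in zip(L4, sn) if row[factor] == level) /
                   sum(1 for row in L4 if row[factor] == level)
            for level in (1, 2)}

# pick, for each factor, the level with the higher mean S/N
best_levels = [max(mean_sn_by_level(f), key=mean_sn_by_level(f).get)
               for f in range(3)]
```

The same skeleton extends to larger arrays (L9, L18, ...) and to the other S/N definitions (smaller-the-better, nominal-the-best).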
Future research will study a single AM system at a time to characterize AM machine technical capabilities and stimulate pre-normative initiatives of the technology for end-use applications.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1034525','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1034525"><span>Parallel High Order Accuracy <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Jan Hesthaven</p> <p>2012-02-06</p> <p>Final report for DOE Contract DE-FG02-98ER25346 entitled Parallel High Order Accuracy <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences. Principal Investigator Jan S. Hesthaven Division of <span class="hlt">Applied</span> Mathematics Brown University, Box F Providence, RI 02912 Jan.Hesthaven@Brown.edu February 6, 2012 Note: This grant was originally awarded to Professor David Gottlieb and the majority of the work envisioned reflects his original ideas. However, when Prof Gottlieb passed away in December 2008, Professor Hesthaven took over as PI to ensure proper mentoring of students and postdoctoral researchers already involved in the project. This unusual circumstance has naturally impacted the project and its timeline. However, as the report reflects, the planned work has been accomplished and some activities beyond the original scope have been pursued with success. Project overview and main results The effort in this project focuses on the development of high order accurate computational <span class="hlt">methods</span> for the solution of hyperbolic equations with application to problems with strong shocks.
While the <span class="hlt">methods</span> are general, emphasis is on applications to gas dynamics with strong shocks.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25676967','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25676967"><span>Valuing national effects of digital health investments: an <span class="hlt">applied</span> <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hagens, Simon; Zelmer, Jennifer; Frazer, Cassandra; Gheorghiu, Bobby; Leaver, Chad</p> <p>2015-01-01</p> <p>This paper describes an approach which has been <span class="hlt">applied</span> to value national outcomes of investments by federal, provincial and territorial governments, clinicians and healthcare organizations in digital health. Hypotheses are used to develop a model, which is revised and populated based upon the available evidence. Quantitative national estimates and qualitative findings are produced and validated through structured peer review processes.
This methodology has been <span class="hlt">applied</span> in four studies since 2008.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4401151','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4401151"><span>A <span class="hlt">Method</span> for Selecting Structure-switching Aptamers <span class="hlt">Applied</span> to a Colorimetric Gold Nanoparticle Assay</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Martin, Jennifer A.; Smith, Joshua E.; Warren, Mercedes; Chávez, Jorge L.; Hagen, Joshua A.; Kelley-Loughnane, Nancy</p> <p>2015-01-01</p> <p>Small molecules provide rich targets for biosensing applications due to their physiological implications as biomarkers of various aspects of human health and performance. Nucleic acid aptamers have been increasingly <span class="hlt">applied</span> as recognition elements on biosensor platforms, but selecting aptamers toward small molecule targets requires special design considerations. This work describes modification and critical steps of a <span class="hlt">method</span> designed to select structure-switching aptamers to small molecule targets. Binding sequences from a DNA library hybridized to complementary DNA capture probes on magnetic beads are separated from nonbinders via a target-induced change in conformation. This <span class="hlt">method</span> is advantageous because sequences binding the support matrix (beads) will not be further amplified, and it does not require immobilization of the target molecule. However, the melting temperature of the capture probe and library is kept at or slightly above RT, such that sequences that dehybridize based on thermodynamics will also be present in the supernatant solution.
This effectively limits the partitioning efficiency (ability to separate target binding sequences from nonbinders), and therefore many selection rounds will be required to remove background sequences. The reported <span class="hlt">method</span> differs from previous structure-switching aptamer selections due to implementation of negative selection steps, simplified enrichment monitoring, and extension of the length of the capture probe following selection enrichment to provide enhanced stringency. The selected structure-switching aptamers are advantageous in a gold nanoparticle assay platform that reports the presence of a target molecule by the conformational change of the aptamer. The gold nanoparticle assay was <span class="hlt">applied</span> because it provides a simple, rapid colorimetric readout that is beneficial in a clinical or deployed environment. Design and optimization considerations are presented for the assay as proof-of-principle work in buffer to</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140010884','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140010884"><span>Efficient Implementation of the Invariant Imbedding T-Matrix <span class="hlt">Method</span> and the Separation of Variables <span class="hlt">Method</span> <span class="hlt">Applied</span> to Large Nonspherical Inhomogeneous Particles</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bi, Lei; Yang, Ping; Kattawar, George W.; Mishchenko, Michael I.</p> <p>2012-01-01</p> <p>Three terms, ''Waterman's T-matrix <span class="hlt">method</span>'', ''extended boundary condition <span class="hlt">method</span> (EBCM)'', and ''null field <span class="hlt">method</span>'', have been interchangeable in the literature to indicate a <span class="hlt">method</span> based on surface integral equations to calculate the T-matrix. 
Unlike the previous <span class="hlt">method</span>, the invariant imbedding <span class="hlt">method</span> (IIM) calculates the T-matrix by the use of a volume integral equation. In addition, the standard separation of variables <span class="hlt">method</span> (SOV) can be <span class="hlt">applied</span> to compute the T-matrix of a sphere centered at the origin of the coordinate system and having a maximal radius such that the sphere remains inscribed within a nonspherical particle. This study explores the feasibility of a numerical combination of the IIM and the SOV, hereafter referred to as the IIM+SOV <span class="hlt">method</span>, for computing the single-scattering properties of nonspherical dielectric particles, which are, in general, inhomogeneous. The IIM+SOV <span class="hlt">method</span> is shown to be capable of solving light-scattering problems for large nonspherical particles where the standard EBCM fails to converge. The IIM+SOV <span class="hlt">method</span> is flexible and applicable to inhomogeneous particles and aggregated nonspherical particles (overlapped circumscribed spheres) representing a challenge to the standard superposition T-matrix <span class="hlt">method</span>. The IIM+SOV computational program, developed in this study, is validated against EBCM simulated spheroid and cylinder cases with excellent numerical agreement (up to four decimal places).
In addition, solutions for cylinders with large aspect ratios, inhomogeneous particles, and two-particle systems are compared with results from discrete dipole approximation (DDA) computations, and comparisons with the improved geometric-optics <span class="hlt">method</span> (IGOM) are found to be quite encouraging.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018EPJWC.17601025S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018EPJWC.17601025S"><span>How to <span class="hlt">apply</span> the optimal estimation <span class="hlt">method</span> to your lidar measurements for improved retrievals of temperature and composition</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.</p> <p>2018-04-01</p> <p>The optimal estimation <span class="hlt">method</span> (OEM) has a long history of use in passive remote sensing, but has only recently been <span class="hlt">applied</span> to active instruments like lidar. The OEM's advantage over traditional techniques includes obtaining a full systematic and random uncertainty budget plus the ability to work with the raw measurements without first <span class="hlt">applying</span> instrument corrections. 
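For a linear forward model, the OEM retrieval is the standard maximum a posteriori update. A toy sketch with assumed matrices (no real lidar forward model is implied) showing the retrieval and its averaging kernel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model y = K x + noise, with a Gaussian prior on x
# (K, Se, Sa are assumed toy matrices, not a real instrument model).
n_state, n_meas = 3, 5
K = rng.normal(size=(n_meas, n_state))   # Jacobian / weighting functions
x_true = np.array([1.0, -0.5, 2.0])
Se = 0.01 * np.eye(n_meas)               # measurement-error covariance
Sa = 4.0 * np.eye(n_state)               # prior covariance
xa = np.zeros(n_state)                   # a priori state

y = K @ x_true + rng.multivariate_normal(np.zeros(n_meas), Se)

# MAP/OEM update: xhat = xa + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K xa).
Se_inv = np.linalg.inv(Se)
S_post = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(Sa))  # posterior cov.
xhat = xa + S_post @ K.T @ Se_inv @ (y - K @ xa)

# Averaging kernel: A close to the identity means the retrieval is
# measurement-dominated rather than prior-dominated.
A = S_post @ K.T @ Se_inv @ K
```

The posterior covariance `S_post` and averaging kernel `A` are what give the OEM its full uncertainty budget, which is the advantage the abstract highlights over traditional correction-based analyses.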
In our meeting presentation we will show you how to use the OEM for temperature and composition retrievals for Rayleigh-scatter, Raman-scatter and DIAL lidars.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20180001205','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20180001205"><span>An Analysis of the Optimal Control Modification <span class="hlt">Method</span> <span class="hlt">Applied</span> to Flutter Suppression</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Drew, Michael; Nguyen, Nhan T.; Hashemi, Kelley E.; Ting, Eric; Chaparro, Daniel</p> <p>2017-01-01</p> <p>Unlike basic Model Reference Adaptive Control (MRAC), Optimal Control Modification (OCM) has been shown to be a promising MRAC modification with robustness and analytical properties not present in other adaptive control <span class="hlt">methods</span>. This paper presents an analysis of the OCM <span class="hlt">method</span>, and how the asymptotic property of OCM is useful for analyzing and tuning the controller. We begin with a Lyapunov stability proof of an OCM controller having two adaptive gain terms, then the less conservative and easily analyzed OCM asymptotic property is presented. Two numerical examples are used to show how this property can accurately predict steady-state stability and quantitative robustness in the presence of time delay, relative to linear plant perturbations, and under nominal Loop Transfer Recovery (LTR) tuning. The asymptotic property of the OCM controller is then used as an aid in tuning the controller <span class="hlt">applied</span> to a large-scale aeroservoelastic longitudinal aircraft model for flutter suppression.
Control with OCM adaptive augmentation is shown to improve performance over that of the nominal non-adaptive controller when significant disparities exist between the controller/observer model and the true plant model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000SPIE.3913..104R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000SPIE.3913..104R"><span>Laser scattering <span class="hlt">method</span> <span class="hlt">applied</span> to determine the concentration of alfa 1-antitrypsin</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Riquelme, Bibiana D.; Foresto, Patricia; Valverde, Juana R.; Rasia, Rodolfo J.</p> <p>2000-04-01</p> <p>An optical <span class="hlt">method</span> has been developed to find unknown (alpha)1-antitrypsin concentrations in human serum samples. This <span class="hlt">method</span> <span class="hlt">applies</span> light scattering properties exhibited by initially formed enzyme-inhibitor complexes and uses the curves of aggregation kinetics. It is independent of molecular hydrodynamics. Theoretical approaches showed that scattering properties of transient complexes obey the Rayleigh-Debye conditions. Experiments were performed on the Trypsin/(alpha)1-antitrypsin system. Measurements were performed in newborn, adult and pregnant sera containing (alpha)1-antitrypsin in the Trypsin excess region. The solution was excited by a He-Ne laser beam, so the particles formed during the reaction act as scattering centers for the incident light. The intensity of the scattered light at 90 degrees from the incident beam depends on the nature of those scattering centers. The rate of increase in scattered intensity depends on the variation in size and shape of the scatterers, being independent of their original size.
Peak values of the first derivative linearly correlate with the concentration of (alpha)1-antitrypsin originally present in the sample. Results are displayed 5 minutes after the initiation of the experimental process. Such speed is of great importance in immuno-biochemical determinations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1174694','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/1174694"><span><span class="hlt">Method</span> for <span class="hlt">applying</span> a photoresist layer to a substrate having a preexisting topology</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Morales, Alfredo M.; Gonzales, Marcela</p> <p>2004-01-20</p> <p>The present invention describes a <span class="hlt">method</span> for preventing a photoresist layer from delaminating, or peeling away, from the surface of a substrate that already contains an etched three-dimensional structure such as a hole or a trench. The process comprises establishing a saturated vapor phase of the solvent media used to formulate the photoresist layer, above the surface of the coated substrate, as the <span class="hlt">applied</span> photoresist is heated in order to "cure" or drive off the retained solvent constituent within the layer.
By controlling the rate and manner in which solvent is removed from the photoresist layer, the layer is stabilized and kept from differentially shrinking and peeling away from the substrate.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4501829','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4501829"><span>Estimating the Impacts of Local Policy Innovation: The Synthetic Control <span class="hlt">Method</span> <span class="hlt">Applied</span> to Tropical Deforestation</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander</p> <p>2015-01-01</p> <p>Quasi-experimental <span class="hlt">methods</span> increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These <span class="hlt">methods</span> generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative <span class="hlt">methods</span>, which rely on analysts’ selection of best case comparisons. The synthetic control <span class="hlt">method</span> (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then <span class="hlt">apply</span> it to one local initiative to limit deforestation in the Brazilian Amazon.
The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However, its stated purpose was to limit deforestation, and thus we <span class="hlt">apply</span> SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012).
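The synthetic control step described above reduces to a constrained least-squares problem: find nonnegative donor weights, summing to one, that best reproduce the treated unit's pre-intervention outcome path. A minimal sketch with invented donor data (not the Paragominas dataset):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical pre-intervention outcomes: 8 years x 4 donor municipalities.
Y_donor = rng.uniform(1.0, 5.0, size=(8, 4))
true_w = np.array([0.6, 0.0, 0.4, 0.0])
y_treated = Y_donor @ true_w          # treated unit tracks a mix of donors

def loss(w):
    """Pre-intervention misfit between treated unit and synthetic control."""
    r = y_treated - Y_donor @ w
    return r @ r

# SCM weights: nonnegative, sum to one, minimizing pre-period misfit.
n = Y_donor.shape[1]
res = minimize(loss, np.full(n, 1.0 / n),
               method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
w = res.x
```

After the intervention year, the treatment effect is estimated as the gap between the treated unit's observed outcome and `Y_donor @ w` extended into the post period.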
This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16130825','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16130825"><span>Neural net <span class="hlt">applied</span> to anthropological material: a <span class="hlt">methodical</span> study on the human nasal skeleton.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Prescher, Andreas; Meyers, Anne; Gerf von Keyserlingk, Diedrich</p> <p>2005-07-01</p> <p>A new information processing <span class="hlt">method</span>, an artificial neural net, was <span class="hlt">applied</span> to characterise the variability of anthropological features of the human nasal skeleton. The aim was to find different types of nasal skeletons. A neural net with 15*15 nodes was trained by 17 standard anthropological parameters taken from 184 skulls of the Aachen collection. The trained neural net delivers its classification in a two-dimensional map. Different types of noses were locally separated within the map. Rare and frequent types may be distinguished after one passage of the complete collection through the net. Statistical descriptive analysis, hierarchical cluster analysis, and discriminant analysis were <span class="hlt">applied</span> to the same data set. These parallel applications allowed comparison of the new approach to the more traditional ones.
In general the classification by the neural net is in correspondence with cluster analysis and discriminant analysis. However, it goes beyond these classifications because of the possibility of differentiating the types in multi-dimensional dependencies. Furthermore, places in the map are kept blank for intermediate forms, which may be theoretically expected, but were not included in the training set. In conclusion, the application of a neural network is a suitable <span class="hlt">method</span> for investigating large collections of biological material. The gained classification may be helpful in anatomy and anthropology as well as in forensic medicine. It may be used to characterise the peculiarity of a whole set as well as to find particular cases within the set.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED360941.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED360941.pdf"><span>Total Quality Management: Statistics and Graphics III - Experimental Design and <span class="hlt">Taguchi</span> <span class="hlt">Methods</span>. AIR 1993 Annual Forum Paper.</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Schwabe, Robert A.</p> <p></p> <p>Interest in Total Quality Management (TQM) at institutions of higher education has been stressed in recent years as an important area of activity for institutional researchers. Two previous AIR Forum papers have presented some of the statistical and graphical <span class="hlt">methods</span> used for TQM. 
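The two-dimensional classification map described in the record above is a self-organizing map. A minimal sketch with synthetic stand-in data (a 5x5 grid and 4 features for brevity, rather than the paper's 15*15 net and 17 skull parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in data: 100 samples x 4 features (the study used 184 skulls x 17
# anthropological parameters; these values are synthetic).
X = rng.normal(size=(100, 4))

grid = 5                                   # 5x5 map (the paper used 15*15)
W = rng.normal(size=(grid * grid, 4))      # one weight vector per map node
coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)

for t in range(200):
    lr = 0.5 * (1.0 - t / 200)             # decaying learning rate
    sigma = 0.5 + 2.0 * (1.0 - t / 200)    # decaying neighbourhood radius
    x = X[rng.integers(len(X))]
    bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))   # best-matching unit
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance to BMU
    h = np.exp(-d2 / (2.0 * sigma ** 2))               # neighbourhood weights
    W += lr * h[:, None] * (x - W)         # pull neighbourhood toward sample

# Each sample lands on its best-matching node: a 2-D classification map in
# which similar samples cluster at nearby nodes, and unused nodes stay blank.
labels = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
```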
This paper, the third in the series, first discusses some of the…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5087340','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5087340"><span>An Online Gravity Modeling <span class="hlt">Method</span> <span class="hlt">Applied</span> for High Precision Free-INS</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao</p> <p>2016-01-01</p> <p>For the real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear characteristic of the regional disturbing potential. Firstly, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the external computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of slightly lower precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed <span class="hlt">method</span> outperforms traditional gravity models <span class="hlt">applied</span> for high precision free-INS.
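The polynomial surrogate described above amounts to a linear least-squares fit of six coefficients to gridded DOV values. A hedged sketch with a synthetic quadratic surface standing in for SHM-derived DOVs (the coefficients and grid are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for DOVs computed from a spherical harmonic model on a
# dense grid: a known quadratic surface plus small noise (illustrative only).
lat, lon = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21),
                       indexing="ij")
x, y = lat.ravel(), lon.ravel()
true_coef = np.array([2.0, 0.3, -0.5, 0.1, 0.05, -0.02])
design = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
dov = design @ true_coef + rng.normal(scale=1e-4, size=x.shape)

# Fit the 2-D second-order polynomial (6 coefficients) by least squares.
coef, *_ = np.linalg.lstsq(design, dov, rcond=None)

def dov_model(px, py):
    """Cheap onboard evaluation: a polynomial lookup instead of the SHM."""
    return coef @ np.array([1.0, px, py, px * py, px ** 2, py ** 2])
```

Onboard, only `coef` and the region bounds need to be refreshed as the vehicle moves, which is what keeps the storage and computational cost low relative to the full spherical harmonic model.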
PMID:27669261</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26901203','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26901203"><span>A Rapid Coordinate Transformation <span class="hlt">Method</span> <span class="hlt">Applied</span> in Industrial Robot Calibration Based on Characteristic Line Coincidence.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia</p> <p>2016-02-18</p> <p>Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely <span class="hlt">applied</span> <span class="hlt">methods</span> of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation <span class="hlt">method</span> is proposed, not based on the equation solution but based on the geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed <span class="hlt">method</span> with other <span class="hlt">methods</span>. The results show that the proposed <span class="hlt">method</span> has the same high accuracy, but the operation is more convenient and flexible.
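The characteristic-line construction itself is not reproduced here; for contrast, the conventional point-cloud equation-solving approach that the abstract compares against can be sketched with the standard SVD (Kabsch) solution for a rigid transform between two frames, using invented point data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Ground-truth rigid transform: rotation about z plus a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.2, 2.0])

P = rng.uniform(-1.0, 1.0, size=(10, 3))   # points measured in frame A
Q = P @ R_true.T + t_true                   # same points seen in frame B

# Kabsch/SVD solution: centre both clouds, SVD the cross-covariance, and
# guard against reflections so R is a proper rotation.
Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
U, _, Vt = np.linalg.svd(Pc.T @ Qc)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U))])
R = Vt.T @ D @ U.T
t = Q.mean(axis=0) - R @ P.mean(axis=0)
```

With noiseless, well-spread points this recovers the transform exactly; the ill-conditioning the abstract mentions arises when the measured points are nearly collinear or coplanar.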
A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed <span class="hlt">method</span> and robot calibration.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19750063123&hterms=test+hypothesis&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Dtest%2Bhypothesis','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19750063123&hterms=test+hypothesis&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Dtest%2Bhypothesis"><span>Two self-test <span class="hlt">methods</span> <span class="hlt">applied</span> to an inertial system problem. [estimating gyroscope and accelerometer bias]</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Willsky, A. S.; Deyst, J. J.; Crawford, B. S.</p> <p>1975-01-01</p> <p>The paper describes two self-test procedures <span class="hlt">applied</span> to the problem of estimating the biases in accelerometers and gyroscopes on an inertial platform. The first technique is the weighted sum-squared residual (WSSR) test, with which accelerometer bias jumps are easily isolated, but gyro bias jumps are difficult to isolate. The WSSR <span class="hlt">method</span> does not take full advantage of the knowledge of system dynamics. The other technique is a multiple hypothesis <span class="hlt">method</span> developed by Buxbaum and Haddad (1969). It has the advantage of directly providing jump isolation information, but suffers from computational problems.
It might be possible to use the WSSR to detect state jumps and then switch to the BH system for jump isolation and estimate compensation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/861605','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/861605"><span>Balancing a U-Shaped Assembly Line by <span class="hlt">Applying</span> Nested Partitions <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Bhagwat, Nikhil V.</p> <p>2005-01-01</p> <p>In this study, we <span class="hlt">applied</span> the Nested Partitions <span class="hlt">method</span> to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions <span class="hlt">method</span> provided near-optimal solutions (optimal in some cases). Moreover, the execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer to execute. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions <span class="hlt">method</span> in the present study was the number of stations in the random solutions (samples) generated. The total idle time for the samples can be used as another performance index. The ULINO <span class="hlt">method</span> is known to have used a combination of bounds to come up with good solutions.
This approach of combining different performance indices can be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be of significance to industries wherein the cost associated with the creation of a new station is not uniform. For such industries, the results obtained by using the present approach will not be of much value. Labor costs, task incompletion costs or a combination of those can be effectively used as alternate objective functions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007AGUSM.S33B..01V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007AGUSM.S33B..01V"><span>Microtremors study <span class="hlt">applying</span> the SPAC <span class="hlt">method</span> in Colima state, Mexico.</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vázquez Rosas, R.; Aguirre González, J.; Mijares Arellano, H.</p> <p>2007-05-01</p> <p>One of the main parts of seismic risk studies is to determine the site effect. This can be estimated by means of microtremor measurements. From the H/V spectral ratio (Nakamura, 1989), the predominant period of the site can be estimated. However, the predominant period by itself cannot represent the site effect over a wide range of frequencies and does not provide information about the stratigraphy. The SPAC <span class="hlt">method</span> (Spatial Auto-Correlation <span class="hlt">Method</span>, Aki 1957), on the other hand, is useful to estimate the stratigraphy of the site.
It is based on the simultaneous recording of microtremors at several stations deployed in an instrumental array. Through computation of the spatial autocorrelation coefficient, the Rayleigh wave dispersion curve can be derived. Finally, the stratigraphic model (thickness, S- and P-wave velocity, and density of each layer) is estimated by fitting the theoretical dispersion curve to the observed one. The theoretical dispersion curve is initially computed using a proposed model. That model is modified several times until the theoretical curve fits the observations. This <span class="hlt">method</span> requires a minimum of three stations at which microtremors are recorded simultaneously. We <span class="hlt">applied</span> the SPAC <span class="hlt">method</span> to six sites in Colima state, Mexico: Santa Barbara, Cerro de Ortega, Tecoman, Manzanillo, and two sites in Colima city. In total, 16 arrays were deployed using equilateral triangles with apertures ranging from a minimum of 5 m to a maximum of 60 m. For recording microtremors we used short-period (5 s) velocity-type vertical sensors connected to a K2 (Kinemetrics) acquisition system. We could estimate the velocities of the most superficial layers, reaching different depths at each site. For the Santa Bárbara site the exploration depth was about 30 m, for Tecoman 12 m, for Manzanillo 35 m, for Cerro de Ortega 68 m, and the deepest exploration was obtained in Colima city with a depth of around 73 m.
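The core of the SPAC inversion is the relation rho(f, r) = J0(2*pi*f*r / c(f)) between the azimuthally averaged autocorrelation coefficient and the phase velocity c. A minimal sketch with hypothetical numbers (noise-free, grid search only; real use must handle the non-uniqueness of the Bessel function at higher frequencies):

```python
import numpy as np

def bessel_j0(x, n=4001):
    """J0 via its integral form (1/pi) * integral_0^pi cos(x sin t) dt."""
    t = np.linspace(0.0, np.pi, n)
    return float(np.mean(np.cos(x * np.sin(t))))

def spac_phase_velocity(rho_obs, freq, r, c_grid):
    """Grid-search the phase velocity c whose J0(2*pi*freq*r/c) best
    matches the observed autocorrelation coefficient rho_obs."""
    misfit = [abs(bessel_j0(2.0 * np.pi * freq * r / c) - rho_obs) for c in c_grid]
    return float(c_grid[int(np.argmin(misfit))])

r, freq, c_true = 30.0, 5.0, 400.0        # station spacing (m), Hz, m/s
rho_obs = bessel_j0(2.0 * np.pi * freq * r / c_true)   # synthetic observation
c_grid = np.arange(100.0, 801.0, 10.0)
c_est = spac_phase_velocity(rho_obs, freq, r, c_grid)
print(c_est)  # 400.0
```

Repeating this per frequency yields the dispersion curve that is then fit by a layered model.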
The S wave velocities</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JOUC...16..137S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JOUC...16..137S"><span>Performance comparison of two efficient genomic selection <span class="hlt">methods</span> (gsbay & MixP) <span class="hlt">applied</span> in aquacultural organisms</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin</p> <p>2017-02-01</p> <p>Genomic selection is more and more popular in animal and plant breeding industries all around the world, as it can be <span class="hlt">applied</span> early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two different genomic selection tools, MixP and gsbay, were <span class="hlt">applied</span> to genomic evaluation of simulated data and Zhikong scallop ( Chlamys farreri) field data. The results were compared with the genomic best linear unbiased prediction (GBLUP) <span class="hlt">method</span>, which has been widely <span class="hlt">applied</span>. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be <span class="hlt">applied</span> for the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of the GEBV obtained ranged from 0.20 to 0.78 with MixP, from 0.21 to 0.67 with gsbay, and from 0.21 to 0.61 with GBLUP. Estimations made by MixP and gsbay were expected to be more reliable than those made by GBLUP. Predictions made by gsbay were more robust, while computation with MixP was much faster, especially in dealing with large-scale data.
These results suggest that the algorithms implemented in both MixP and gsbay are feasible for carrying out genomic selection in scallop breeding, and that more genotype data will be necessary to produce genomic estimated breeding values with higher accuracy for the industry.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JVGR..327..622B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JVGR..327..622B"><span>Performance of the 'material Failure Forecast <span class="hlt">Method</span>' in real-time situations: A Bayesian approach <span class="hlt">applied</span> on effusive and explosive eruptions</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.; Arámbula-Mendoza, R.; Budi-Santoso, A.</p> <p>2016-11-01</p> <p>Most attempts at deterministic eruption forecasting are based on the material Failure Forecast <span class="hlt">Method</span> (FFM). This <span class="hlt">method</span> assumes that a precursory observable, such as the rate of seismic activity, can be described by a simple power law which presents a singularity at a time close to the eruption onset. Until now, this <span class="hlt">method</span> has been <span class="hlt">applied</span> only in a small number of cases, generally for forecasts in hindsight. In this paper, a rigorous Bayesian formulation of the FFM designed for real-time applications is <span class="hlt">applied</span>. Using an automatic recognition system, seismo-volcanic events are detected and classified according to their physical mechanism, and time series of probability distributions of the rates of events are calculated. At each time of observation, a Bayesian inversion provides estimates of the exponent of the power law and of the time of eruption, together with their probability density functions.
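In its simplest deterministic form (power-law exponent 2, noise-free synthetic rates; the paper's actual procedure is a Bayesian inversion yielding full probability densities), the FFM reduces to a linear fit of the inverse event rate. All numbers below are hypothetical:

```python
import numpy as np

# FFM with exponent alpha = 2: event rate ~ k / (tf - t), so the inverse
# rate decays linearly and crosses zero at the failure (eruption) time tf.
tf_true, k = 100.0, 50.0                 # hypothetical units (days, events/day)
t = np.arange(0.0, 90.0, 1.0)
rate = k / (tf_true - t)                 # noise-free synthetic precursor rate
inv_rate = 1.0 / rate

slope, intercept = np.polyfit(t, inv_rate, 1)   # linear fit of 1/rate vs t
tf_est = -intercept / slope                     # zero crossing = forecast time
print(round(tf_est, 6))  # 100.0
```

With noisy real data the zero crossing carries large uncertainty, which is why the authors attach probability density functions to the forecast.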
Two criteria are defined in order to evaluate the quality and reliability of the forecasts. Our automated procedure has allowed the analysis of long, continuous seismic time series: 13 years from Volcán de Colima, Mexico, 10 years from Piton de la Fournaise, Reunion Island, France, and several months from Merapi volcano, Java, Indonesia. The new forecasting approach has been <span class="hlt">applied</span> to 64 pre-eruptive sequences which present various types of dominant seismic activity (volcano-tectonic or long-period events) and patterns of seismicity with different levels of complexity. This has allowed us to test the FFM assumptions, to determine under which conditions the <span class="hlt">method</span> can be <span class="hlt">applied</span>, and to quantify the success rate of the forecasts. 62% of the precursory sequences analysed are suitable for the application of FFM, and half of the total number of eruptions are successfully forecast in hindsight. In real time, the <span class="hlt">method</span> allows for the successful forecast of 36% of all the eruptions considered. Nevertheless, real-time forecasts are successful for 83% of the cases that fulfil the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNG52A..05G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNG52A..05G"><span>Global Sensitivity <span class="hlt">Applied</span> to Dynamic Combined Finite Discrete Element <span class="hlt">Methods</span> for Fracture Simulation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Godinez, H. C.; Rougier, E.; Osthus, D.; Srinivasan, G.</p> <p>2017-12-01</p> <p>Fracture propagation plays a key role in a number of applications of interest to the scientific community.
From dynamic fracture processes such as spall and fragmentation in metals to the detection of gas flow in static fractures in rock and the subsurface, the dynamics of fracture propagation is important to various engineering and scientific disciplines. In this work we implement a global sensitivity analysis test for the Hybrid Optimization Software Suite (HOSS), a multi-physics software tool based on the combined finite-discrete element <span class="hlt">method</span> that is used to describe material deformation and failure (i.e., fracture and fragmentation) under a number of user-prescribed boundary conditions. We explore the sensitivity of HOSS to various model parameters that influence how fractures are propagated through a material of interest. These parameters control the softening curve on which the model relies to determine fractures within each element of the mesh, as well as other internal parameters which influence fracture behavior. The sensitivity <span class="hlt">method</span> we <span class="hlt">apply</span> is the Fourier Amplitude Sensitivity Test (FAST), a global sensitivity <span class="hlt">method</span> used to explore how each parameter influences the modeled fracture and to determine the key model parameters that have the most impact on the model. We present several sensitivity experiments for different combinations of model parameters and compare against experimental data for verification.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA468859','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA468859"><span>Techniques for Cyber Attack Attribution</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2003-10-01</p> <p>Asaka, Midori, Shunji Okazawa, Atsushi <span class="hlt">Taguchi</span>, and Shigeki Goto. June 1999. “A <span class="hlt">Method</span> of Tracing Intruders by Use of Mobile Agents”, INET’99.
http...Tsuchiya, Takefumi Onabuta, Shunji Okazawa, and Shigeki Goto. November 1999. “Local Attack Detection and Intrusion Route Tracing”, IEICE Transaction on</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19055058','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19055058"><span>[The diagnostic <span class="hlt">methods</span> <span class="hlt">applied</span> in mycology].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kurnatowska, Alicja; Kurnatowski, Piotr</p> <p>2008-01-01</p> <p>Systemic fungal infections are recognized with increasing frequency and constitute a primary cause of morbidity and mortality, especially in immunocompromised patients. Early diagnosis improves prognosis but remains a problem: on the one hand, there is a lack of sensitive tests to aid in the diagnosis of systemic mycoses; on the other, patients present only nonspecific signs and symptoms, delaying diagnosis. The diagnosis depends upon a combination of clinical observation and laboratory investigation. Successful laboratory diagnosis of fungal infection depends in major part on the collection of appropriate clinical specimens for investigation and on the selection of appropriate microbiological test procedures.
These topics (collection of specimens, direct techniques, staining <span class="hlt">methods</span>, cultures on different media and non-culture-based <span class="hlt">methods</span>) are presented in the article.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/32681','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/32681"><span><span class="hlt">Applying</span> scrum <span class="hlt">methods</span> to ITS projects.</span></a></p> <p><a target="_blank" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>2017-08-01</p> <p>The introduction of new technology generally brings new challenges and new <span class="hlt">methods</span> to help with deployments. Agile methodologies have been introduced in the information technology industry to potentially speed up development. The Federal Highway Admi...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29197411','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29197411"><span>Systematic reviews of health economic evaluations: a protocol for a systematic review of characteristics and <span class="hlt">methods</span> <span class="hlt">applied</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Luhnen, Miriam; Prediger, Barbara; Neugebauer, Edmund A M; Mathes, Tim</p> <p>2017-12-02</p> <p>The number of systematic reviews of economic evaluations is steadily increasing. This is probably related to the continuing pressure on health budgets worldwide, which makes efficient resource allocation increasingly crucial. In particular in recent years, the introduction of several high-cost interventions has presented enormous challenges regarding universal accessibility and the sustainability of health care systems.
An increasing number of health authorities, among others, feel the need to analyze economic evidence. Economic evidence might effectively be generated by means of systematic reviews. Nevertheless, no standard <span class="hlt">methods</span> seem to exist for their preparation so far. The objective of this study was to analyze the <span class="hlt">methods</span> <span class="hlt">applied</span> in systematic reviews of health economic evaluations (SR-HE), with a focus on the identification of common challenges. The planned study is a systematic review of the characteristics and <span class="hlt">methods</span> actually <span class="hlt">applied</span> in SR-HE. We will combine validated search filters developed for the retrieval of economic evaluations and systematic reviews to identify relevant studies in MEDLINE (via Ovid, 2015-present). To be eligible for inclusion, studies have to conduct a systematic review of full economic evaluations. Articles focusing exclusively on methodological aspects and secondary publications of health technology assessment (HTA) reports will be excluded. Two reviewers will independently assess titles and abstracts, and then full texts, of studies for eligibility. Methodological features will be extracted in a standardized, previously piloted data extraction form. Data will be summarized with descriptive statistical measures and systematically analyzed, focusing on differences/similarities and methodological weaknesses. The systematic review will provide a detailed overview of the characteristics of SR-HE and the <span class="hlt">applied</span> <span class="hlt">methods</span>. Differences and methodological shortcomings will be detected and their implications will be discussed.
The findings of our</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ResPh...8..114R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ResPh...8..114R"><span>Exact traveling wave solutions of fractional order Boussinesq-like equations by <span class="hlt">applying</span> Exp-function <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rahmatullah; Ellahi, Rahmat; Mohyud-Din, Syed Tauseef; Khan, Umar</p> <p>2018-03-01</p> <p>We have computed new exact traveling wave solutions, including complex solutions, of fractional order Boussinesq-like equations occurring in physical sciences and engineering, by <span class="hlt">applying</span> the Exp-function <span class="hlt">method</span>. The <span class="hlt">method</span> is blended with the fractional complex transformation and the modified Riemann-Liouville fractional-order operator. Our obtained solutions are verified by substituting them back into their corresponding equations. To the best of our knowledge, no other technique has been reported to cope with these fractional-order nonlinear problems with such a variety of exact solutions. Graphically, the fractional-order solution curves are shown to be strongly related to each other and, most importantly, tend to converge on their integer-order solution curve.
Our solutions comprise high frequencies and very small wave-response amplitudes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EGUGA..1512286J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EGUGA..1512286J"><span><span class="hlt">Applying</span> the seismic interferometry <span class="hlt">method</span> to vertical seismic profile data using tunnel excavation noise as source</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jurado, Maria Jose; Teixido, Teresa; Martin, Elena; Segarra, Miguel; Segura, Carlos</p> <p>2013-04-01</p> <p>In the framework of research conducted to develop efficient strategies for investigating rock properties and fluids ahead of tunnel excavations, the seismic interferometry <span class="hlt">method</span> was <span class="hlt">applied</span> to analyze the data acquired in boreholes instrumented with geophone strings. The results obtained confirmed that seismic interferometry provided an improved resolution of petrophysical properties for identifying heterogeneities and geological structures ahead of the excavation. These features are beyond the resolution of other conventional geophysical <span class="hlt">methods</span> but can cause severe problems in the excavation of tunnels. Geophone strings were used to record different types of seismic noise generated at the tunnel head during excavation with a tunnelling machine and also during the placement of the rings covering the tunnel excavation. In this study we show how tunnel construction activities have been characterized as a source of seismic signal and used in our research as the seismic source for generating a 3-D reflection seismic survey. The data were recorded in a vertical, water-filled borehole with a borehole seismic string at a distance of 60 m from the tunnel trace.
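The virtual-source principle behind this kind of processing can be sketched with synthetic data: cross-correlating two receivers' recordings of the same (unknown) tunnelling noise collapses the source signature and leaves the inter-receiver travel time. The delays and sampling below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
src = rng.normal(size=n)          # unknown tunnelling noise acts as the source

d1, d2 = 40, 100                  # arrival delays (samples) at two geophones
rec1 = np.roll(src, d1)           # recording at geophone 1
rec2 = np.roll(src, d2)           # recording at geophone 2 (farther along string)

# Cross-correlating the two receivers removes the unknown source signature:
# the lag of the correlation peak equals the inter-receiver travel time.
xcorr = np.correlate(rec2, rec1, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)
print(lag)  # 60
```

Assembling such correlations for every receiver pair yields the "virtual shot records" that are then processed like an ordinary reflection survey.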
A reference pilot signal was obtained from seismograms acquired close to the tunnel face excavation, in order to obtain the best signal-to-noise ratio for use in the interferometry processing (Poletto et al., 2010). The seismic interferometry <span class="hlt">method</span> (Claerbout 1968) was successfully <span class="hlt">applied</span> to image the subsurface geological structure using the seismic wave field generated by tunneling (tunnelling machine and construction activities) recorded with geophone strings. This technique was <span class="hlt">applied</span> by simulating virtual shot records, one for each receiver in the borehole, from the transmitted seismic events, and processing the data as a reflection seismic survey. The pseudo-reflective wave field was obtained by cross-correlation of the transmitted wave data. We <span class="hlt">applied</span> the relationship between the transmission</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25712814','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25712814"><span><span class="hlt">Applying</span> under-sampling techniques and cost-sensitive learning <span class="hlt">methods</span> on risk assessment of breast cancer.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hsu, Jia-Lien; Hung, Ping-Cheng; Lin, Hung-Yen; Hsieh, Chung-Ho</p> <p>2015-04-01</p> <p>Breast cancer is one of the most common causes of cancer mortality. Early detection through mammography screening could significantly reduce mortality from breast cancer. However, most screening <span class="hlt">methods</span> consume large amounts of resources. We propose a computational model, based solely on personal health information, for breast cancer risk assessment. Our model can serve as a pre-screening program in low-cost settings.
In our study, the data set, consisting of 3976 records, was collected from Taipei City Hospital from 2008.1.1 to 2008.12.31. Based on the dataset, we first <span class="hlt">apply</span> under-sampling techniques and a dimension-reduction <span class="hlt">method</span> to preprocess the data. Then, we construct various kinds of classifiers (including basic classifiers, ensemble <span class="hlt">methods</span>, and cost-sensitive <span class="hlt">methods</span>) to predict the risk. The cost-sensitive <span class="hlt">method</span> with a random forest classifier is able to achieve a recall (or sensitivity) of 100%. At a recall of 100%, the precision (positive predictive value, PPV) and specificity of the cost-sensitive <span class="hlt">method</span> with the random forest classifier were 2.9% and 14.87%, respectively. In our study, we build a breast cancer risk assessment model by using data mining techniques. Our model has the potential to serve as an assisting tool in breast cancer screening.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..310a2103B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..310a2103B"><span>Study of Effects on Mechanical Properties of PLA Filament which is blended with Recycled PLA Materials</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Babagowda; Kadadevara Math, R. S.; Goutham, R.; Srinivas Prasad, K. R.</p> <p>2018-02-01</p> <p>Fused deposition modeling is a rapidly growing additive manufacturing technology due to its ability to build functional parts having complex geometry. The mechanical properties of the built part depend on several process parameters and on the build material of the printed specimen. The aim of this study is to characterize and optimize parameters such as layer thickness and the proportion of recycled PLA material blended into the PLA build material. Tensile and flexural (bending) tests are carried out to determine the mechanical response characteristics of the printed specimen.
The <span class="hlt">Taguchi</span> <span class="hlt">method</span> is used to design the experiments, and the <span class="hlt">Taguchi</span> S/N ratio is used to identify the parameter settings that give good results for the respective response characteristics; the effect of each parameter is investigated using analysis of variance (ANOVA).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=animal+AND+experimentation&pg=2&id=EJ808110','ERIC'); return false;" href="https://eric.ed.gov/?q=animal+AND+experimentation&pg=2&id=EJ808110"><span>Bootstrapping <span class="hlt">Methods</span> <span class="hlt">Applied</span> for Simulating Laboratory Works</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Prodan, Augustin; Campean, Remus</p> <p>2005-01-01</p> <p>Purpose: The aim of this work is to implement bootstrapping <span class="hlt">methods</span> into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical <span class="hlt">methods</span> to infer the truth from sample…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..226a2162S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..226a2162S"><span>A Review of Metal Injection Molding- Process, Optimization, Defects and Microwave Sintering on WC-Co Cemented Carbide</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shahbudin, S. N. A.; Othman, M. H.; Amin, Sri Yulis M.; Ibrahim, M. H.
I.</p> <p>2017-08-01</p> <p>This article reviews the optimization of the metal injection molding process and of microwave sintering for tungsten cemented carbide produced by metal injection molding. In this study, the process parameters for metal injection molding were optimized using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. <span class="hlt">Taguchi</span> <span class="hlt">methods</span> have been used widely in engineering analysis to optimize performance characteristics through the setting of design parameters. Microwave sintering is a process increasingly used in powder metallurgy in place of the conventional <span class="hlt">method</span>. It has typical characteristics such as an accelerated heating rate, a shortened processing cycle, high energy efficiency, and a fine and homogeneous microstructure with enhanced mechanical performance, which are beneficial for preparing nanostructured cemented carbides in metal injection molding. Besides that, as an advanced and promising technology, metal injection molding has proven that it can produce cemented carbides. Cemented tungsten carbide hard metal has been used widely in various applications due to its desirable combination of mechanical, physical, and chemical properties.
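To make the orthogonal-array and S/N-ratio machinery mentioned in these Taguchi studies concrete, here is a toy sketch with an L4(2^3) array, hypothetical response values, and the larger-is-better criterion (not data from any of the reviewed papers):

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs cover 3 two-level factors.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio when larger responses are better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Two replicated responses per run (hypothetical strength values, MPa).
responses = [[40, 42], [46, 48], [38, 39], [44, 45]]
sn = np.array([sn_larger_is_better(y) for y in responses])

# Main effect of each factor: mean S/N at level 1 minus mean S/N at level 0;
# the level with the higher mean S/N is preferred.
best_level = {}
for f in range(3):
    effect = sn[L4[:, f] == 1].mean() - sn[L4[:, f] == 0].mean()
    best_level[f] = 1 if effect > 0 else 0
    print(f"factor {f}: best level {best_level[f]}")
```

ANOVA on the same S/N values would then apportion how much of the variation each factor explains.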
Moreover, common defects in metal injection molding and the application of microwave sintering itself are also discussed in this paper.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5367458','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5367458"><span>An evaluation of exact matching and propensity score <span class="hlt">methods</span> as <span class="hlt">applied</span> in a comparative effectiveness study of inhaled corticosteroids in asthma</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Burden, Anne; Roche, Nicolas; Miglio, Cristiana; Hillyer, Elizabeth V; Postma, Dirkje S; Herings, Ron MC; Overbeek, Jetty A; Khalid, Javaria Mona; van Eickels, Daniela; Price, David B</p> <p>2017-01-01</p> <p>Background Cohort matching and regression modeling are used in observational studies to control for confounding factors when estimating treatment effects. Our objective was to evaluate exact matching and propensity score <span class="hlt">methods</span> by <span class="hlt">applying</span> them in a 1-year pre–post historical database study to investigate asthma-related outcomes by treatment. <span class="hlt">Methods</span> We drew on longitudinal medical record data in the PHARMO database for asthma patients prescribed the treatments to be compared (ciclesonide and fine-particle inhaled corticosteroid [ICS]). The propensity score <span class="hlt">methods</span> we evaluated were propensity score matching (PSM) using two different algorithms, inverse probability of treatment weighting (IPTW), covariate adjustment using the propensity score, and propensity score stratification. We defined balance, using standardized differences, as differences of <10% between cohorts.
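The balance diagnostic used here, the standardized difference, is easy to state in code. The cohorts below are simulated stand-ins (not PHARMO data), and the "matching" is a crude nearest-value match on the covariate itself rather than on a fitted propensity score:

```python
import numpy as np

def standardized_difference(x_treat, x_ctrl):
    """Standardized difference of a covariate between two cohorts;
    balance is conventionally declared when |d| < 0.10 (i.e. 10%)."""
    num = np.mean(x_treat) - np.mean(x_ctrl)
    den = np.sqrt((np.var(x_treat, ddof=1) + np.var(x_ctrl, ddof=1)) / 2.0)
    return num / den

rng = np.random.default_rng(3)
age_treat = rng.normal(52.0, 10.0, 300)   # hypothetical treated cohort
age_ctrl = rng.normal(48.0, 10.0, 600)    # hypothetical control cohort

d_before = standardized_difference(age_treat, age_ctrl)

# Crude nearest-value matching on the covariate itself (with replacement).
sorted_ctrl = np.sort(age_ctrl)
idx = np.searchsorted(sorted_ctrl, age_treat).clip(0, len(sorted_ctrl) - 1)
matched_ctrl = sorted_ctrl[idx]
d_after = standardized_difference(age_treat, matched_ctrl)

print(abs(d_before) > 0.10, abs(d_after) < 0.10)  # True True
```

In a real PSM or IPTW analysis the same diagnostic is computed for every confounder, before and after matching or weighting.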
Results Of 4064 eligible patients, 1382 (34%) were prescribed ciclesonide and 2682 (66%) fine-particle ICS. The IPTW and propensity score-based <span class="hlt">methods</span> retained more patients (96%–100%) than exact matching (90%); exact matching selected less severe patients. Standardized differences were >10% for four variables in the exact-matched dataset and <10% for both PSM algorithms and the weighted pseudo-dataset used in the IPTW <span class="hlt">method</span>. With all <span class="hlt">methods</span>, ciclesonide was associated with better 1-year asthma-related outcomes, at one-third the prescribed dose, than fine-particle ICS; results varied slightly by <span class="hlt">method</span>, but direction and statistical significance remained the same. Conclusion We found that each <span class="hlt">method</span> has its particular strengths, and we recommend at least two <span class="hlt">methods</span> be <span class="hlt">applied</span> for each matched cohort study to evaluate the robustness of the findings. Balance diagnostics should be <span class="hlt">applied</span> with all <span class="hlt">methods</span> to check the balance of confounders between treatment cohorts. If exact matching is used, the calculation of a propensity score could be useful to identify variables that require</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EnOp...48.1474T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EnOp...48.1474T"><span>Simultaneous planning of the project scheduling and material procurement problem under the presence of multiple suppliers</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tabrizi, Babak H.; Ghaderi, Seyed Farid</p> <p>2016-09-01</p> <p>Simultaneous planning of project scheduling and material procurement can reduce project execution costs. 
Hence, the issue has been addressed here by a mixed-integer programming model. The proposed model facilitates the procurement decisions by accounting for a number of suppliers, each offering a distinctive discount formula, from which to purchase the required materials. It is aimed at developing schedules with the best net present value regarding the obtained benefit and costs of the project execution. A genetic algorithm is <span class="hlt">applied</span> to deal with the problem, in addition to a modified version equipped with a variable neighbourhood search. The underlying factors of the solution <span class="hlt">methods</span> are calibrated by the <span class="hlt">Taguchi</span> <span class="hlt">method</span> to obtain robust solutions. The performance of the aforementioned <span class="hlt">methods</span> is compared for different problem sizes, in which the utilized local search proved efficient. Finally, a sensitivity analysis is carried out to check the effect of inflation on the objective function value.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015IJEEP..16...77L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015IJEEP..16...77L"><span>Risky Group Decision-Making <span class="hlt">Method</span> for Distribution Grid Planning</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Cunbin; Yuan, Jiahang; Qi, Zhiqiang</p> <p>2015-12-01</p> <p>With the rapid growth of electricity use and the increase in renewable energy, more and more research is paying attention to distribution grid planning. To address the drawbacks of existing research, this paper proposes a new risky group decision-making <span class="hlt">method</span> for distribution grid planning. Firstly, a mixed index system with qualitative and quantitative indices is built. 
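The Taguchi calibration of algorithm parameters mentioned in the scheduling study above starts from an orthogonal array, which covers level combinations with far fewer runs than a full factorial design. A minimal sketch with a hard-coded L4(2^3) array and invented genetic-algorithm settings (the factor names and level values are illustrative assumptions, not the paper's):

```python
# L4 (2^3) orthogonal array: 4 runs cover 3 two-level factors such that
# every pair of columns contains each level combination exactly once.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Hypothetical GA parameters to calibrate (low level, high level).
levels = {
    "population": (50, 100),
    "crossover": (0.7, 0.9),
    "mutation": (0.01, 0.1),
}

def runs(array, levels):
    """Translate each orthogonal-array row into a concrete parameter setting."""
    names = list(levels)
    return [{n: levels[n][row[i]] for i, n in enumerate(names)} for row in array]

for setting in runs(L4, levels):
    print(setting)  # each of the 4 trial settings to benchmark the GA with
```

Each printed setting would be benchmarked on sample instances, and the best-performing levels kept; a full factorial would need 8 runs instead of 4.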
Considering the fuzziness of linguistic evaluation, the cloud model is chosen to realize the "quantitative to qualitative" transformation, and interval-number decision matrices are constructed according to the "3En" principle. An m-dimensional interval-number decision vector is regarded as a super-cuboid in the m-dimensional attribute space, and a two-level orthogonal experiment is used to arrange points uniformly and dispersedly. The number of points is determined by the test numbers of the two-level orthogonal arrays, and these points compose the distribution point set that stands for a decision-making project. To eliminate the influence of correlation among indices, the Mahalanobis distance is used to calculate the distance from each solution to the others, so that the dynamic solutions are viewed as the reference. Secondly, because the decision-maker's attitude can affect the results, this paper defines a prospect value function based on the SNR from the Mahalanobis-<span class="hlt">Taguchi</span> system and obtains the comprehensive prospect value of each program as well as their ranking. 
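The role of the Mahalanobis distance above, discounting correlated indices when measuring how far one solution lies from the others, can be sketched as follows. The data are illustrative, not from the paper.

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance from point x to the mean of `data`,
    using the inverse covariance matrix to discount correlated indices."""
    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    inv_cov = np.linalg.inv(cov)
    d = x - mu
    return float(np.sqrt(d @ inv_cov @ d))

# Two strongly correlated indices: plain Euclidean distance would
# effectively double-count them; Mahalanobis distance does not.
rng = np.random.default_rng(0)
base = rng.normal(size=(50, 1))
data = np.hstack([base, base * 0.9 + rng.normal(scale=0.1, size=(50, 1))])
print(mahalanobis(np.array([1.0, 0.9]), data))
```

The distance from the data mean to itself is zero, and points along the correlated direction score closer than equally distant points off it.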
Finally, the validity and reliability of this <span class="hlt">method</span> are illustrated by examples, which show that it is more valuable and superior to the alternatives.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/22611397-projection-reduction-method-applied-deriving-non-linear-optical-conductivity-electron-impurity-system','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22611397-projection-reduction-method-applied-deriving-non-linear-optical-conductivity-electron-impurity-system"><span>Projection-reduction <span class="hlt">method</span> <span class="hlt">applied</span> to deriving non-linear optical conductivity for an electron-impurity system</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Kang, Nam Lyong; Lee, Sang-Seok; Graduate School of Engineering, Tottori University, 4-101 Koyama-Minami, Tottori</p> <p>2013-07-15</p> <p>The projection-reduction <span class="hlt">method</span> introduced by the present authors is known to give a validated theory for optical transitions in the systems of electrons interacting with phonons. In this work, using this <span class="hlt">method</span>, we derive the linear and first order nonlinear optical conductivities for an electron-impurity system and examine whether the expressions faithfully satisfy the quantum mechanical philosophy, in the same way as for the electron-phonon systems. The result shows that the Fermi distribution function for electrons, energy denominators, and electron-impurity coupling factors are contained properly in an organized manner along with absorption of photons for each electron transition process in the final expressions. Furthermore, the result is shown to be represented properly by schematic diagrams, as in the formulation of electron-phonon interaction. 
Therefore, in conclusion, we claim that this <span class="hlt">method</span> can be <span class="hlt">applied</span> in modeling optical transitions of electrons interacting with both impurities and phonons.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28356782','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28356782"><span>An evaluation of exact matching and propensity score <span class="hlt">methods</span> as <span class="hlt">applied</span> in a comparative effectiveness study of inhaled corticosteroids in asthma.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Burden, Anne; Roche, Nicolas; Miglio, Cristiana; Hillyer, Elizabeth V; Postma, Dirkje S; Herings, Ron Mc; Overbeek, Jetty A; Khalid, Javaria Mona; van Eickels, Daniela; Price, David B</p> <p>2017-01-01</p> <p>Cohort matching and regression modeling are used in observational studies to control for confounding factors when estimating treatment effects. Our objective was to evaluate exact matching and propensity score <span class="hlt">methods</span> by <span class="hlt">applying</span> them in a 1-year pre-post historical database study to investigate asthma-related outcomes by treatment. We drew on longitudinal medical record data in the PHARMO database for asthma patients prescribed the treatments to be compared (ciclesonide and fine-particle inhaled corticosteroid [ICS]). Propensity score <span class="hlt">methods</span> that we evaluated were propensity score matching (PSM) using two different algorithms, the inverse probability of treatment weighting (IPTW), covariate adjustment using the propensity score, and propensity score stratification. We defined balance, using standardized differences, as differences of <10% between cohorts. Of 4064 eligible patients, 1382 (34%) were prescribed ciclesonide and 2682 (66%) fine-particle ICS. 
The IPTW and propensity score-based <span class="hlt">methods</span> retained more patients (96%-100%) than exact matching (90%); exact matching selected less severe patients. Standardized differences were >10% for four variables in the exact-matched dataset and <10% for both PSM algorithms and the weighted pseudo-dataset used in the IPTW <span class="hlt">method</span>. With all <span class="hlt">methods</span>, ciclesonide was associated with better 1-year asthma-related outcomes, at one-third the prescribed dose, than fine-particle ICS; results varied slightly by <span class="hlt">method</span>, but direction and statistical significance remained the same. We found that each <span class="hlt">method</span> has its particular strengths, and we recommend at least two <span class="hlt">methods</span> be <span class="hlt">applied</span> for each matched cohort study to evaluate the robustness of the findings. Balance diagnostics should be <span class="hlt">applied</span> with all <span class="hlt">methods</span> to check the balance of confounders between treatment cohorts. If exact matching is used, the calculation of a propensity score could be useful to identify variables that require balancing, thereby informing the choice of</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26973437','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26973437"><span><span class="hlt">Methods</span> of <span class="hlt">applying</span> the 1994 case definition of chronic fatigue syndrome - impact on classification and observed illness characteristics.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Unger, E R; Lin, J-M S; Tian, H; Gurbaxani, B M; Boneva, R S; Jones, J F</p> <p>2016-01-01</p> <p>Multiple case definitions are in use to identify chronic fatigue syndrome (CFS). 
Even when using the same definition, <span class="hlt">methods</span> used to <span class="hlt">apply</span> definitional criteria may affect results. The Centers for Disease Control and Prevention (CDC) conducted two population-based studies estimating CFS prevalence using the 1994 case definition; one relied on direct questions for criteria of fatigue, functional impairment and symptoms (1997 Wichita; <span class="hlt">Method</span> 1), and the other used subscale score thresholds of standardized questionnaires for criteria (2004 Georgia; <span class="hlt">Method</span> 2). The 2004 CFS prevalence estimate was higher than in previous reports, raising the question of whether the change in how the definition was operationalized affected the estimate and the observed illness characteristics. The follow-up of the Georgia cohort allowed direct comparison of both <span class="hlt">methods</span> of <span class="hlt">applying</span> the 1994 case definition. Of 1961 participants (53 % of eligible) who completed the detailed telephone interview, 919 (47 %) were eligible for and 751 (81 %) underwent clinical evaluation including medical/psychiatric evaluations. Data from the 499 individuals with complete data and without exclusionary conditions were available for this analysis. A total of 86 participants were classified as CFS by one or both <span class="hlt">methods</span>; 44 cases were identified by both <span class="hlt">methods</span>, 15 only by <span class="hlt">Method</span> 1, and 27 only by <span class="hlt">Method</span> 2 (Kappa 0.63; 95 % confidence interval [CI]: 0.53, 0.73; concordance 91.59 %). The CFS group identified by both <span class="hlt">methods</span> was more fatigued, had worse functioning, and had more symptoms than those identified by only one <span class="hlt">method</span>. Moderate to severe depression was noted in only one individual who was classified as CFS by both <span class="hlt">methods</span>. 
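The agreement statistics quoted above come from a 2x2 classification table. A minimal kappa calculation using the counts given in the abstract (44 classified by both methods, 15 by Method 1 only, 27 by Method 2 only, leaving 413 of the 499 classified by neither) reproduces the reported value:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = classified CFS by both methods, b = by Method 1 only,
    c = by Method 2 only, d = by neither."""
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Counts from the abstract; 499 - 86 = 413 classified by neither method.
print(round(cohens_kappa(44, 15, 27, 413), 2))  # → 0.63
```

The observed agreement (44 + 413)/499 is likewise the reported 91.59% concordance.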
When comparing the CFS groups identified by only one <span class="hlt">method</span>, those only identified by <span class="hlt">Method</span> 2 were either similar to or more severely affected in fatigue, function, and symptoms than those only identified by <span class="hlt">Method</span> 1. The two <span class="hlt">methods</span> demonstrated substantial concordance. While <span class="hlt">Method</span> 2</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/11709815','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/11709815"><span>Establishing an index arbitrage model by <span class="hlt">applying</span> neural networks <span class="hlt">method</span>--a case study of Nikkei 225 index.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, A P; Chianglin, C Y; Chung, H P</p> <p>2001-10-01</p> <p>This paper <span class="hlt">applies</span> the neural network <span class="hlt">method</span> to establish an index arbitrage model and compares the arbitrage performance to that of the traditional cost-of-carry arbitrage model. From the empirical results of the Nikkei 225 stock index market, the following conclusions can be stated: (1) When the basis enlarges over a period of time, more profit may be obtained from the trend. (2) If the neural network is <span class="hlt">applied</span> within the index arbitrage model, roughly twice the return of the traditional arbitrage model can be obtained. (3) If the T_basis shows a volatile trend, the neural network arbitrage model will ignore the peak. 
Although arbitrageurs may lose the chance to profit, they can reduce market impact risk.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013CoPhC.184..469F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013CoPhC.184..469F"><span>Performance analysis of the FDTD <span class="hlt">method</span> <span class="hlt">applied</span> to holographic volume gratings: Multi-core CPU versus GPU computing</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.</p> <p>2013-03-01</p> <p>The finite-difference time-domain <span class="hlt">method</span> (FDTD) allows electromagnetic field distribution analysis as a function of time and space. The <span class="hlt">method</span> is <span class="hlt">applied</span> to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and processing time. In this work, we propose a specific implementation of the FDTD <span class="hlt">method</span> including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix <span class="hlt">method</span> (MM) <span class="hlt">applied</span> to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus highly tuned multi-core CPU as a function of simulation size. 
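The FDTD time stepping that such CPU and GPU implementations accelerate is a pair of staggered update loops over space and time. A minimal one-dimensional free-space sketch (normalized units, a soft Gaussian source; not the authors' optimized HVG code):

```python
import numpy as np

# 1-D free-space FDTD (Yee scheme) in normalized units with dt = dx / c,
# so both curl updates have unit coefficients.
N, steps = 200, 150
ez = np.zeros(N)        # electric field samples
hy = np.zeros(N - 1)    # magnetic field, staggered half a cell

for t in range(steps):
    hy += ez[1:] - ez[:-1]                        # update H from the curl of E
    ez[1:-1] += hy[1:] - hy[:-1]                  # update E from the curl of H
    ez[100] += np.exp(-((t - 30) ** 2) / 100.0)   # soft Gaussian source

print(float(np.abs(ez).max()) > 0.0)  # → True: the pulse has propagated
```

The 2-D/3-D production versions vectorize exactly these array updates, which is why SSE, OpenMP, and CUDA map onto them so directly.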
In particular, the optimized CPU implementation takes advantage of the arithmetic and data transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code and also of multi-threading by means of OpenMP directives. Good agreement between the results obtained using both the FDTD and MM <span class="hlt">methods</span> is observed, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wider range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23920682','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23920682"><span><span class="hlt">Method</span> to integrate clinical guidelines into the electronic health record (EHR) by <span class="hlt">applying</span> the archetypes approach.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro</p> <p>2013-01-01</p> <p>Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines as decision support systems (DSS) attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a <span class="hlt">method</span> for integrating clinical guidelines into the EHR. The study first developed a way to identify the data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. 
The proposed <span class="hlt">method</span> was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the <span class="hlt">method</span> are: data and rules identification; archetypes elaboration; rules definition and inclusion in the inference engine; and DSS-EHR integration and validation. The main feature of the proposed <span class="hlt">method</span> is that it is generic and can be <span class="hlt">applied</span> to any type of guideline.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JMoSt1074...85R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JMoSt1074...85R"><span>Synthesis procedure optimization and characterization of europium (III) tungstate nanoparticles</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rahimi-Nasrabadi, Mehdi; Pourmortazavi, Seied Mahdi; Ganjali, Mohammad Reza; Reza Banan, Ali; Ahmadi, Farhad</p> <p>2014-09-01</p> <p><span class="hlt">Taguchi</span> robust design as a statistical <span class="hlt">method</span> was <span class="hlt">applied</span> for the optimization of process parameters in order to achieve a tunable, facile and fast synthesis of europium (III) tungstate nanoparticles. Europium (III) tungstate nanoparticles were synthesized by a chemical precipitation reaction involving direct addition of europium ion aqueous solution to the tungstate reagent dissolved in an aqueous medium. Effects of some synthesis procedure variables on the particle size of europium (III) tungstate nanoparticles were studied. Analysis of variance showed the importance of controlling tungstate concentration, cation feeding flow rate and temperature during preparation of europium (III) tungstate nanoparticles by the proposed chemical precipitation reaction. 
Finally, europium (III) tungstate nanoparticles were synthesized at the optimum conditions of the proposed <span class="hlt">method</span>. The morphology and chemical composition of the prepared nano-material were characterized by means of X-ray diffraction, scanning electron microscopy, transmission electron microscopy, FT-IR spectroscopy and fluorescence.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29271774','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29271774"><span>Fast projection/backprojection and incremental <span class="hlt">methods</span> <span class="hlt">applied</span> to synchrotron light tomographic reconstruction.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>de Lima, Camila; Salomão Helou, Elias</p> <p>2018-01-01</p> <p>Iterative <span class="hlt">methods</span> for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N<sup>3</sup>) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations in order to achieve acceptable images, thereby making the use of these techniques impractical for high-resolution images. Techniques have been developed in the literature in order to reduce the computational cost of the (back)projection operator to O(N<sup>2</sup> log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. 
The present paper introduces an incremental algorithm with a cost of O(N<sup>2</sup> log N) flops per iteration and <span class="hlt">applies</span> it to the reconstruction of very large tomographic images obtained from synchrotron-light-illuminated data.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..302a2067K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..302a2067K"><span>Optimization of Selective Laser Melting by Evaluation <span class="hlt">Method</span> of Multiple Quality Characteristics</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.</p> <p>2018-01-01</p> <p>This article describes the adoption of the <span class="hlt">Taguchi</span> <span class="hlt">method</span> in the selective laser melting of a combustion-chamber sector, using numerical and physical experiments to achieve minimum thermal deformation. The aim was to produce a quality part with a minimum number of numerical experiments. For the study, the following optimization parameters (independent factors) were chosen: the laser beam power and velocity; two factors for compensating the effect of the residual thermal stresses: the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with a factor variation at three levels (L9). As quality criteria, the values of distortions for 9 zones of the combustion chamber and the maximum strength of the material of the chamber were chosen. Since the quality parameters are multidirectional, a grey relational analysis was used to solve the optimization problem for multiple quality parameters. 
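Grey relational analysis, as used above, folds several quality characteristics into a single grade per experimental run so that runs can be ranked. A minimal sketch treating every characteristic as larger-the-better, with invented response values rather than the paper's measurements:

```python
import numpy as np

def grey_relational_grade(responses, zeta=0.5):
    """Grey relational grade for each experiment (rows) over several
    quality characteristics (columns), all treated as larger-the-better.
    zeta is the customary distinguishing coefficient."""
    r = np.asarray(responses, dtype=float)
    # Normalize each characteristic to [0, 1] (larger-the-better).
    norm = (r - r.min(axis=0)) / (r.max(axis=0) - r.min(axis=0))
    delta = 1.0 - norm  # deviation from the ideal (all-ones) sequence
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)  # average coefficient over characteristics

# Three illustrative trials with two quality characteristics each.
grades = grey_relational_grade([[0.80, 120.0], [0.90, 150.0], [0.85, 140.0]])
print(int(grades.argmax()))  # → 1, the trial with the highest grade
```

Smaller-the-better responses (such as distortion) are handled the same way after flipping the normalization.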
As a result, the combustion chamber segments of the gas turbine engine were manufactured according to the parameters obtained.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009cmc..book..249H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009cmc..book..249H"><span><span class="hlt">Applied</span> Counterfactual Reasoning</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hendrickson, Noel</p> <p></p> <p>This chapter addresses two goals: the development of a structured <span class="hlt">method</span> to aid intelligence and security analysts in assessing counterfactuals, and the formation of a structured <span class="hlt">method</span> to educate (future) analysts in counterfactual reasoning. In order to pursue these objectives, I offer here an analysis of the purposes, problems, parts, and principles of <span class="hlt">applied</span> counterfactual reasoning. In particular, the ways in which antecedent scenarios are selected and the ways in which scenarios are developed constitute essential (albeit often neglected) aspects of counterfactual reasoning. Both must be addressed to <span class="hlt">apply</span> counterfactual reasoning effectively. Naturally, further issues remain, but these should serve as a useful point of departure. 
They are the beginning of a path to more rigorous and relevant counterfactual reasoning in intelligence analysis and counterterrorism.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20140005421&hterms=disadvantages&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Ddisadvantages','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20140005421&hterms=disadvantages&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Ddisadvantages"><span>A Numerical Combination of Extended Boundary Condition <span class="hlt">Method</span> and Invariant Imbedding <span class="hlt">Method</span> <span class="hlt">Applied</span> to Light Scattering by Large Spheroids and Cylinders</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bi, Lei; Yang, Ping; Kattawar, George W.; Mishchenko, Michael I.</p> <p>2013-01-01</p> <p>The extended boundary condition <span class="hlt">method</span> (EBCM) and invariant imbedding <span class="hlt">method</span> (IIM) are two fundamentally different T-matrix <span class="hlt">methods</span> for the solution of light scattering by nonspherical particles. The standard EBCM is very efficient but encounters a loss of precision when the particle size is large, the maximum size being sensitive to the particle aspect ratio. The IIM can be <span class="hlt">applied</span> to particles in a relatively large size parameter range but requires extensive computational time due to the number of spherical layers in the particle volume discretization. A numerical combination of the EBCM and the IIM (hereafter, the EBCM+IIM) is proposed to overcome the aforementioned disadvantages of each <span class="hlt">method</span>. 
Even though the EBCM can fail to obtain the T-matrix of a considered particle, it is valuable for decreasing the computational domain (i.e., the number of spherical layers) of the IIM by providing the initial T-matrix associated with an iterative procedure in the IIM. The EBCM+IIM is demonstrated to be more efficient than the IIM in obtaining the optical properties of large size parameter particles beyond the convergence limit of the EBCM. The numerical performance of the EBCM+IIM is illustrated through representative calculations in spheroidal and cylindrical particle cases.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ApJ...857..103K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ApJ...857..103K"><span><span class="hlt">Applying</span> the Weighted Horizontal Magnetic Gradient <span class="hlt">Method</span> to a Simulated Flaring Active Region</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Korsós, M. B.; Chatterjee, P.; Erdélyi, R.</p> <p>2018-04-01</p> <p>Here, we test the weighted horizontal magnetic gradient (WG<sub>M</sub>) as a flare precursor, introduced by Korsós et al., by <span class="hlt">applying</span> it to a magnetohydrodynamic (MHD) simulation of solar-like flares. The preflare evolution of the WG<sub>M</sub> and the behavior of the distance parameter between the area-weighted barycenters of opposite-polarity sunspots at various heights are investigated in the simulated δ-type sunspot. Four flares emanated from this sunspot. We found the optimum heights above the photosphere where the flare precursors of the WG<sub>M</sub> <span class="hlt">method</span> are identifiable prior to each flare. These optimum heights agree reasonably well with the heights of the occurrence of flares identified from the analysis of their thermal and ohmic heating signatures in the simulation. 
We also estimated the expected time of the flare onsets from the duration of the approaching–receding motion of the barycenters of opposite polarities before each single flare. The estimated onset time and the actual time of occurrence of each flare are in good agreement at the corresponding optimum heights. This numerical experiment further supports the use of flare precursors based on the WG<sub>M</sub> <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AcAau.146....7X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AcAau.146....7X"><span>Investigation of titanium dioxide/ tungstic acid -based photocatalyst for human excrement wastewater treatment</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xu, Fei; Wang, Can; Xiao, Kemeng; Gao, Yufeng; Zhou, Tong; Xu, Heng</p> <p>2018-05-01</p> <p>An activated carbon (AC) coated with tungstic acid (WO3)/titanium dioxide (TiO2) nanocomposites photocatalytic material (ACWT) combined with a Three-phase Fluidized Bed (TFB) was investigated for human excrement wastewater treatment. Under ultraviolet (UV) and fluorescent lamp illumination, the ACWT showed good performance in chemical oxygen demand (COD) and total nitrogen (TN) removal but was inefficient at ammonia nitrogen (NH3-N) removal. Optimized by the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, COD and TN removal efficiencies reached 88.39% and 55.07%, respectively. Among all the parameters, the dosage of ACWT had the largest contribution to the process. Bacterial community changes after treatment demonstrated that this photocatalytic system had a great sterilization effect on wastewater. 
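Taguchi optimization of removal efficiencies, as above, conventionally ranks parameter settings by a larger-the-better signal-to-noise ratio. A minimal sketch with illustrative replicate values, not the study's data:

```python
import math

def sn_larger_the_better(replicates):
    """Taguchi larger-the-better S/N ratio in decibels:
    S/N = -10 * log10(mean(1 / y_i^2))."""
    n = len(replicates)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in replicates) / n)

# Hypothetical COD-removal replicates (%) for two trial settings.
trial_a = [86.0, 88.0, 87.5]
trial_b = [70.0, 75.0, 72.0]
print(sn_larger_the_better(trial_a) > sn_larger_the_better(trial_b))  # → True
```

The setting with the higher S/N ratio for each factor level is kept, which is how a contribution ranking like "ACWT dosage contributed most" is derived from the level-wise S/N averages.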
These results confirmed that ACWT could be <span class="hlt">applied</span> to human excrement wastewater treatment.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16354035','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16354035"><span>Moment analysis <span class="hlt">method</span> as <span class="hlt">applied</span> to the <sup>2</sup>S → <sup>2</sup>P transition in cryogenic alkali metal/rare gas matrices.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Terrill Vosbein, Heidi A; Boatz, Jerry A; Kenney, John W</p> <p>2005-12-22</p> <p>The moment analysis <span class="hlt">method</span> (MA) has been tested for the case of <sup>2</sup>S → <sup>2</sup>P ([core]ns<sup>1</sup> → [core]np<sup>1</sup>) transitions of alkali metal atoms (M) doped into cryogenic rare gas (Rg) matrices using theoretically validated simulations. Theoretical/computational M/Rg system models are constructed with precisely defined parameters that closely mimic known M/Rg systems. Monte Carlo (MC) techniques are then employed to generate simulated absorption and magnetic circular dichroism (MCD) spectra of the <sup>2</sup>S → <sup>2</sup>P M/Rg transition to which the MA <span class="hlt">method</span> can be <span class="hlt">applied</span> with the goal of seeing how effective the MA <span class="hlt">method</span> is in re-extracting the M/Rg system parameters from these known simulated systems. The MA <span class="hlt">method</span> is summarized in general, and an assessment is made of the use of the MA <span class="hlt">method</span> in the rigid shift approximation typically used to evaluate M/Rg systems. The MC-MCD simulation technique is summarized, and validating evidence is presented. The simulation results and the assumptions used in <span class="hlt">applying</span> MA to M/Rg systems are evaluated. 
The simulation results on Na/Ar demonstrate that the MA <span class="hlt">method</span> does successfully re-extract the 2P spin-orbit coupling constant and Landé g-factor values initially used to build the simulations. However, assigning physical significance to the cubic and noncubic Jahn-Teller (JT) vibrational mode parameters in cryogenic M/Rg systems is not supported.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18754384','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18754384"><span>Determining optimal operation parameters for reducing PCDD/F emissions (I-TEQ values) from the iron ore sintering process by using the <span class="hlt">Taguchi</span> experimental design.</span></a></p> 
<p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Yu-Cheng; Tsai, Perng-Jy; Mou, Jin-Luh</p> <p>2008-07-15</p> <p>This study is the first to use the <span class="hlt">Taguchi</span> experimental design to identify the optimal operating condition for reducing polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) formation during the iron ore sintering process. Four operating parameters, including the water content (Wc; range = 6.0-7.0 wt %), suction pressure (Ps; range = 1000-1400 mmH2O), bed height (Hb; range = 500-600 mm), and type of hearth layer (including sinter, hematite, and limonite), were selected for conducting experiments in a pilot-scale sinter pot to simulate various sintering operating conditions of a real-scale sinter plant. We found that the resultant optimal combination (Wc = 6.5 wt %, Hb = 500 mm, Ps = 1000 mmH2O, and hearth layer = hematite) could decrease the emission factor of total PCDD/Fs (total EF(PCDD/Fs)) by up to 62.8% by reference to the current operating condition of the real-scale sinter plant (Wc = 6.5 wt %, Hb = 550 mm, Ps = 1200 mmH2O, and hearth layer = sinter). Through the ANOVA analysis, we found that Wc was the most significant parameter in determining total EF(PCDD/Fs) (accounting for 74.7% of the total contribution of the four selected parameters). The resultant optimal combination could also slightly enhance both sinter productivity and sinter strength (30.3 t/m2/day and 72.4%, respectively) by reference to those obtained from the reference operating condition (29.9 t/m2/day and 72.2%, respectively). 
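The percent-contribution ANOVA described above (Wc accounting for 74.7% of the total) follows the standard Taguchi analysis for a saturated orthogonal array. A minimal sketch for a generic L9(3^4) design; the design matrix is the standard L9, but the response values are hypothetical, not the pilot-pot measurements:

```python
# Taguchi percent-contribution analysis for an L9(3^4) design (hypothetical data).
# Factor sum of squares from level means: SS = n_per_level * sum((mean_l - grand)^2).
import numpy as np

# Standard L9 orthogonal array: 9 runs, 4 factors at 3 levels (0, 1, 2).
levels = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
# Hypothetical smaller-is-better response (e.g. an emission factor per run).
y = np.array([1.9, 1.5, 1.2, 1.6, 1.1, 1.4, 1.0, 1.3, 0.9])

grand = y.mean()
ss = []
for f in range(4):
    level_means = [y[levels[:, f] == l].mean() for l in range(3)]
    ss.append(3 * sum((m - grand) ** 2 for m in level_means))

# With four factors in L9 there are zero error degrees of freedom (a known
# limitation of saturated designs), so contributions are relative to total SS.
contrib = 100 * np.array(ss) / sum(ss)
for name, c in zip("ABCD", contrib):
    print(f"factor {name}: {c:.1f}% contribution")
```

The factor with the largest contribution plays the role Wc plays in the sinter study.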
The above results further ensure the applicability of the obtained optimal combination for real-scale sinter production without interfering with its sinter productivity and sinter strength.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AcSpA.177...86I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AcSpA.177...86I"><span><span class="hlt">Methods</span> and methodology for FTIR spectral correction of channel spectra and uncertainty, <span class="hlt">applied</span> to ferrocene</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Islam, M. T.; Trevorah, R. M.; Appadoo, D. R. T.; Best, S. P.; Chantler, C. T.</p> <p>2017-04-01</p> <p>We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new <span class="hlt">method</span> for removal of channel spectra interference for high quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χr2 analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. 
<span class="hlt">Methods</span> for addressing these in the presence of a modest signal and how to quantify the quality of the data irrespective of preprocessing for subsequent hypothesis testing are <span class="hlt">applied</span> to the FTIR spectra of Ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d10) collected at the THz/Far-IR beam-line of the Australian Synchrotron at operating temperatures of 7 K through 353 K.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015amos.confE..32F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015amos.confE..32F"><span>Statistical Track-Before-Detect <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Faint Optical Observations of Resident Space Objects</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fujimoto, K.; Yanagisawa, T.; Uetsuhara, M.</p> <p></p> <p>Automated detection and tracking of faint objects in optical, or bearing-only, sensor imagery is a topic of immense interest in space surveillance. Robust <span class="hlt">methods</span> in this realm will lead to better space situational awareness (SSA) while reducing the cost of sensors and optics. They are especially relevant in the search for high area-to-mass ratio (HAMR) objects, as their apparent brightness can change significantly over time. A track-before-detect (TBD) approach has been shown to be suitable for faint, low signal-to-noise ratio (SNR) images of resident space objects (RSOs). TBD does not rely upon the extraction of feature points within the image based on some thresholding criteria, but rather directly takes as input the intensity information from the image file. 
Not only is all of the available information from the image used, but TBD also avoids the computational intractability of the conventional feature-based line detection (i.e., "string of pearls") approach to track detection for low SNR data. Implementation of TBD rooted in finite set statistics (FISST) theory has been proposed recently by Vo et al. Compared to other TBD <span class="hlt">methods</span> <span class="hlt">applied</span> so far to SSA, such as the stacking <span class="hlt">method</span> or multi-pass multi-period denoising, the FISST approach is statistically rigorous and has been shown to be more computationally efficient, thus paving the path toward on-line processing. In this paper, we intend to <span class="hlt">apply</span> a multi-Bernoulli filter to actual CCD imagery of RSOs. The multi-Bernoulli filter can explicitly account for the birth and death of multiple targets in a measurement arc. TBD is achieved via a sequential Monte Carlo implementation. Preliminary results with simulated single-target data indicate that a Bernoulli filter can successfully track and detect objects with measurement SNR as low as 2.4. 
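The Bernoulli filter mentioned above carries, at its core, a single target-existence probability driven by birth/survival priors and a per-frame measurement likelihood ratio. The scalar recursion below is only a sketch of that piece (the full multi-Bernoulli SMC implementation also propagates a spatial particle density); the birth and survival probabilities and the likelihood ratios are hypothetical:

```python
# Existence-probability recursion of a Bernoulli filter (scalar sketch).
# Hypothetical birth and survival probabilities for a frame-to-frame cadence.
p_birth, p_survive = 0.05, 0.99

def predict(r):
    # A new target may be born, or the existing one may survive.
    return p_birth * (1.0 - r) + p_survive * r

def update(r, lr):
    # lr: likelihood ratio of the frame under "target present" vs "noise only".
    return (r * lr) / (1.0 - r + r * lr)

r = 0.01  # prior existence probability
for lr in [0.8, 1.2, 3.0, 5.0, 4.0]:  # hypothetical per-frame likelihood ratios
    r = update(predict(r), lr)
print(f"existence probability after 5 frames: {r:.3f}")
```

A run of frames whose likelihood ratios exceed 1 drives the existence probability toward 1, which is how a faint track is "detected before being tracked".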
Although the advent of fast-cadence scientific CMOS sensors has made the automation of faint object detection a realistic goal, it is nonetheless a difficult goal, as measurements</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S13C0677B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S13C0677B"><span>Non-Invasive Seismic <span class="hlt">Methods</span> for Earthquake Site Classification <span class="hlt">Applied</span> to Ontario Bridge Sites</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bilson Darko, A.; Molnar, S.; Sadrekarimi, A.</p> <p>2017-12-01</p> <p>How a site responds to earthquake shaking and its corresponding damage is largely influenced by the underlying ground conditions through which it propagates. The effects of site conditions on propagating seismic waves can be predicted from measurements of the shear wave velocity (Vs) of the soil layer(s) and the impedance ratio between bedrock and soil. Currently the seismic design of new buildings and bridges (2015 Canadian building and bridge codes) requires determination of the time-averaged shear-wave velocity of the upper 30 metres (Vs30) of a given site. In this study, two in situ Vs profiling <span class="hlt">methods</span>; Multichannel Analysis of Surface Waves (MASW) and Ambient Vibration Array (AVA) <span class="hlt">methods</span> are used to determine Vs30 at chosen bridge sites in Ontario, Canada. Both active-source (MASW) and passive-source (AVA) surface wave <span class="hlt">methods</span> are used at each bridge site to obtain Rayleigh-wave phase velocities over a wide frequency bandwidth. The dispersion curve is jointly inverted with each site's amplification function (microtremor horizontal-to-vertical spectral ratio) to obtain shear-wave velocity profile(s). 
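The code-mandated Vs30 quantity used above is a travel-time (harmonic) average over the top 30 m of the shear-wave velocity profile: Vs30 = 30 / Σ(h_i / Vs_i). A minimal sketch with a hypothetical three-layer profile:

```python
# Time-averaged shear-wave velocity of the upper 30 m (Vs30), as defined in
# building codes: Vs30 = 30 / sum(h_i / Vs_i) over the layers down to 30 m.
def vs30(layers):
    """layers: list of (thickness_m, vs_m_per_s), ordered from the surface."""
    depth, travel_time = 0.0, 0.0
    for h, vs in layers:
        h = min(h, 30.0 - depth)      # clip the layer that crosses 30 m depth
        travel_time += h / vs
        depth += h
        if depth >= 30.0:
            break
    return 30.0 / travel_time

# Hypothetical profile: 5 m of soft soil, 15 m of stiff soil, bedrock below.
print(round(vs30([(5, 150), (15, 400), (40, 1500)]), 1))  # → 387.1
```

Because the average is harmonic, the slow surface layer dominates the result, which is why soft shallow soils lower a site's code classification.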
We <span class="hlt">apply</span> our non-invasive testing at three major infrastructure projects, e.g., five bridge sites along the Rt. Hon. Herb Gray Parkway in Windsor, Ontario. Our non-invasive testing is co-located with previous invasive testing, including Standard Penetration Test (SPT), Cone Penetration Test and downhole Vs data. Correlations between SPT blow count and Vs are developed for the different soil types sampled at our Ontario bridge sites. A robust earthquake site classification procedure (reliable Vs30 estimates) for bridge sites across Ontario is evaluated from available combinations of invasive and non-invasive site characterization <span class="hlt">methods</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA286738','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA286738"><span>Proceedings of the International Conference on II-VI Compounds and Related Optoelectronic Materials (6th) Held in Newport, Rhode Island on 13-17 September 1993</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1993-09-17</p> <p><span class="hlt">Taguchi</span> and A. Hiraki, J. Crystal Growth 89 (1988) 331. In summary, we have reported the MOCVD ... [6] P.J. Wright and B. Cockayne, J. Crystal Growth 59 ... Kawakami, T. <span class="hlt">Taguchi</span> and A. Hiraki, J. Crystal Growth 89 (1988) 331 ... was located between Eex - E = 17 meV (the bound exciton energy) and Eex - E = 0 (the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JCoPh.263..283N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JCoPh.263..283N"><span>A finite elements <span class="hlt">method</span> to solve the Bloch-Torrey equation <span class="hlt">applied</span> to diffusion magnetic resonance imaging</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nguyen, Dang Van; Li, Jing-Rebecca; Grebenkov, Denis; Le Bihan, Denis</p> <p>2014-04-01</p> <p>The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch-Torrey partial differential equation (PDE). In addition, steady-state Laplace PDEs can be formulated to produce the homogenized diffusion tensor that describes the diffusion characteristics of the medium in the long time limit. In spatial domains that model biological tissues at the cellular level, these two types of PDEs have to be completed with permeability conditions on the cellular interfaces. To solve these PDEs, we implemented a finite elements <span class="hlt">method</span> that allows jumps in the solution at the cell interfaces by using double nodes. Using a transformation of the Bloch-Torrey PDE we reduced oscillations in the searched-for solution and simplified the implementation of the boundary conditions. The spatial discretization was then coupled to the adaptive explicit Runge-Kutta-Chebyshev time-stepping <span class="hlt">method</span>. Our proposed <span class="hlt">method</span> is second order accurate in space and second order accurate in time. We implemented this <span class="hlt">method</span> on the FEniCS C++ platform and show time and spatial convergence results. 
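The Bloch-Torrey solver above targets the full PDE, but in the long-time, mono-exponential limit the diffusion MRI signal reduces to S(b) = S0·exp(−b·D), so an apparent diffusion coefficient (ADC) can be recovered from two b-values. A minimal sketch with hypothetical free-water numbers:

```python
import math

# Mono-exponential diffusion signal model: S(b) = S0 * exp(-b * D).
# With two measurements, the apparent diffusion coefficient follows as
#   D = ln(S(b1) / S(b2)) / (b2 - b1).
def adc(s1, s2, b1, b2):
    return math.log(s1 / s2) / (b2 - b1)

# Hypothetical signals for free water at body temperature (D ≈ 3e-3 mm^2/s).
S0, D = 100.0, 3.0e-3
b1, b2 = 0.0, 1000.0                 # diffusion weightings in s/mm^2
s1 = S0 * math.exp(-b1 * D)
s2 = S0 * math.exp(-b2 * D)
print(f"recovered ADC: {adc(s1, s2, b1, b2):.4f} mm^2/s")
```

Deviations of real tissue signals from this single-exponential form are precisely what the compartmental Bloch-Torrey model is designed to capture.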
Finally, this <span class="hlt">method</span> is <span class="hlt">applied</span> to study some relevant questions in diffusion MRI.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20060036182&hterms=operation+management&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Doperation%2Bmanagement','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20060036182&hterms=operation+management&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Doperation%2Bmanagement"><span>Nickel-Cadmium Battery Operation Management Optimization Using Robust Design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Blosiu, Julian O.; Deligiannis, Frank; DiStefano, Salvador</p> <p>1996-01-01</p> <p>In recent years following several spacecraft battery anomalies, it was determined that managing the operational factors of NASA flight NiCd rechargeable battery was very important in order to maintain space flight battery nominal performance. The optimization of existing flight battery operational performance was viewed as something new for a <span class="hlt">Taguchi</span> <span class="hlt">Methods</span> application.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23388278','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23388278"><span><span class="hlt">Applying</span> usability <span class="hlt">methods</span> to identify health literacy issues: an example using a Personal Health Record.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Monkman, Helen; Kushniruk, Andre</p> <p>2013-01-01</p> <p>The prevalence of consumer health information systems is increasing. However, usability and health literacy impact both the value and adoption of these systems. 
Health literacy and usability are closely related in that systems may not be used accurately if users cannot understand the information therein. Thus, it is imperative to focus on mitigating the demands on health literacy in consumer health information systems. This study modified two usability evaluation <span class="hlt">methods</span> (heuristic evaluation and usability testing) to incorporate the identification of potential health literacy issues in a Personal Health Record (PHR). Heuristic evaluation is an analysis of a system performed by a usability specialist who evaluates how well the system abides by usability principles. In contrast, a usability test involves a post hoc analysis of a representative user interacting with the system. These two <span class="hlt">methods</span> revealed several health literacy issues and suggestions to ameliorate them were made. Thus, it was demonstrated that usability <span class="hlt">methods</span> could be successfully augmented for the purpose of investigating health literacy issues. 
To improve users' health knowledge, the adoption of consumer health information systems, and the accuracy of the information contained therein, it is encouraged that usability <span class="hlt">methods</span> be <span class="hlt">applied</span> with an added focus on health literacy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28129578','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28129578"><span><span class="hlt">Methods</span> and methodology for FTIR spectral correction of channel spectra and uncertainty, <span class="hlt">applied</span> to ferrocene.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Islam, M T; Trevorah, R M; Appadoo, D R T; Best, S P; Chantler, C T</p> <p>2017-04-15</p> <p>We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new <span class="hlt">method</span> for removal of channel spectra interference for high quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χr2 analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. <span class="hlt">Methods</span> for addressing these in the presence of a modest signal and how to quantify the quality of the data irrespective of preprocessing for subsequent hypothesis testing are <span class="hlt">applied</span> to the FTIR spectra of Ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d10) collected at the THz/Far-IR beam-line of the Australian Synchrotron at operating temperatures of 7 K through 353 K. Copyright © 2017 Elsevier B.V. 
All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AcSpA.190....1A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AcSpA.190....1A"><span>Different spectrophotometric <span class="hlt">methods</span> <span class="hlt">applied</span> for the analysis of simeprevir in the presence of its oxidative degradation product: A comparative study</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Attia, Khalid A. M.; El-Abasawi, Nasr M.; El-Olemy, Ahmed; Serag, Ahmed</p> <p>2018-02-01</p> <p>Five simple spectrophotometric <span class="hlt">methods</span> were developed for the determination of simeprevir in the presence of its oxidative degradation product, namely ratio difference, mean centering, derivative ratio using the Savitzky-Golay filters, second derivative and continuous wavelet transform. These <span class="hlt">methods</span> are linear in the range of 2.5-40 μg/mL and validated according to the ICH guidelines. The obtained results of accuracy, repeatability and precision were found to be within the acceptable limits. The specificity of the proposed <span class="hlt">methods</span> was tested using laboratory prepared mixtures and assessed by <span class="hlt">applying</span> the standard addition technique. Furthermore, these <span class="hlt">methods</span> were statistically comparable to the RP-HPLC <span class="hlt">method</span> and good results were obtained. 
So, they can be used for the routine analysis of simeprevir in quality-control laboratories.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SoPh..293...68B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SoPh..293...68B"><span>The Global Survey <span class="hlt">Method</span> <span class="hlt">Applied</span> to Ground-level Cosmic Ray Measurements</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Belov, A.; Eroshenko, E.; Yanke, V.; Oleneva, V.; Abunin, A.; Abunina, M.; Papaioannou, A.; Mavromichalaki, H.</p> <p>2018-04-01</p> <p>The global survey <span class="hlt">method</span> (GSM) technique unites simultaneous ground-level observations of cosmic rays in different locations and allows us to obtain the main characteristics of cosmic-ray variations outside of the atmosphere and magnetosphere of Earth. This technique has been developed and <span class="hlt">applied</span> in numerous studies over many years by the Institute of Terrestrial Magnetism, Ionosphere and Radiowave Propagation (IZMIRAN). We here describe the IZMIRAN version of the GSM in detail. With this technique, the hourly data of the world-wide neutron-monitor network from July 1957 until December 2016 were processed, and further processing is enabled upon the receipt of new data. The result is a database of homogeneous and continuous hourly characteristics of the density variations (an isotropic part of the intensity) and the 3D vector of the cosmic-ray anisotropy. It includes all of the effects that could be identified in galactic cosmic-ray variations that were caused by large-scale disturbances of the interplanetary medium in more than 50 years. These results in turn became the basis for a database on Forbush effects and interplanetary disturbances. 
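The core of a global-survey-style reduction as described above, obtaining an isotropic density variation plus a 3D anisotropy vector from simultaneous multi-station data, can be sketched as a linear least-squares fit. The station viewing directions and variations below are hypothetical stand-ins, not IZMIRAN network data or real coupling coefficients:

```python
# Sketch of a global-survey-style fit: each station j sees a variation
#   v_j ≈ a0 + u_j · A,
# where a0 is the isotropic density variation, A the 3D anisotropy vector,
# and u_j a (simplified) unit asymptotic viewing direction for station j.
import numpy as np

rng = np.random.default_rng(0)
n = 12                                           # hypothetical station count
u = rng.normal(size=(n, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)    # unit viewing directions

a0_true = -1.5                                   # percent, hypothetical
A_true = np.array([0.4, -0.2, 0.1])              # percent, hypothetical
v = a0_true + u @ A_true + rng.normal(scale=0.02, size=n)  # noisy station data

# Design matrix: a constant column plus the three direction components.
X = np.hstack([np.ones((n, 1)), u])
coef, *_ = np.linalg.lstsq(X, v, rcond=None)
print("isotropic part:", round(coef[0], 2), "anisotropy:", np.round(coef[1:], 2))
```

The real GSM replaces the toy directions with energy-dependent coupling coefficients and asymptotic directions computed through the magnetosphere, but the algebraic structure of the hourly fit is this one.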
This database allows correlating various space-environment parameters (the characteristics of the Sun, the solar wind, et cetera) with cosmic-ray parameters and studying their interrelations. We also present features of the coupling coefficients for different neutron monitors that enable us to make a connection from ground-level measurements to primary cosmic-ray variations outside the atmosphere and the magnetosphere. We discuss the strengths and weaknesses of the current version of the GSM as well as further possible developments and improvements. The <span class="hlt">method</span> developed allows us to minimize the problems of the neutron-monitor network, which are typical for experimental physics, and to considerably enhance its advantages.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3365060','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3365060"><span>DISCO-SCA and Properly <span class="hlt">Applied</span> GSVD as Swinging <span class="hlt">Methods</span> to Find Common and Distinctive Processes</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Van Deun, Katrijn; Van Mechelen, Iven; Thorrez, Lieven; Schouteden, Martijn; De Moor, Bart; van der Werf, Mariët J.; De Lathauwer, Lieven; Smilde, Age K.; Kiers, Henk A. L.</p> <p>2012-01-01</p> <p>Background In systems biology it is common to obtain for the same set of biological entities information from multiple sources. Examples include expression data for the same set of orthologous genes screened in different organisms and data on the same set of culture samples obtained with different high-throughput techniques. 
A major challenge is to find the important biological processes underlying the data and to disentangle therein processes common to all data sources and processes distinctive for a specific source. Recently, two promising simultaneous data integration <span class="hlt">methods</span> have been proposed to attain this goal, namely generalized singular value decomposition (GSVD) and simultaneous component analysis with rotation to common and distinctive components (DISCO-SCA). Results Both theoretical analyses and applications to biologically relevant data show that: (1) straightforward applications of GSVD yield unsatisfactory results, (2) DISCO-SCA performs well, (3) provided proper pre-processing and algorithmic adaptations, GSVD reaches a performance level similar to that of DISCO-SCA, and (4) DISCO-SCA is directly generalizable to more than two data sources. The biological relevance of DISCO-SCA is illustrated with two applications. First, in a setting of comparative genomics, it is shown that DISCO-SCA recovers a common theme of cell cycle progression and a yeast-specific response to pheromones. The biological annotation was obtained by <span class="hlt">applying</span> Gene Set Enrichment Analysis in an appropriate way. Second, in an application of DISCO-SCA to metabolomics data for Escherichia coli obtained with two different chemical analysis platforms, it is illustrated that the metabolites involved in some of the biological processes underlying the data are detected by one of the two platforms only; therefore, platforms for microbial metabolomics should be tailored to the biological question. 
Conclusions Both DISCO-SCA and properly <span class="hlt">applied</span> GSVD are promising integrative <span class="hlt">methods</span> for</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=statistical+AND+process+AND+control&pg=2&id=EJ511887','ERIC'); return false;" href="https://eric.ed.gov/?q=statistical+AND+process+AND+control&pg=2&id=EJ511887"><span><span class="hlt">Applied</span> Behavior Analysis and Statistical Process Control?</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Hopkins, B. L.</p> <p>1995-01-01</p> <p>Incorporating statistical process control (SPC) <span class="hlt">methods</span> into <span class="hlt">applied</span> behavior analysis is discussed. It is claimed that SPC <span class="hlt">methods</span> would likely reduce <span class="hlt">applied</span> behavior analysts' intimate contacts with problems and would likely yield poor treatment and research decisions. Cases and data presented by Pfadt and Wheeler (1995) are cited as examples.…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CPM...tmp....2F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CPM...tmp....2F"><span>Meshless Lagrangian SPH <span class="hlt">method</span> <span class="hlt">applied</span> to isothermal lid-driven cavity flow at low-Re numbers</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fraga Filho, C. A. D.; Chacaltana, J. T. A.; Pinto, W. J. N.</p> <p>2018-01-01</p> <p>SPH is a recent particle <span class="hlt">method</span> <span class="hlt">applied</span> to the study of cavity flows, with few results available in the literature. 
The lid-driven cavity flow is a classic problem of fluid mechanics, extensively explored in the literature and presenting considerable complexity. The aim of this paper is to present a solution from the Lagrangian viewpoint for this problem. The discretization of the continuum domain is performed using Lagrangian particles. The physical laws of mass, momentum and energy conservation are expressed by the Navier-Stokes equations. A serial numerical code, written in the Fortran programming language, has been used to perform the numerical simulations. The SPH results have been compared with those in the literature (mesh <span class="hlt">methods</span> and a meshless collocation <span class="hlt">method</span>). The positions of the primary vortex centre and the non-dimensional velocity profiles passing through the geometric centre of the cavity have been analysed. The numerical Lagrangian results showed good agreement with the results found in the literature, specifically for Re < 100. Suggestions for improving the SPH model presented are listed, in the search for better results for flows at higher Reynolds numbers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007APS..MARB27002T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007APS..MARB27002T"><span>Self-Learning Off-Lattice Kinetic Monte Carlo <span class="hlt">method</span> as <span class="hlt">applied</span> to growth on metal surfaces</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Trushin, Oleg; Kara, Abdelkader; Rahman, Talat</p> <p>2007-03-01</p> <p>We propose a new development in the Self-Learning Kinetic Monte Carlo (SLKMC) <span class="hlt">method</span> with the goal of improving the accuracy with which atomic mechanisms controlling diffusive processes on metal surfaces may be identified. 
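The particle discretization behind the SPH cavity study above rests on kernel-weighted sums over neighbours. A minimal sketch of the standard 2D cubic-spline kernel and the summation-density estimate; the particle layout and smoothing length are hypothetical choices for illustration:

```python
import math

# 2D cubic-spline SPH kernel (normalization 10/(7*pi*h^2)) and the
# summation-density estimate rho_i = sum_j m_j * W(|r_i - r_j|, h).
def w_cubic(r, h):
    q = r / h
    sigma = 10.0 / (7.0 * math.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def density(particles, masses, h):
    rho = []
    for xi, yi in particles:
        s = 0.0
        for (xj, yj), mj in zip(particles, masses):
            s += mj * w_cubic(math.hypot(xi - xj, yi - yj), h)
        rho.append(s)
    return rho

# Hypothetical uniform 5x5 lattice of unit-mass particles, spacing 1, h = 1.2.
pts = [(i, j) for i in range(5) for j in range(5)]
rho = density(pts, [1.0] * 25, 1.2)
print(f"density at the centre particle: {rho[12]:.3f}")
```

On a uniform unit-mass, unit-spacing lattice the interior density estimate comes out close to 1, while particles near the boundary are deficient in neighbours, which is the well-known SPH boundary problem that cavity-flow implementations must treat.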
This is important for diffusion of small clusters (2 - 20 atoms) in which atoms may occupy off-lattice positions. Such a procedure is also necessary for consideration of heteroepitaxial growth. The new technique combines an earlier version of SLKMC [1] with the inclusion of off-lattice occupancy. This allows us to include arbitrary positions of adatoms in the modeling and makes the simulations more realistic and reliable. We have tested this new approach for the case of the diffusion of small 2D Cu clusters on Cu(111) and found good performance and satisfactory agreement with results obtained from the previous version of SLKMC. The new <span class="hlt">method</span> also helped reveal a novel atomic mechanism contributing to cluster migration. We have also <span class="hlt">applied</span> this <span class="hlt">method</span> to study the diffusion of Cu clusters on Ag(111), and find that Cu atoms generally prefer to occupy off-lattice sites. [1] O. Trushin, A. Kara, A. Karim, T.S. Rahman, Phys. Rev. B (2005)</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MS%26E..160a2030F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MS%26E..160a2030F"><span>The effect of fibre content, fibre size and alkali treatment to Charpy impact resistance of Oil Palm fibre reinforced composite material</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fitri, Muhamad; Mahzan, Shahruddin</p> <p>2016-11-01</p> <p>In this research, the effect of fibre content, fibre size and alkali treatment on the impact resistance of the composite material has been investigated. The composite material employs oil palm fibre as the reinforcement material, whereas the matrix used for the composite material is polypropylene. The Oil Palm fibres are prepared for two conditions: alkali treated fibres and untreated fibres. 
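The event selection at the heart of any kinetic Monte Carlo scheme, including the SLKMC method above, is rejection-free (BKL-style): a move is picked with probability proportional to its rate and the clock advances by an exponentially distributed residence time. A minimal sketch with hypothetical hop rates:

```python
import math
import random

# Rejection-free (BKL) kinetic Monte Carlo step: choose event i with
# probability rates[i]/sum(rates), advance time by dt = -ln(u)/sum(rates).
def kmc_step(rates, rng):
    total = sum(rates)
    threshold = rng.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if acc >= threshold:
            break
    dt = -math.log(rng.random()) / total
    return i, dt

rng = random.Random(42)
rates = [5.0, 1.0, 0.1]       # hypothetical hop rates for three moves (1/s)
counts = [0, 0, 0]
t = 0.0
for _ in range(10000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
print("selection frequencies:", [c / 10000 for c in counts])
```

Over many steps the selection frequencies converge to the rate ratios (here roughly 0.82 : 0.16 : 0.016); the "self-learning" part of SLKMC is about building and extending the rate catalogue, not about this selection loop.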
The fibre sizes are varied in three sizes: 5 mm, 7 mm and 10 mm. During the composite material preparation, the fibre contents have also been varied over three percentages: 5%, 7% and 10%. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> is used as the statistical approach to determine the specimen variations. The results were also analyzed by the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, which shows that the Oil Palm fibre content significantly affects the impact resistance of the polymer matrix composite. However, the fibre size moderately affects the impact resistance, whereas the fibre treatment has an insignificant effect on the impact resistance of the oil palm fibre reinforced polymer matrix composite.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22980863','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22980863"><span>Optimisation of flavour ester biosynthesis in an aqueous system of coconut cream and fusel oil catalysed by lipase.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sun, Jingcan; Yu, Bin; Curran, Philip; Liu, Shao-Quan</p> <p>2012-12-15</p> <p>Coconut cream and fusel oil, two low-cost natural substances, were used as starting materials for the biosynthesis of flavour-active octanoic acid esters (ethyl-, butyl-, isobutyl- and (iso)amyl octanoate) using lipase Palatase as the biocatalyst. The <span class="hlt">Taguchi</span> design <span class="hlt">method</span> was used for the first time to optimize the biosynthesis of esters by a lipase in an aqueous system of coconut cream and fusel oil. Temperature, time and enzyme amount were found to be statistically significant factors and the optimal conditions were determined to be as follows: temperature 30°C, fusel oil concentration 9% (v/w), reaction time 24 h, pH 6.2 and enzyme amount 0.26 g. 
Under the optimised conditions, a yield of 14.25mg/g (based on cream weight) and signal-to-noise (S/N) ratio of 23.07 dB were obtained. The results indicate that the <span class="hlt">Taguchi</span> design <span class="hlt">method</span> was an efficient and systematic approach to the optimisation of lipase-catalysed biological processes. Copyright © 2012 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5619351','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5619351"><span>Specific algorithm <span class="hlt">method</span> of scoring the Clock Drawing Test <span class="hlt">applied</span> in cognitively normal elderly</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Mendes-Santos, Liana Chaves; Mograbi, Daniel; Spenciere, Bárbara; Charchat-Fichman, Helenice</p> <p>2015-01-01</p> <p>The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be <span class="hlt">applied</span> in different ways and scoring procedures also vary. Objective The aims of this study were to analyze the performance of elderly on the CDT and evaluate inter-rater reliability of the CDT scored by using a specific algorithm <span class="hlt">method</span> adapted from Sunderland et al. (1989). <span class="hlt">Methods</span> We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. 
Results and Conclusion A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT specific algorithm <span class="hlt">method</span> used had high inter-rater reliability (p<0.01), and mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging. PMID:29213954</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/22314860-finite-elements-method-solve-blochtorrey-equation-applied-diffusion-magnetic-resonance-imaging','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22314860-finite-elements-method-solve-blochtorrey-equation-applied-diffusion-magnetic-resonance-imaging"><span>A finite elements <span class="hlt">method</span> to solve the Bloch–Torrey equation <span class="hlt">applied</span> to diffusion magnetic resonance imaging</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Nguyen, Dang Van; NeuroSpin, Bat145, Point Courrier 156, CEA Saclay Center, 91191 Gif-sur-Yvette Cedex; Li, Jing-Rebecca, E-mail: jingrebecca.li@inria.fr</p> <p>2014-04-15</p> <p>The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch–Torrey partial differential equation (PDE). In addition, steady-state Laplace PDEs can be formulated to produce the homogenized diffusion tensor that describes the diffusion characteristics of the medium in the long time limit. 
In spatial domains that model biological tissues at the cellular level, these two types of PDEs have to be completed with permeability conditions on the cellular interfaces. To solve these PDEs, we implemented a finite elements <span class="hlt">method</span> that allows jumps in the solution at the cell interfaces by using double nodes. Using a transformation of the Bloch–Torrey PDE we reduced oscillations in the searched-for solution and simplified the implementation of the boundary conditions. The spatial discretization was then coupled to the adaptive explicit Runge–Kutta–Chebyshev time-stepping <span class="hlt">method</span>. Our proposed <span class="hlt">method</span> is second order accurate in space and second order accurate in time. We implemented this <span class="hlt">method</span> on the FEniCS C++ platform and show time and spatial convergence results. Finally, this <span class="hlt">method</span> is <span class="hlt">applied</span> to study some relevant questions in diffusion MRI.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/4284506','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/biblio/4284506"><span>ALLOY COATINGS AND <span class="hlt">METHOD</span> OF <span class="hlt">APPLYING</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Eubank, L.D.; Boller, E.R.</p> <p>1958-08-26</p> <p>A <span class="hlt">method</span> for providing uranium articles with a protective coating by a single dip coating process is presented. The uranium article is dipped into a molten zinc bath containing a small percentage of aluminum. The resultant product is a uranium article covered with a thin undercoat consisting of a uranium-aluminum alloy with a small amount of zinc, and an outer layer consisting of zinc and aluminum. 
The article may be used as is, or aluminum sheathing may then be bonded to the aluminum zinc outer layer.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li class="active"><span>21</span></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li class="active"><span>22</span></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA468706','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA468706"><span>Relaxed Fidelity CFD <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Store Separation Problems</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2004-06-01</p> <p>accuracy-productivity characteristics of influence function <span class="hlt">methods</span> and time-accurate CFD <span class="hlt">methods</span>. 
Two <span class="hlt">methods</span> are presented in this paper, both of...which provide significant accuracy improvements over influence function <span class="hlt">methods</span> while providing rapid enough turn around times to support parameter and</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AcSpA.156...54A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AcSpA.156...54A"><span>Effect of genetic algorithm as a variable selection <span class="hlt">method</span> on different chemometric models <span class="hlt">applied</span> for the analysis of binary mixture of amoxicillin and flucloxacillin: A comparative study</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed</p> <p>2016-03-01</p> <p>Different chemometric models were <span class="hlt">applied</span> for the quantitative analysis of amoxicillin (AMX), and flucloxacillin (FLX) in their binary mixtures, namely, partial least squares (PLS), spectral residual augmented classical least squares (SRACLS), concentration residual augmented classical least squares (CRACLS) and artificial neural networks (ANNs). All <span class="hlt">methods</span> were <span class="hlt">applied</span> with and without variable selection procedure (genetic algorithm GA). The <span class="hlt">methods</span> were used for the quantitative analysis of the drugs in laboratory prepared mixtures and real market sample via handling the UV spectral data. Robust and simpler models were obtained by <span class="hlt">applying</span> GA. 
The proposed <span class="hlt">methods</span> were found to be rapid, simple and required no preliminary separation steps.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26957538','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26957538"><span><span class="hlt">Applied</span> Nanotoxicology.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hobson, David W; Roberts, Stephen M; Shvedova, Anna A; Warheit, David B; Hinkley, Georgia K; Guy, Robin C</p> <p>2016-01-01</p> <p>Nanomaterials, including nanoparticles and nanoobjects, are being incorporated into everyday products at an increasing rate. These products include consumer products of interest to toxicologists such as pharmaceuticals, cosmetics, food, food packaging, household products, and so on. The manufacturing of products containing or utilizing nanomaterials in their composition may also present potential toxicologic concerns in the workplace. The molecular complexity and composition of these nanomaterials are ever increasing, and the means and <span class="hlt">methods</span> being <span class="hlt">applied</span> to characterize and perform useful toxicologic assessments are rapidly advancing. This article includes presentations by experienced toxicologists in the nanotoxicology community who are focused on the <span class="hlt">applied</span> aspect of the discipline toward supporting state of the art toxicologic assessments for food products and packaging, pharmaceuticals and medical devices, inhaled nanoparticle and gastrointestinal exposures, and addressing occupational safety and health issues and concerns. This symposium overview article summarizes 5 talks that were presented at the 35th Annual meeting of the American College of Toxicology on the subject of "<span class="hlt">Applied</span> Nanotechnology." 
© The Author(s) 2016.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22315545','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22315545"><span>Binary fingerprints at fluctuation-enhanced sensing.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chang, Hung-Chih; Kish, Laszlo B; King, Maria D; Kwan, Chiman</p> <p>2010-01-01</p> <p>We have developed a simple way to generate binary patterns based on spectral slopes in different frequency ranges at fluctuation-enhanced sensing. Such patterns can be considered as binary "fingerprints" of odors. The <span class="hlt">method</span> has experimentally been demonstrated with a commercial semiconducting metal oxide (<span class="hlt">Taguchi</span>) sensor exposed to bacterial odors (Escherichia coli and Anthrax-surrogate Bacillus subtilis) and processing their stochastic signals. With a single <span class="hlt">Taguchi</span> sensor, the situations of empty chamber, tryptic soy agar (TSA) medium, or TSA with bacteria could be distinguished with 100% reproducibility. The bacterium numbers were in the range of 2.5 × 10⁴-10⁶. 
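The spectral-slope fingerprinting described above can be sketched with a plain FFT: estimate the power spectrum, fit a log-log slope in each frequency band, and emit one bit per band. The band edges and slope threshold below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def spectral_slope_bits(signal, fs, bands, slope_threshold=-1.0):
    """Binary 'fingerprint' from power-spectrum slopes: one bit per
    frequency band, set when the fitted log-log slope exceeds a
    threshold. Bands are (lo_hz, hi_hz) pairs."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2  # periodogram estimate
    bits = []
    for lo, hi in bands:
        sel = (freqs >= lo) & (freqs < hi)
        # Slope of log-power vs log-frequency within the band
        slope = np.polyfit(np.log10(freqs[sel]), np.log10(psd[sel]), 1)[0]
        bits.append(1 if slope > slope_threshold else 0)
    return bits
```

White-like noise (slope near 0) and Brownian-like noise (slope near −2) then map to different bit patterns, which is the essence of the fingerprint.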
To illustrate the relevance for ultra-low power consumption, we show that this new type of signal processing and pattern recognition task can be implemented by a simple analog circuitry and a few logic gates with total power consumption in the microWatts range.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3488443','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3488443"><span>A new feature extraction <span class="hlt">method</span> for signal classification <span class="hlt">applied</span> to cord dorsum potentials detection</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.</p> <p>2012-01-01</p> <p>In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, <span class="hlt">applied</span> to CDP detection. The <span class="hlt">method</span> is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. 
These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other <span class="hlt">methods</span>. PMID:22929924</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006EJASP2007..307A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006EJASP2007..307A"><span><span class="hlt">Applying</span> Novel Time-Frequency Moments Singular Value Decomposition <span class="hlt">Method</span> and Artificial Neural Networks for Ballistocardiography</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo</p> <p>2006-12-01</p> <p>As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an n-by-1 or 1-by-n array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction <span class="hlt">method</span> which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new <span class="hlt">method</span>, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering <span class="hlt">methods</span>. 
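The abstract does not specify the exact matrix layout, so the following is only a plausible sketch of a TFM-SVD-style extractor: pack a few statistical moments of the time series and of its spectrum into a fixed-size matrix and keep that matrix's singular values as a fixed-length feature vector.

```python
import numpy as np

def tfm_svd_features(x):
    """Sketch of a TFM-SVD-style feature extractor. The 2 x 4 moment
    matrix below is an assumed layout, not the paper's specification."""
    x = np.asarray(x, dtype=float)
    spec = np.abs(np.fft.rfft(x))  # magnitude spectrum ("frequency series")

    def moments(v):
        # Simple statistical features of a series
        return [v.mean(), v.std(), np.abs(v).max(), np.median(v)]

    m = np.array([moments(x), moments(spec)])     # time row, frequency row
    return np.linalg.svd(m, compute_uv=False)     # singular values as features
```

Because the matrix shape is fixed, every input signal yields the same number of singular values, which is what makes them usable as classifier inputs.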
Results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction <span class="hlt">methods</span> such as wavelet transforms. To evaluate TFM-SVD, we <span class="hlt">applied</span> this new <span class="hlt">method</span> and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to screen for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as home, office, and so forth. The results show that the <span class="hlt">method</span> performs well and is almost insensitive to BCG waveform latency or nonlinear disturbance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4028787','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4028787"><span>Review of <span class="hlt">methods</span> used by chiropractors to determine the site for <span class="hlt">applying</span> manipulation</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>Background With the development of increasing evidence for the use of manipulation in the management of musculoskeletal conditions, there is growing interest in identifying the appropriate indications for care. Recently, attempts have been made to develop clinical prediction rules; however, the validity of these clinical prediction rules remains unclear and their impact on care delivery has yet to be established. 
The current study was designed to evaluate the literature on the validity and reliability of the more common <span class="hlt">methods</span> used by doctors of chiropractic to inform the choice of the site at which to <span class="hlt">apply</span> spinal manipulation. <span class="hlt">Methods</span> Structured searches were conducted in Medline, PubMed, CINAHL and ICL, supported by hand searches of archives, to identify studies of the diagnostic reliability and validity of common <span class="hlt">methods</span> used to identify the site of treatment application. To be included, studies were to present original data from studies of human subjects and be designed to address the region or location of care delivery. Only English language manuscripts from peer-reviewed journals were included. The quality of evidence was ranked using QUADAS for validity and QAREL for reliability, as appropriate. Data were extracted and synthesized, and were evaluated in terms of strength of evidence and the degree to which the evidence was favourable for clinical use of the <span class="hlt">method</span> under investigation. Results A total of 2594 titles were screened from which 201 articles met all inclusion criteria. The spectrum of manuscript quality was quite broad, as was the degree to which the evidence favoured clinical application of the diagnostic <span class="hlt">methods</span> reviewed. The most convincing favourable evidence was for <span class="hlt">methods</span> which confirmed or provoked pain at a specific spinal segmental level or region. There was also high quality evidence supporting the use, with limitations, of static and motion palpation, and measures of leg length inequality. 
Evidence of mixed quality supported the use</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19970004929','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19970004929"><span>Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wu, K. Chauncey</p> <p>1995-01-01</p> <p>An efficient <span class="hlt">method</span> is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present <span class="hlt">method</span> requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. <span class="hlt">Taguchi</span> <span class="hlt">methods</span> are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using <span class="hlt">Taguchi</span> <span class="hlt">methods</span> are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. 
Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28788111','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28788111"><span>Fabrication of an Optical Fiber Micro-Sphere with a Diameter of Several Tens of Micrometers.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yu, Huijuan; Huang, Qiangxian; Zhao, Jian</p> <p>2014-06-25</p> <p>A new <span class="hlt">method</span> to fabricate an integrated optical fiber micro-sphere with a diameter within 100 µm, based on the optical fiber tapering technique and the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, is proposed. Using a 125 µm diameter single-mode (SM) optical fiber, an optical fiber taper with a cone angle is formed with the tapering technique, and the fabrication optimization of a micro-sphere with a diameter of less than 100 µm is achieved using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. The optimum combination of process factor levels is obtained; the signal-to-noise ratio (SNR) of three quality evaluation parameters and the significance of each process factor influencing them are selected as the two standards. Using the minimum zone <span class="hlt">method</span> (MZM) to evaluate the quality of the fabricated optical fiber micro-sphere, a three-dimensional (3D) numerical fitting image of its surface profile and the true sphericity are subsequently realized. 
From the results, an optical fiber micro-sphere with a two-dimensional (2D) diameter less than 80 µm, 2D roundness error less than 0.70 µm, 2D offset distance between the micro-sphere center and the fiber stylus central line less than 0.65 µm, and true sphericity of about 0.5 µm, is fabricated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26553956','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26553956"><span><span class="hlt">Method</span> developments approaches in supercritical fluid chromatography <span class="hlt">applied</span> to the analysis of cosmetics.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lesellier, E; Mith, D; Dubrulle, I</p> <p>2015-12-04</p> <p> necessary, two-step gradient elution. The developed <span class="hlt">methods</span> were then <span class="hlt">applied</span> to real cosmetic samples to assess the <span class="hlt">method</span> specificity, with regards to matrix interferences, and calibration curves were plotted to evaluate quantification. Besides, depending on the matrix and on the studied compounds, the importance of the detector type, UV or ELSD (evaporative light-scattering detection), and of the particle size of the stationary phase is discussed. Copyright © 2015 Elsevier B.V. 
All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19980201048','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19980201048"><span>Boundary and Interface Conditions for High Order Finite Difference <span class="hlt">Methods</span> <span class="hlt">Applied</span> to the Euler and Navier-Stokes Equations</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Nordstrom, Jan; Carpenter, Mark H.</p> <p>1998-01-01</p> <p>Boundary and interface conditions for high order finite difference <span class="hlt">methods</span> <span class="hlt">applied</span> to the constant coefficient Euler and Navier-Stokes equations are derived. The boundary conditions lead to strict and strong stability. The interface conditions are stable and conservative even if the finite difference operators and mesh sizes vary from domain to domain. Numerical experiments show that the new conditions also lead to good results for the corresponding nonlinear problems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23531405','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23531405"><span>Validation of various adaptive threshold <span class="hlt">methods</span> of segmentation <span class="hlt">applied</span> to follicular lymphoma digital images stained with 3,3'-Diaminobenzidine&Haematoxylin.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene</p> <p>2013-03-25</p> <p>The comparative study of the results of various segmentation <span class="hlt">methods</span> for the digital images of the follicular lymphoma cancer tissue section is described in this paper. 
The sensitivity, specificity and some other parameters of the following adaptive threshold <span class="hlt">methods</span> of segmentation are calculated: the Niblack <span class="hlt">method</span>, the Sauvola <span class="hlt">method</span>, the White <span class="hlt">method</span>, the Bernsen <span class="hlt">method</span>, the Yasuda <span class="hlt">method</span> and the Palumbo <span class="hlt">method</span>. The <span class="hlt">methods</span> are <span class="hlt">applied</span> to three types of images constructed by extraction of the brown colour information from artificial images synthesized from counterpart experimentally captured images. This paper demonstrates the usefulness of the microscopic image synthesis <span class="hlt">method</span> for evaluating and comparing image processing results. Careful analysis of the broad range of adaptive threshold <span class="hlt">methods</span> <span class="hlt">applied</span> to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB allows the selection of method/image-type pairs for which a given <span class="hlt">method</span> is most efficient under various criteria, e.g. accuracy and precision in area detection or accuracy in counting detected objects. The comparison shows that the results of the White, the Bernsen and the Sauvola <span class="hlt">methods</span> are better than those of the remaining <span class="hlt">methods</span> for all types of monochromatic images. Taken overall, all three <span class="hlt">methods</span> segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively. However, the best results are achieved for the monochromatic image in which intensity encodes the brown colour map constructed by the colour deconvolution algorithm. 
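Adaptive thresholds of this family compute a per-pixel threshold from local window statistics. A minimal numpy sketch of the Sauvola rule, T = m·(1 + k·(s/R − 1)), using common textbook defaults (w = 15, k = 0.5, R = 128) rather than the paper's settings:

```python
import numpy as np

def sauvola_threshold(img, w=15, k=0.5, R=128.0):
    """Sauvola adaptive binarization: threshold each pixel against
    T = m * (1 + k * (s/R - 1)), with local mean m and std s taken over
    a w x w edge-padded window. Integral images give O(1) window sums."""
    img = img.astype(float)
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    # Integral images of x and x^2, with a leading zero row/column
    s1 = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    s2 = np.pad(np.cumsum(np.cumsum(p * p, axis=0), axis=1), ((1, 0), (1, 0)))
    H, W = img.shape
    y0 = np.arange(H)[:, None]
    x0 = np.arange(W)[None, :]
    y1, x1 = y0 + w, x0 + w

    def win(S):  # windowed sum via inclusion-exclusion on the integral image
        return S[y1, x1] - S[y0, x1] - S[y1, x0] + S[y0, x0]

    area = float(w * w)
    m = win(s1) / area
    var = np.maximum(win(s2) / area - m * m, 0.0)
    T = m * (1.0 + k * (np.sqrt(var) / R - 1.0))
    return img > T
```

The Niblack variant differs only in the threshold formula (T = m + k·s); swapping that one line turns this sketch into Niblack.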
The specificity in the cases of the Bernsen and the White <span class="hlt">methods</span> is 1, and the sensitivities are 0.74 for the White and 0.91 for the Bernsen <span class="hlt">method</span>, while the Sauvola <span class="hlt">method</span> achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, the objects selected by the Sauvola <span class="hlt">method</span> are segmented without</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014APS..DFD.L4003C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014APS..DFD.L4003C"><span>Kinetics-based phase change approach for VOF <span class="hlt">method</span> <span class="hlt">applied</span> to boiling flow</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cifani, Paolo; Geurts, Bernard; Kuerten, Hans</p> <p>2014-11-01</p> <p>Direct numerical simulations of boiling flows are performed to better understand the interaction of boiling phenomena with turbulence. The multiphase flow is simulated by solving a single set of equations for the whole flow field according to the one-fluid formulation, using a VOF interface capturing <span class="hlt">method</span>. Interface terms, related to surface tension, interphase mass transfer and latent heat, are added at the phase boundary. The mass transfer rate across the interface is derived from kinetic theory and subsequently coupled with the continuum representation of the flow field. The numerical model was implemented in OpenFOAM and validated against 3 cases: evaporation of a spherical uniformly heated droplet, growth of a spherical bubble in a superheated liquid and two dimensional film boiling. The computational model will be used to investigate the change in turbulence intensity in a fully developed channel flow due to interaction with boiling heat and mass transfer. 
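The abstract above derives the interphase mass-transfer rate from kinetic theory. One standard kinetic-theory closure (stated here as an assumption for illustration, not necessarily the exact model of the paper) is the Hertz-Knudsen expression, in which the evaporative mass flux scales with the deficit between the saturation pressure at the interface and the actual vapour pressure:

```python
import math

def hertz_knudsen_flux(p_sat, p_v, T_i, M, sigma=1.0):
    """Evaporative mass flux [kg/(m^2 s)] across a liquid-vapour interface.

    p_sat : saturation pressure at interface temperature T_i [Pa]
    p_v   : actual vapour partial pressure [Pa]
    T_i   : interface temperature [K]
    M     : molar mass [kg/mol]
    sigma : accommodation coefficient in (0, 1]
    """
    R = 8.314  # universal gas constant, J/(mol K)
    return sigma * math.sqrt(M / (2.0 * math.pi * R * T_i)) * (p_sat - p_v)
```

A positive flux corresponds to evaporation, a negative one to condensation; in a VOF solver this flux would feed the mass source terms at interface cells.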
In particular, we will focus on the influence of the vapor bubble volume fraction on enhancing heat and mass transfer. Furthermore, we will investigate kinetic energy spectra in order to identify the dynamics associated with the wakes of vapor bubbles. Department of <span class="hlt">Applied</span> Mathematics, 7500 AE Enschede, NL.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EnOp...46.1269X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EnOp...46.1269X"><span>An effective hybrid immune algorithm for solving the distributed permutation flow-shop scheduling problem</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xu, Ye; Wang, Ling; Wang, Shengyao; Liu, Min</p> <p>2014-09-01</p> <p>In this article, an effective hybrid immune algorithm (HIA) is presented to solve the distributed permutation flow-shop scheduling problem (DPFSP). First, a decoding <span class="hlt">method</span> is proposed to transfer a job permutation sequence to a feasible schedule considering both factory dispatching and job sequencing. Secondly, a local search with four search operators is presented based on the characteristics of the problem. Thirdly, a special crossover operator is designed for the DPFSP, and mutation and vaccination operators are also <span class="hlt">applied</span> within the framework of the HIA to perform an immune search. The influence of parameter setting on the HIA is investigated based on the <span class="hlt">Taguchi</span> <span class="hlt">method</span> of design of experiment. Extensive numerical testing results based on 420 small-sized instances and 720 large-sized instances are provided. The effectiveness of the HIA is demonstrated by comparison with some existing heuristic algorithms and the variable neighbourhood descent <span class="hlt">methods</span>. 
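The Taguchi design-of-experiment step used above for parameter setting samples an orthogonal array instead of the full factorial, then picks the best level of each factor from factor-level mean responses. A minimal sketch with the standard L9(3^4) array (the array is the textbook one; the response values in the test are synthetic):

```python
import numpy as np

# Standard Taguchi L9(3^4) orthogonal array, levels coded 0..2:
# 9 runs cover 4 three-level factors; every pair of columns contains
# each ordered level pair exactly once.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def best_levels(responses, smaller_is_better=True):
    # Main-effects analysis: average the response over the runs where a
    # factor sits at each level, then pick the best level per factor.
    responses = np.asarray(responses, dtype=float)
    best = []
    for f in range(L9.shape[1]):
        means = [responses[L9[:, f] == lev].mean() for lev in range(3)]
        best.append(int(np.argmin(means) if smaller_is_better
                        else np.argmax(means)))
    return best
```

Because the array is balanced, for an additive response each factor-level mean differs from that factor's own contribution only by a constant offset, so the selected levels are exact in that case; with interactions they are only a heuristic, which is the usual caveat of the Taguchi approach.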
New best known solutions are obtained by the HIA for 17 out of 420 small-sized instances and 585 out of 720 large-sized instances.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70013459','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70013459"><span>ALGORITHM TO REDUCE APPROXIMATION ERROR FROM THE COMPLEX-VARIABLE BOUNDARY-ELEMENT <span class="hlt">METHOD</span> <span class="hlt">APPLIED</span> TO SOIL FREEZING.</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Hromadka, T.V.; Guymon, G.L.</p> <p>1985-01-01</p> <p>An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to <span class="hlt">apply</span> to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element <span class="hlt">method</span>. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100035150','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100035150"><span>Higher Order, Hybrid BEM/FEM <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Antenna Modeling</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Fink, P. W.; Wilton, D. R.; Dobbins, J. A.</p> <p>2002-01-01</p> <p>In this presentation, the authors address topics relevant to higher order modeling using hybrid BEM/FEM formulations. 
The first of these is the limitation on convergence rates imposed by geometric modeling errors in the analysis of scattering by a dielectric sphere. The second topic is the application of an Incomplete LU Threshold (ILUT) preconditioner to solve the linear system resulting from the BEM/FEM formulation. The final topic is the application of the higher order BEM/FEM formulation to antenna modeling problems. The authors have previously presented work on the benefits of higher order modeling. To achieve these benefits, special attention is required in the integration of singular and near-singular terms arising in the surface integral equation. Several <span class="hlt">methods</span> for handling these terms have been presented. It is also well known that achieving the high rates of convergence afforded by higher order bases may also require the employment of higher order geometry models. A number of publications have described the use of quadratic elements to model curved surfaces. The authors have shown in an EFIE formulation, <span class="hlt">applied</span> to scattering by a PEC sphere, that quadratic order elements may be insufficient to prevent the domination of modeling errors. In fact, on a PEC sphere with radius r = 0.58 λ<sub>0</sub>, a quartic order geometry representation was required to obtain a convergence benefit from quadratic bases when compared to the convergence rate achieved with linear bases. Initial trials indicate that, for a dielectric sphere of the same radius, requirements on the geometry model are not as severe as for the PEC sphere. The authors will present convergence results for higher order bases as a function of the geometry model order in the hybrid BEM/FEM formulation <span class="hlt">applied</span> to dielectric spheres. It is well known that the system matrix resulting from the hybrid BEM/FEM formulation is ill-conditioned. 
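The ILUT preconditioner mentioned above controls fill-in by dropping small entries against a threshold. Its simpler relative ILU(0), which permits no fill-in outside the original sparsity pattern, conveys the same idea and fits in a few lines (dense storage and plain loops for clarity; production BEM/FEM codes use sparse formats and a threshold/fill-factor variant):

```python
import numpy as np

def ilu0(A):
    # ILU(0): compute L (unit lower triangular) and U packed in one matrix,
    # restricting all updates to the nonzero pattern of A so that no
    # fill-in is created. With a full pattern this reduces to exact LU.
    LU = A.astype(float).copy()
    pattern = A != 0
    n = A.shape[0]
    for k in range(n - 1):
        for i in range(k + 1, n):
            if pattern[i, k]:
                LU[i, k] /= LU[k, k]          # multiplier, stored in L part
                for j in range(k + 1, n):
                    if pattern[i, j] and pattern[k, j]:
                        LU[i, j] -= LU[i, k] * LU[k, j]
    return LU
```

The factors are applied as a preconditioner by two triangular solves per iteration of a Krylov method such as GMRES, which is the usual setting for the ill-conditioned BEM/FEM system described here.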
For many real applications, a good preconditioner is required</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=strategic+AND+planning+AND+process&pg=5&id=EJ789520','ERIC'); return false;" href="https://eric.ed.gov/?q=strategic+AND+planning+AND+process&pg=5&id=EJ789520"><span><span class="hlt">Applying</span> Mixed <span class="hlt">Methods</span> Techniques in Strategic Planning</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Voorhees, Richard A.</p> <p>2008-01-01</p> <p>In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed <span class="hlt">methods</span>, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=science+AND+information&pg=4&id=ED564953','ERIC'); return false;" href="https://eric.ed.gov/?q=science+AND+information&pg=4&id=ED564953"><span><span class="hlt">Applying</span> Human Computation <span class="hlt">Methods</span> to Information Science</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Harris, Christopher Glenn</p> <p>2013-01-01</p> <p>Human Computation <span class="hlt">methods</span> such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. 
Despite this increased attention, much of this transformation has been focused on…</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JMMM..446..231F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JMMM..446..231F"><span>Non-destructive scanning for <span class="hlt">applied</span> stress by the continuous magnetic Barkhausen noise <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Franco Grijalba, Freddy A.; Padovese, L. 
R.</p> <p>2018-01-01</p> <p>This paper reports the use of a non-destructive continuous magnetic Barkhausen noise technique to detect <span class="hlt">applied</span> stress on steel surfaces. The stress profile generated in a sample of 1070 steel subjected to a three-point bending test is analyzed. The influence of different parameters such as pickup coil type, scanner speed, <span class="hlt">applied</span> magnetic field and frequency band analyzed on the effectiveness of the technique is investigated. A moving smoothing window based on a second-order statistical moment is used to analyze the time signal. The findings show that the technique can be used to detect <span class="hlt">applied</span> stress profiles.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MCM....54...99M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MCM....54...99M"><span>An <span class="hlt">Applied</span> <span class="hlt">Method</span> for Predicting the Load-Carrying Capacity in Compression of Thin-Wall Composite Structures with Impact Damage</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mitrofanov, O.; Pavelko, I.; Varickis, S.; Vagele, A.</p> <p>2018-03-01</p> <p>The necessity for considering both strength criteria and postbuckling effects in calculating the load-carrying capacity in compression of thin-wall composite structures with impact damage is substantiated. An original <span class="hlt">applied</span> <span class="hlt">method</span> ensuring solution of these problems with an accuracy sufficient for practical design tasks is developed. The main advantage of the <span class="hlt">method</span> is its applicability in terms of computing resources and the set of initial data required. 
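The "moving smoothing window based on a second-order statistical moment" in the Barkhausen-noise abstract above is, in effect, a running root-mean-square envelope of the noise signal. A sketch (the window length is an illustrative choice, not the paper's setting):

```python
import numpy as np

def moving_rms(x, w):
    # Running RMS (second-order moment) over a sliding window of length w,
    # computed with a cumulative sum so the cost is O(n).
    x = np.asarray(x, dtype=float)
    csum = np.cumsum(np.concatenate(([0.0], x * x)))
    return np.sqrt((csum[w:] - csum[:-w]) / w)
```

Scanning the pickup-coil signal with such an envelope turns the raw Barkhausen bursts into a smooth profile whose level can then be correlated with the applied stress along the scan path.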
The results of application of the <span class="hlt">method</span> to solution of the problem of compression of fragments of thin-wall honeycomb panel damaged by impacts of various energies are presented. After a comparison of calculation results with experimental data, a working algorithm for calculating the reduction in the load-carrying capacity of a composite object with impact damage is adopted.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2014-title32-vol1/pdf/CFR-2014-title32-vol1-sec37-1220.pdf','CFR2014'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2014-title32-vol1/pdf/CFR-2014-title32-vol1-sec37-1220.pdf"><span>32 CFR 37.1220 - <span class="hlt">Applied</span> research.</span></a></p> <p><a target="_blank" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2014&page.go=Go">Code of Federal Regulations, 2014 CFR</a></p> <p></p> <p>2014-07-01</p> <p>... 32 National Defense 1 2014-07-01 2014-07-01 false <span class="hlt">Applied</span> research. 37.1220 Section 37.1220... REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Definitions of Terms Used in This Part § 37.1220 <span class="hlt">Applied</span> research... technology such as new materials, devices, <span class="hlt">methods</span> and processes. It typically is funded in Research...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2010-title32-vol1/pdf/CFR-2010-title32-vol1-sec37-1220.pdf','CFR'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2010-title32-vol1/pdf/CFR-2010-title32-vol1-sec37-1220.pdf"><span>32 CFR 37.1220 - <span class="hlt">Applied</span> research.</span></a></p> <p><a target="_blank" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2010&page.go=Go">Code of Federal Regulations, 2010 CFR</a></p> <p></p> <p>2010-07-01</p> <p>... 
32 National Defense 1 2010-07-01 2010-07-01 false <span class="hlt">Applied</span> research. 37.1220 Section 37.1220... REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Definitions of Terms Used in This Part § 37.1220 <span class="hlt">Applied</span> research... technology such as new materials, devices, <span class="hlt">methods</span> and processes. It typically is funded in Research...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2013-title32-vol1/pdf/CFR-2013-title32-vol1-sec37-1220.pdf','CFR2013'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2013-title32-vol1/pdf/CFR-2013-title32-vol1-sec37-1220.pdf"><span>32 CFR 37.1220 - <span class="hlt">Applied</span> research.</span></a></p> <p><a target="_blank" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2013&page.go=Go">Code of Federal Regulations, 2013 CFR</a></p> <p></p> <p>2013-07-01</p> <p>... 32 National Defense 1 2013-07-01 2013-07-01 false <span class="hlt">Applied</span> research. 37.1220 Section 37.1220... REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Definitions of Terms Used in This Part § 37.1220 <span class="hlt">Applied</span> research... technology such as new materials, devices, <span class="hlt">methods</span> and processes. It typically is funded in Research...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2012-title32-vol1/pdf/CFR-2012-title32-vol1-sec37-1220.pdf','CFR2012'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2012-title32-vol1/pdf/CFR-2012-title32-vol1-sec37-1220.pdf"><span>32 CFR 37.1220 - <span class="hlt">Applied</span> research.</span></a></p> <p><a target="_blank" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2012&page.go=Go">Code of Federal Regulations, 2012 CFR</a></p> <p></p> <p>2012-07-01</p> <p>... 32 National Defense 1 2012-07-01 2012-07-01 false <span class="hlt">Applied</span> research. 
37.1220 Section 37.1220... REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Definitions of Terms Used in This Part § 37.1220 <span class="hlt">Applied</span> research... technology such as new materials, devices, <span class="hlt">methods</span> and processes. It typically is funded in Research...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/CFR-2011-title32-vol1/pdf/CFR-2011-title32-vol1-sec37-1220.pdf','CFR2011'); return false;" href="https://www.gpo.gov/fdsys/pkg/CFR-2011-title32-vol1/pdf/CFR-2011-title32-vol1-sec37-1220.pdf"><span>32 CFR 37.1220 - <span class="hlt">Applied</span> research.</span></a></p> <p><a target="_blank" href="http://www.gpo.gov/fdsys/browse/collectionCfr.action?selectedYearFrom=2011&page.go=Go">Code of Federal Regulations, 2011 CFR</a></p> <p></p> <p>2011-07-01</p> <p>... 32 National Defense 1 2011-07-01 2011-07-01 false <span class="hlt">Applied</span> research. 37.1220 Section 37.1220... REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Definitions of Terms Used in This Part § 37.1220 <span class="hlt">Applied</span> research... technology such as new materials, devices, <span class="hlt">methods</span> and processes. 
It typically is funded in Research...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29800353','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29800353"><span><span class="hlt">Applied</span> Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification <span class="hlt">Methods</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J</p> <p>2018-05-17</p> <p>The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification <span class="hlt">method</span> performances under varying signal chaos conditions without subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were <span class="hlt">applied</span> to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis <span class="hlt">methods</span> under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performances of different voice classification <span class="hlt">methods</span>. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. 
Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test <span class="hlt">method</span>. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis <span class="hlt">methods</span> and establish the most appropriate methodology for objective voice analysis in clinical practice.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19930000238&hterms=creating&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dcreating','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19930000238&hterms=creating&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dcreating"><span>Creating A Data Base For Design Of An Impeller</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Prueger, George H.; Chen, Wei-Chung</p> <p>1993-01-01</p> <p>Report describes use of <span class="hlt">Taguchi</span> <span class="hlt">method</span> of parametric design to create data base facilitating optimization of design of impeller in centrifugal pump. Data base enables systematic design analysis covering all significant design parameters. 
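The economy of the orthogonal-array approach in the impeller study is easy to check: one two-level factor plus seven three-level factors give 2·3^7 = 4,374 full-factorial combinations, exactly the factor mix that the standard Taguchi L18 array covers in 18 runs. The factor mix here is inferred from the run counts, so treat it as an assumption:

```python
from itertools import product

# Hypothetical factor levels: one 2-level factor and seven 3-level factors,
# the mix accommodated by the Taguchi L18 orthogonal array.
levels = [2] + [3] * 7

# Enumerate the full factorial explicitly and count the design points.
full_factorial = sum(1 for _ in product(*(range(n) for n in levels)))

print(full_factorial)  # distinct designs in the full factorial
print(18)              # runs required by the L18 orthogonal array
```

The ratio of the two counts is the "reduces time and cost" claim in concrete form: the array samples the design space sparsely but in a balanced way.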
Reduces time and cost of parametric optimization of design: for particular impeller considered, one can cover 4,374 designs by computational simulations of performance for only 18 cases.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27402980','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27402980"><span>A Mixed-<span class="hlt">Methods</span> Analysis in Assessing Students' Professional Development by <span class="hlt">Applying</span> an Assessment for Learning Approach.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Peeters, Michael J; Vaidya, Varun A</p> <p>2016-06-25</p> <p>Objective. To describe an approach for assessing the Accreditation Council for Pharmacy Education's (ACPE) doctor of pharmacy (PharmD) Standard 4.4, which focuses on students' professional development. <span class="hlt">Methods</span>. This investigation used mixed <span class="hlt">methods</span> with triangulation of qualitative and quantitative data to assess professional development. Qualitative data came from an electronic developmental portfolio of professionalism and ethics, completed by PharmD students during their didactic studies. Quantitative confirmation came from the Defining Issues Test (DIT)-an assessment of pharmacists' professional development. Results. Qualitatively, students' development reflections described growth through this course series. Quantitatively, the 2015 PharmD class's DIT N2-scores illustrated positive development overall; the lower 50% had a large initial improvement compared to the upper 50%. Subsequently, the 2016 PharmD class confirmed these average initial improvements of students and also showed further substantial development among students thereafter. Conclusion. 
<span class="hlt">Applying</span> an assessment for learning approach, triangulation of qualitative and quantitative assessments confirmed that PharmD students developed professionally during this course series.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26340789','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26340789"><span>Neural-Dynamic-<span class="hlt">Method</span>-Based Dual-Arm CMG Scheme With Time-Varying Constraints <span class="hlt">Applied</span> to Humanoid Robots.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing</p> <p>2015-12-01</p> <p>We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic <span class="hlt">method</span>, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design <span class="hlt">method</span>, first, a cyclic-motion performance index is exploited and <span class="hlt">applied</span>. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. 
Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/4231464','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/biblio/4231464"><span><span class="hlt">METHOD</span> OF <span class="hlt">APPLYING</span> COPPER COATINGS TO URANIUM</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Gray, A.G.</p> <p>1959-07-14</p> <p>A <span class="hlt">method</span> is presented for protecting metallic uranium, which comprises anodic etching of the uranium in an aqueous phosphoric acid solution containing chloride ions, cleaning the etched uranium in aqueous nitric acid solution, promptly electro-plating the cleaned uranium in a copper electro-plating bath, and then electro-plating thereupon lead, tin, zinc, cadmium, chromium or nickel from an aqueous electro-plating bath.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20597814','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20597814"><span>A novel system <span class="hlt">applying</span> the 2-deoxyglucose <span class="hlt">method</span> to fish for characterization of environmental neurotoxins.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Choich, J A; Sass, J B; Silbergeld, E K</p> <p>2002-01-01</p> <p><span class="hlt">Methods</span> of identifying and preventing ecotoxicity related to environmental stressors on wildlife species are underdeveloped. To detect sublethal effects, we have devised a neurochemical <span class="hlt">method</span> of evaluating environmental neurotoxins by measuring changes in regional neural activity in the central nervous system of fish. 
Our system is a unique adaptation of the 2-deoxyglucose (2-DG) <span class="hlt">method</span> originally developed by L. Sokoloff in 1977, which is based on the direct relationship between glucose metabolism and neural functioning at the regional level. We <span class="hlt">applied</span> these concepts to test the assumption that changes in neural activity as a result of chemical exposure would produce measurable effects on the amount of [(14)C]2-DG accumulated regionally in the brain of Tilapia nilotica. For purposes of this study, we utilized the excitotoxin N-methyl-D-aspartate (NMDA) to characterize the response of the central nervous system. Regional accumulation of [(14)C]2-DG was visualized by autoradiography and digital image processing. Observable increases in regional [(14)C]2-DG uptake were evident in all NMDA-treated groups as compared to controls. Specific areas of increased [(14)C]2-DG uptake included the telencephalon, optic tectum, and regions of the cerebellum, all areas in which high concentrations of NMDA-subtype glutamate receptors have been found in Tilapia mossambica. These results are consistent with the known neural excitatory action of NMDA.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17584519','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17584519"><span>Parametric design of pressure-relieving foot orthosis using statistics-based finite element <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cheung, Jason Tak-Man; Zhang, Ming</p> <p>2008-04-01</p> <p>Custom-molded foot orthoses are frequently prescribed in routine clinical practice to prevent or treat plantar ulcers in diabetes by reducing the peak plantar pressure. However, the design and fabrication of foot orthosis vary among clinical practitioners and manufacturers. 
Moreover, little information about the parametric effect of different combinations of design factors is available. As an alternative to the experimental approach, therefore, computational models of the foot and footwear can provide efficient evaluations of different combinations of structural and material design factors on plantar pressure distribution. In this study, a combined finite element and <span class="hlt">Taguchi</span> <span class="hlt">method</span> was used to identify the sensitivity of five design factors (arch type, insole and midsole thickness, insole and midsole stiffness) of foot orthosis on peak plantar pressure relief. From the FE predictions, the custom-molded shape was found to be the most important design factor in reducing peak plantar pressure. Besides the use of an arch-conforming foot orthosis, the insole stiffness was found to be the second most important factor for peak pressure reduction. Other design factors, such as insole thickness, midsole stiffness and midsole thickness, contributed to less important roles in peak pressure reduction in the given order. 
The statistics-based FE <span class="hlt">method</span> was found to be an effective approach in evaluating and optimizing the design of foot orthosis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26541560','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26541560"><span>The Application of Intensive Longitudinal <span class="hlt">Methods</span> to Investigate Change: Stimulating the Field of <span class="hlt">Applied</span> Family Research.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bamberger, Katharine T</p> <p>2016-03-01</p> <p>The use of intensive longitudinal <span class="hlt">methods</span> (ILM)-rapid in situ assessment at micro timescales-can be overlaid on RCTs and other study designs in <span class="hlt">applied</span> family research. Particularly, when done as part of a multiple timescale design-in bursts over macro timescales-ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members beginning with the first intervention session. This paper discusses the need and rationale for <span class="hlt">applying</span> ILM to family intervention evaluation, new research questions that can be addressed with ILM, example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. 
Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4755853','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4755853"><span>The Application of Intensive Longitudinal <span class="hlt">Methods</span> to Investigate Change: Stimulating the Field of <span class="hlt">Applied</span> Family Research</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Bamberger, Katharine T.</p> <p>2015-01-01</p> <p>The use of intensive longitudinal <span class="hlt">methods</span> (ILM)—rapid in situ assessment at micro timescales—can be overlaid on RCTs and other study designs in <span class="hlt">applied</span> family research. Especially when done as part of a multiple timescale design—in bursts over macro timescales—ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members beginning with the first intervention session. 
This paper discusses the need and rationale for <span class="hlt">applying</span> ILM to intervention evaluation, new research questions that can be addressed with ILM, and example research using ILM in the related fields of basic family research and the evaluation of individual-based (rather than family-based) interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM. PMID:26541560</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=sol&pg=2&id=EJ1058652','ERIC'); return false;" href="https://eric.ed.gov/?q=sol&pg=2&id=EJ1058652"><span>Aiming for the Singing Teacher: An <span class="hlt">Applied</span> Study on Preservice Kindergarten Teachers' Singing Skills Development within a Music <span class="hlt">Methods</span> Course</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Neokleous, Rania</p> <p>2015-01-01</p> <p>This study examined the effects of a music <span class="hlt">methods</span> course offered at a Cypriot university on the singing skills of 33 female preservice kindergarten teachers. To systematically measure and analyze student progress, the research design was both experimental and descriptive. As an <span class="hlt">applied</span> study which was carried out "in situ," the normal…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19840035138&hterms=factoring&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dfactoring','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19840035138&hterms=factoring&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dfactoring"><span>An implicit LU scheme for the Euler equations <span class="hlt">applied</span> to arbitrary cascades. 
[new <span class="hlt">method</span> of factoring]</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Buratynski, E. K.; Caughey, D. A.</p> <p>1984-01-01</p> <p>An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and <span class="hlt">applied</span> to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU <span class="hlt">method</span> is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22317249','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22317249"><span>A <span class="hlt">method</span> for work modeling at complex systems: towards <span class="hlt">applying</span> information systems in family health care units.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jatobá, Alessandro; de Carvalho, Paulo Victor R; da Cunha, Amauri Marques</p> <p>2012-01-01</p> <p>Work in organizations requires a minimum level of consensus on the understanding of the practices performed. To adopt technological devices that support activities in complex environments, characterized by interdependence among a large number of variables, understanding how work is done not only takes on even greater importance but also becomes a more difficult task. 
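The LU idea in the cascade-flow entry above replaces a costly direct inversion with lower- and upper-triangular factors that are each inverted by a cheap sweep. As a minimal sketch, a generic LU-SGS-style iteration on a small model system (not the authors' finite-volume Euler solver; the matrix and iteration count are illustrative assumptions):

```python
import numpy as np

def lu_sgs_solve(A, b, iters=50):
    """Iteratively solve A x = b using the approximate factorization
    A ~ (D + L) D^{-1} (D + U), inverted by one forward and one backward sweep.
    """
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x
        # forward sweep: (D + L) y = r  (lower-triangular system)
        y = np.linalg.solve(D + L, r)
        # backward sweep: (D + U) dx = D y  (upper-triangular system)
        dx = np.linalg.solve(D + U, D @ y)
        x = x + dx
    return x

# 1-D model problem: diagonally dominant tridiagonal system.
n = 20
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), -1)
     + np.diag(-1.0 * np.ones(n - 1), 1))
b = np.ones(n)
x = lu_sgs_solve(A, b)
```

Each iteration costs only one forward and one backward triangular solve, which is the source of the speed advantage over ADI that the entry reports.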
Therefore, this study aims to present a <span class="hlt">method</span> for modeling work in complex systems, which improves knowledge about the way activities are performed when they do not simply follow set procedures. Uniting techniques of Cognitive Task Analysis with the concept of Work Process, this work seeks to provide a <span class="hlt">method</span> capable of giving a detailed and accurate picture of how people perform their tasks, in order to <span class="hlt">apply</span> information systems for supporting work in organizations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28427353','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28427353"><span>Assessing the impact of natural policy experiments on socioeconomic inequalities in health: how to <span class="hlt">apply</span> commonly used quantitative analytical <span class="hlt">methods</span>?</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hu, Yannan; van Lenthe, Frank J; Hoffmann, Rasmus; van Hedel, Karen; Mackenbach, Johan P</p> <p>2017-04-20</p> <p>The scientific evidence base for policies to tackle health inequalities is limited. Natural policy experiments (NPE) have drawn increasing attention as a means of evaluating the effects of policies on health. Several analytical <span class="hlt">methods</span> can be used to evaluate the outcomes of NPEs in terms of average population health, but it is unclear whether they can also be used to assess the outcomes of NPEs in terms of health inequalities. The aim of this study therefore was to assess whether, and to demonstrate how, a number of commonly used analytical <span class="hlt">methods</span> for the evaluation of NPEs can be <span class="hlt">applied</span> to quantify the effect of policies on health inequalities. 
We identified seven quantitative analytical <span class="hlt">methods</span> for the evaluation of NPEs: regression adjustment, propensity score matching, difference-in-differences analysis, fixed effects analysis, instrumental variable analysis, regression discontinuity and interrupted time-series. We assessed whether these <span class="hlt">methods</span> can be used to quantify the effect of policies on the magnitude of health inequalities either by conducting a stratified analysis or by including an interaction term, and illustrated both approaches in a fictitious numerical example. All seven <span class="hlt">methods</span> can be used to quantify the equity impact of policies on absolute and relative inequalities in health by conducting an analysis stratified by socioeconomic position, and all but one (propensity score matching) can be used to quantify equity impacts by inclusion of an interaction term between socioeconomic position and policy exposure. <span class="hlt">Methods</span> commonly used in economics and econometrics for the evaluation of NPEs can also be <span class="hlt">applied</span> to assess the equity impact of policies, and our illustrations provide guidance on how to do this appropriately. The low external validity of results from instrumental variable analysis and regression discontinuity makes these <span class="hlt">methods</span> less desirable for assessing policy effects on population-level health inequalities. 
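As one concrete illustration of the interaction-term approach described above, a difference-in-differences regression with a policy-by-socioeconomic-position interaction can be sketched on fictitious data; all variable names and effect sizes here are invented for the example:

```python
import numpy as np

# Fictitious cell-mean data: one observation per (treated, post, low_ses) cell,
# generated from known effects so the regression can recover them exactly.
rows, y = [], []
for t in (0, 1):
    for p in (0, 1):
        for low in (0, 1):
            # design: intercept, treated, post, low_ses, DiD term, equity term
            rows.append([1.0, t, p, low, t * p, t * p * low])
            y.append(10.0 + 2.0 * t + 1.0 * p + 3.0 * low
                     + 2.0 * (t * p) + 1.5 * (t * p * low))
X = np.array(rows)
y = np.array(y)

# Ordinary least squares fit of the DiD model with the interaction term.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

did_high_ses = beta[4]            # policy effect in the high-SES (reference) group
did_low_ses = beta[4] + beta[5]   # policy effect in the low-SES group
equity_impact = beta[5]           # differential effect = impact on the SES gap
```

Here beta[4] is the policy effect in the reference (high-SES) group and beta[5] the additional effect in the low-SES group, so beta[5] is the estimated equity impact of the policy.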
Increased use of the</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018RScI...89b5104Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018RScI...89b5104Y"><span>Optimized lighting <span class="hlt">method</span> of <span class="hlt">applying</span> shaped-function signal for increasing the dynamic range of LED-multispectral imaging system</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling</p> <p>2018-02-01</p> <p>This paper proposes an optimized lighting <span class="hlt">method</span> of <span class="hlt">applying</span> a shaped-function signal for increasing the dynamic 
range of a light emitting diode (LED)-multispectral imaging system. The optimized lighting <span class="hlt">method</span> is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. Auxiliary light in a region of higher camera sensitivity is introduced to raise the A/D quantization levels into the linear response zone of the ADC and improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, while the auxiliary light is modulated by a constant-intensity signal, which makes it easy to acquire the images under active-light irradiation. The least-squares <span class="hlt">method</span> is employed to precisely extract the desired images. One wavelength in multispectral imaging based on LED illumination was taken as an example. Experiments showed that both the gray-scale resolution and the information accuracy of the images acquired by the proposed <span class="hlt">method</span> were significantly improved. The optimized <span class="hlt">method</span> opens up avenues for the hyperspectral imaging of biological tissue.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29495827','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29495827"><span>Optimized lighting <span class="hlt">method</span> of <span class="hlt">applying</span> shaped-function signal for increasing the dynamic range of LED-multispectral imaging system.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling</p> <p>2018-02-01</p> <p>This paper proposes an optimized lighting <span class="hlt">method</span> of <span class="hlt">applying</span> a shaped-function signal for increasing the dynamic range of light emitting diode (LED)-multispectral imaging system. 
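The per-pixel least-squares extraction step described in this entry can be sketched as follows; the frame count, modulation waveform and image sizes are invented for illustration and are not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known shaped-function modulation of the active LED over 16 frames.
# The auxiliary light is held constant, so it folds into the offset term.
frames_n = 16
s = 0.5 + 0.5 * np.sin(2 * np.pi * np.arange(frames_n) / frames_n)

# Hypothetical ground-truth 8x8 images: active-light image and constant background.
active = rng.uniform(50, 200, size=(8, 8))
background = rng.uniform(20, 60, size=(8, 8))

# Synthesize the captured frame stack: I_k = active * s_k + background.
stack = active[None, :, :] * s[:, None, None] + background[None, :, :]

# Per-pixel least squares against the design [s_k, 1] recovers the active image.
A = np.column_stack([s, np.ones(frames_n)])                    # (16, 2)
coef, *_ = np.linalg.lstsq(A, stack.reshape(frames_n, -1), rcond=None)
active_est = coef[0].reshape(8, 8)
```

Because the auxiliary light is constant across frames, it folds into the intercept column, and the coefficient on the known waveform s recovers the active-light image exactly in this noise-free sketch.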
The optimized lighting <span class="hlt">method</span> is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. Auxiliary light in a region of higher camera sensitivity is introduced to raise the A/D quantization levels into the linear response zone of the ADC and improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, while the auxiliary light is modulated by a constant-intensity signal, which makes it easy to acquire the images under active-light irradiation. The least-squares <span class="hlt">method</span> is employed to precisely extract the desired images. One wavelength in multispectral imaging based on LED illumination was taken as an example. Experiments showed that both the gray-scale resolution and the information accuracy of the images acquired by the proposed <span class="hlt">method</span> were significantly improved. The optimized <span class="hlt">method</span> opens up avenues for the hyperspectral imaging of biological tissue.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28788270','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28788270"><span>Reliability Study of Solder Paste Alloy for the Improvement of Solder Joint at Surface Mount Fine-Pitch Components.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rahman, Mohd Nizam Ab; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A; Mahmood, Wan Mohd Faizal Wan</p> <p>2014-12-02</p> <p>The significant increase in metal costs has forced the electronics industry to provide new materials and <span class="hlt">methods</span> to reduce costs, while maintaining customers' high-quality expectations. 
This paper addresses the problem, faced by most electronics manufacturers, of reducing costly materials by introducing a solder paste with an alloy composition of tin 98.3%, silver 0.3%, and copper 0.7%, used for the construction of surface mount fine-pitch components on a Printed Wiring Board (PWB). The reliability of the solder joint between electronic components and the PWB is evaluated through the dynamic characteristic test, thermal shock test, and the <span class="hlt">Taguchi</span> <span class="hlt">method</span> after the printing process. After experimenting with the dynamic characteristic test and thermal shock test on 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the <span class="hlt">Taguchi</span> <span class="hlt">method</span> is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine that affect solder volume and solder height. The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. 
The results show that the most significant parameter for solder volume is the squeegee pressure (2.0 mm), while for solder height it is the table speed of the SP machine (2.5 mm/s).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5456432','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5456432"><span>Reliability Study of Solder Paste Alloy for the Improvement of Solder Joint at Surface Mount Fine-Pitch Components</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Rahman, Mohd Nizam Ab.; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A.; Mahmood, Wan Mohd Faizal Wan</p> <p>2014-01-01</p> <p>The significant increase in metal costs has forced the electronics industry to provide new materials and <span class="hlt">methods</span> to reduce costs, while maintaining customers’ high-quality expectations. This paper addresses the problem, faced by most electronics manufacturers, of reducing costly materials by introducing a solder paste with an alloy composition of tin 98.3%, silver 0.3%, and copper 0.7%, used for the construction of surface mount fine-pitch components on a Printed Wiring Board (PWB). The reliability of the solder joint between electronic components and the PWB is evaluated through the dynamic characteristic test, thermal shock test, and <span class="hlt">Taguchi</span> <span class="hlt">method</span> after the printing process. After experimenting with the dynamic characteristic test and thermal shock test on 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the <span class="hlt">Taguchi</span> <span class="hlt">method</span> is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine that affect solder volume and solder height. 
The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. The results show that the most significant parameter for solder volume is the squeegee pressure (2.0 mm), while for solder height it is the table speed of the SP machine (2.5 mm/s). PMID:28788270</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.3290P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.3290P"><span>Goal oriented soil mapping: <span class="hlt">applying</span> modern <span class="hlt">methods</span> supported by local knowledge: A review</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pereira, Paulo; Brevik, Eric; Oliva, Marc; Estebaranz, Ferran; Depellegrin, Daniel; Novara, Agata; Cerda, Artemi; Menshov, Oleksandr</p> <p>2017-04-01</p> <p>In recent years, the amount of available soil data has increased substantially. This has facilitated the production of better and more accurate maps, important for sustainable land management (Pereira et al., 2017). Despite these advances, human knowledge remains extremely important for understanding the natural characteristics of the landscape. The knowledge accumulated and transmitted generation after generation is priceless, and should be considered a valuable data source for soil mapping and modelling. Local knowledge and wisdom can complement new advances in soil analysis. In addition, farmers are the most interested in participating and having their knowledge incorporated in the models, since they are the end users of the studies that soil scientists produce. Integrating local communities' vision and understanding of nature is an important step toward the implementation of decision makers' policies. 
Despite this, many challenges appear regarding the integration of local and scientific knowledge, since in some cases there is no spatial correlation between folk and scientific classifications, which may be attributed to the different cultural variables that influence local soil classification. The objective of this work is to review how modern soil <span class="hlt">methods</span> have incorporated local knowledge in their models. References Pereira, P., Brevik, E., Oliva, M., Estebaranz, F., Depellegrin, D., Novara, A., Cerda, A., Menshov, O. (2017) Goal Oriented soil mapping: <span class="hlt">applying</span> modern <span class="hlt">methods</span> supported by local knowledge. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B. (Eds.) Soil mapping and process modelling for sustainable land use management (Elsevier Publishing House) ISBN: 9780128052006</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/1015971','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/1015971"><span>A GIS modeling <span class="hlt">method</span> <span class="hlt">applied</span> to predicting forest songbird habitat</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Dettmers, Randy; Bart, Jonathan</p> <p>1999-01-01</p> <p>We have developed an approach for using "presence" data to construct habitat models. Presence data are those that indicate locations where the target organism is observed to occur, but that cannot be used to define locations where the organism does not occur. Surveys of highly mobile vertebrates often yield these kinds of data. Models developed through our approach yield predictions of the amount and the spatial distribution of good-quality habitat for the target species. 
This approach was developed primarily for use in a GIS context; thus, the models are spatially explicit and have the potential to be <span class="hlt">applied</span> over large areas. Our <span class="hlt">method</span> consists of two primary steps. In the first step, we identify an optimal range of values for each habitat variable to be used as a predictor in the model. To find these ranges, we employ the concept of maximizing the difference between cumulative distribution functions of (1) the values of a habitat variable at the observed presence locations of the target organism, and (2) the values of that habitat variable for all locations across a study area. In the second step, multivariate models of good habitat are constructed by combining these ranges of values, using the Boolean operators "and" and "or." We use an approach similar to forward stepwise regression to select the best overall model. We demonstrate the use of this <span class="hlt">method</span> by developing species-specific habitat models for nine forest-breeding songbirds (e.g., Cerulean Warbler, Scarlet Tanager, Wood Thrush) studied in southern Ohio. These models are based on species' microhabitat preferences for moisture and vegetation characteristics that can be predicted primarily through the use of abiotic variables. We use slope, land surface morphology, land surface curvature, water flow accumulation downhill, and an integrated moisture index, in conjunction with a land-cover classification that identifies forest/nonforest, to develop these models. 
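The first step of the approach, choosing the range of a habitat variable where presence points are over-represented relative to availability, can be sketched by maximizing the gap between two empirical CDFs. The variable, sample sizes and distributions below are invented for illustration:

```python
import numpy as np

def optimal_range(presence, available):
    """Pick the value range where presence points are over-represented,
    by maximizing the gap between the two empirical CDFs (KS-style)."""
    grid = np.sort(np.unique(np.concatenate([presence, available])))
    f_pres = np.searchsorted(np.sort(presence), grid, side="right") / presence.size
    f_all = np.searchsorted(np.sort(available), grid, side="right") / available.size
    d = f_pres - f_all
    lo = grid[np.argmin(d)]   # presence CDF furthest below availability: range start
    hi = grid[np.argmax(d)]   # presence CDF furthest above availability: range end
    return lo, hi

rng = np.random.default_rng(1)
available = rng.uniform(0, 100, 2000)   # e.g. slope values across the study area
presence = rng.uniform(40, 60, 200)     # species observed only on mid slopes

lo, hi = optimal_range(presence, available)
```

The gap d(x) falls while presence is under-represented and rises steeply through the preferred band, so its minimum and maximum bracket the over-represented range, here recovering roughly the 40-60 band the presence sample was drawn from.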
The performance of these</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19830025630&hterms=online+review&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Donline%2Breview','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19830025630&hterms=online+review&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D60%26Ntt%3Donline%2Breview"><span>A Review of System Identification <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Aircraft</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Klein, V.</p> <p>1983-01-01</p> <p>Airplane identification, equation error <span class="hlt">method</span>, maximum likelihood <span class="hlt">method</span>, parameter estimation in frequency domain, extended Kalman filter, aircraft equations of motion, aerodynamic model equations, criteria for the selection of a parsimonious model, and online aircraft identification are addressed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.usgs.gov/fs/2009/3113/','USGSPUBS'); return false;" href="https://pubs.usgs.gov/fs/2009/3113/"><span><span class="hlt">Applying</span> New <span class="hlt">Methods</span> to Diagnose Coral Diseases</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Kellogg, Christina A.; Zawada, David G.</p> <p>2009-01-01</p> <p>Coral disease, one of the major causes of reef degradation and coral death, has been increasing worldwide since the 1970s, particularly in the Caribbean. Despite increased scientific study, simple questions about the extent of disease outbreaks and the causative agents remain unanswered. A component of the U.S. 
Geological Survey Coral Reef Ecosystem STudies (USGS CREST) project is focused on developing and using new <span class="hlt">methods</span> to approach the complex problem of coral disease.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AIPC.1567..880K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AIPC.1567..880K"><span>Roll forming of eco-friendly stud</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Keum, Y. T.; Lee, S. Y.; Lee, T. H.; Sim, J. K.</p> <p>2013-12-01</p> <p>In order to manufacture an eco-friendly stud, the sheared pattern is designed by the <span class="hlt">Taguchi</span> <span class="hlt">method</span> and expanded by the side rolls. Seven geometrical shapes of the sheared pattern are considered in the structural and thermal analyses to select the best functional one in terms of the durability and fire resistance of dry wall. For optimizing the size of the chosen sheared pattern, the L9 orthogonal array and smaller-the-better characteristics of the <span class="hlt">Taguchi</span> <span class="hlt">method</span> are used. As the roll gap causes forming defects when the upper-and-lower roll type is adopted for expanding the sheared pattern, the side roll type is introduced. The stress and strain distributions obtained by the FEM simulation of roll-forming processes are utilized for the design of the expanding process. The expanding process by side rolls shortens the length of the expanding process and minimizes the cost of dies. Furthermore, the stud manufactured by expanding the sheared pattern of the web is eco-friendly because of the scrapless roll-forming process. 
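The L9 array with the smaller-the-better characteristic mentioned in this entry can be sketched as follows; the factor names, repeat counts and response values are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

# L9(3^3) orthogonal array: 9 runs, three 3-level factors (hypothetical
# sheared-pattern size parameters: length, width, pitch; levels 0/1/2).
L9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])

# Hypothetical responses (e.g. a peak forming stress measure), 3 repeats per run.
y = np.array([
    [12.1, 12.3, 11.9], [10.2, 10.4, 10.1], [11.0, 11.2, 10.9],
    [ 9.8,  9.9, 10.0], [10.5, 10.6, 10.4], [12.8, 12.9, 12.7],
    [11.5, 11.4, 11.6], [13.0, 13.2, 12.9], [10.8, 10.9, 10.7],
])

# Smaller-the-better signal-to-noise ratio per run: -10 log10(mean(y^2)).
sn = -10.0 * np.log10((y ** 2).mean(axis=1))

# Mean S/N per factor level; the best level is the one that maximizes S/N.
best_levels = []
for col in range(L9.shape[1]):
    level_sn = [sn[L9[:, col] == lev].mean() for lev in range(3)]
    best_levels.append(int(np.argmax(level_sn)))
```

For a smaller-the-better response the S/N ratio is -10 log10 of the mean squared response, so the best level of each factor is the one with the highest mean S/N across its three runs.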
In addition, compared with a conventionally roll-formed stud, the material cost is reduced by about 13.6% and the weight by about 15.5%.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3656801','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3656801"><span>Validation of various adaptive threshold <span class="hlt">methods</span> of segmentation <span class="hlt">applied</span> to follicular lymphoma digital images stained with 3,3'-Diaminobenzidine & Haematoxylin</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>A comparative study of the results of various segmentation <span class="hlt">methods</span> for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold <span class="hlt">methods</span> of segmentation are calculated: the Niblack <span class="hlt">method</span>, the Sauvola <span class="hlt">method</span>, the White <span class="hlt">method</span>, the Bernsen <span class="hlt">method</span>, the Yasuda <span class="hlt">method</span> and the Palumbo <span class="hlt">method</span>. The <span class="hlt">methods</span> are <span class="hlt">applied</span> to three types of images constructed by extracting the brown colour information from artificial images synthesized from counterpart experimentally captured images. This paper demonstrates the usefulness of the microscopic image synthesis <span class="hlt">method</span> for evaluating and comparing image processing results. 
The results of a careful analysis of a broad range of adaptive threshold <span class="hlt">methods</span> <span class="hlt">applied</span> to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB allow the selection of pairs (<span class="hlt">method</span>, image type) for which a given <span class="hlt">method</span> is most efficient under various criteria, e.g. accuracy and precision in area detection, or accuracy in the number of objects detected. The comparison shows that the White, the Bernsen and the Sauvola <span class="hlt">methods</span> give better results than the remaining <span class="hlt">methods</span> for all types of monochromatic images. Taken overall, all three <span class="hlt">methods</span> segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively. However, the best results are achieved for the monochromatic image whose intensity encodes the brown colour map constructed by the colour deconvolution algorithm. The specificity in the cases of the Bernsen and the White <span class="hlt">methods</span> is 1, with sensitivities of 0.74 for the White and 0.91 for the Bernsen <span class="hlt">method</span>, while the Sauvola <span class="hlt">method</span> achieves a sensitivity of 0.74 and a specificity of 0.99. 
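As a concrete instance of the adaptive thresholds compared in this entry, the Sauvola rule T = m * (1 + k * (s / R - 1)) can be sketched per pixel; the window size, k, R and the synthetic image below are common defaults and illustrative inventions, not the study's settings:

```python
import numpy as np

def sauvola_binarize(img, window=15, k=0.2, R=128.0):
    """Per-pixel Sauvola threshold T = m * (1 + k * (s / R - 1)), using the
    local mean m and standard deviation s over a window x window neighborhood.
    Returns True where the pixel is darker than its local threshold
    (dark stained object on a light background)."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    h, w = img.shape
    T = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            m, s = patch.mean(), patch.std()
            T[i, j] = m * (1.0 + k * (s / R - 1.0))
    return img < T

# Synthetic test image: light background with a darker square "nucleus".
img = np.full((40, 40), 200.0)
img[10:20, 10:20] = 50.0
mask = sauvola_binarize(img, window=15)
```

This naive double loop recomputes local statistics for every pixel; practical implementations use integral images to obtain the local mean and variance in constant time per pixel.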
According to Bland-Altman plot the Sauvola <span class="hlt">method</span> selected objects are segmented without</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011JChPh.134d5104C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011JChPh.134d5104C"><span>A Hamiltonian replica exchange <span class="hlt">method</span> for building protein-protein interfaces <span class="hlt">applied</span> to a leucine zipper</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cukier, Robert I.</p> <p>2011-01-01</p> <p>Leucine zippers consist of alpha helical monomers dimerized (or oligomerized) into alpha superhelical structures known as coiled coils. Forming the correct interface of a dimer from its monomers requires an exploration of configuration space focused on the side chains of one monomer that must interdigitate with sites on the other monomer. The aim of this work is to generate good interfaces in short simulations starting from separated monomers. <span class="hlt">Methods</span> are developed to accomplish this goal based on an extension of a previously introduced [Su and Cukier, J. Phys. Chem. B 113, 9595, (2009)] Hamiltonian temperature replica exchange <span class="hlt">method</span> (HTREM), which scales the Hamiltonian in both potential and kinetic energies that was used for the simulation of dimer melting curves. The new <span class="hlt">method</span>, HTREM_MS (MS designates mean square), focused on interface formation, adds restraints to the Hamiltonians for all but the physical system, which is characterized by the normal molecular dynamics force field at the desired temperature. The restraints in the nonphysical systems serve to prevent the monomers from separating too far, and have the dual aims of enhancing the sampling of close in configurations and breaking unwanted correlations in the restrained systems. 
The <span class="hlt">method</span> is <span class="hlt">applied</span> to a 31-residue truncation of the 33-residue leucine zipper (GCN4-p1) of the yeast transcriptional activator GCN4. The monomers are initially separated by a distance that is beyond their capture length. HTREM simulations show that the monomers oscillate between dimerlike and monomerlike configurations, but do not form a stable interface. HTREM_MS simulations result in the dimer interface being faithfully reconstructed on a 2 ns time scale. A small number of systems (one physical and two restrained with modified potentials and higher effective temperatures) are sufficient. An in silico mutant that should not dimerize because it lacks charged residues that provide electrostatic stabilization of the dimer</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PrAeS..53....1T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PrAeS..53....1T"><span>Review of hardware cost estimation <span class="hlt">methods</span>, models and tools <span class="hlt">applied</span> to early phases of space mission planning</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Trivailo, O.; Sippel, M.; Şekercioğlu, Y. A.</p> <p>2012-08-01</p> <p>The primary purpose of this paper is to review currently existing cost estimation <span class="hlt">methods</span>, models, tools and resources applicable to the space sector. While key space sector <span class="hlt">methods</span> are outlined, a specific focus is placed on hardware cost estimation on a system level, particularly for early mission phases during which specifications and requirements are not yet crystallised, and information is limited. For the space industry, cost engineering within the systems engineering framework is an integral discipline. 
The cost of any space program now constitutes a stringent design criterion, which must be considered and carefully controlled during the entire program life cycle. A first step to any program budget is a representative cost estimate, which usually hinges on a particular estimation approach, or methodology. Therefore, appropriate selection of specific cost models, <span class="hlt">methods</span> and tools is paramount, a difficult task given the highly variable nature, scope, and scientific and technical requirements applicable to each program. Numerous <span class="hlt">methods</span>, models and tools exist. However, new ways are needed to address very early, pre-Phase 0 cost estimation during the initial program research and establishment phase, when system specifications are limited but the available research budget needs to be established and defined. Due to their specificity, for vehicles such as reusable launchers with a manned capability, a lack of historical data implies that using either the classic heuristic approach, such as parametric cost estimation based on underlying CERs, or the analogy approach is, by definition, limited. This review identifies prominent cost estimation models <span class="hlt">applied</span> to the space sector, and their underlying cost driving parameters and factors. Strengths, weaknesses, and suitability to specific mission types and classes are also highlighted. 
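To make the parametric approach concrete, here is a minimal sketch of a cost estimating relationship (CER) of the common power-law form cost = a · mass^b, fitted by least squares in log-log space. The single mass driver and the data are hypothetical simplifications; real CERs use many cost drivers:

```python
import numpy as np

def fit_cer(masses, costs):
    """Fit a power-law CER, cost = a * mass**b, by ordinary least
    squares on log(cost) vs. log(mass)."""
    b, log_a = np.polyfit(np.log(masses), np.log(costs), 1)
    return float(np.exp(log_a)), float(b)

def estimate_cost(a, b, mass):
    """Evaluate the fitted CER for a new subsystem mass."""
    return a * mass ** b
```

Fitting in log space turns the power law into a straight line, so `np.polyfit` recovers the exponent b as the slope and log(a) as the intercept.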
Current approaches which strategically amalgamate various cost estimation strategies both for formulation and validation</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3534452','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3534452"><span><span class="hlt">Applying</span> knowledge-anchored hypothesis discovery <span class="hlt">methods</span> to advance clinical and translational research: the OAMiner project</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Jackson, Rebecca D; Best, Thomas M; Borlawsky, Tara B; Lai, Albert M; James, Stephen; Gurcan, Metin N</p> <p>2012-01-01</p> <p>The conduct of clinical and translational research regularly involves the use of a variety of heterogeneous and large-scale data resources. Scalable <span class="hlt">methods</span> for the integrative analysis of such resources, particularly when attempting to leverage computable domain knowledge in order to generate actionable hypotheses in a high-throughput manner, remain an open area of research. In this report, we describe both a generalizable design pattern for such integrative knowledge-anchored hypothesis discovery operations and our experience in <span class="hlt">applying</span> that design pattern in the experimental context of a set of driving research questions related to the publicly available Osteoarthritis Initiative data repository. We believe that this ‘test bed’ project and the lessons learned during its execution are both generalizable and representative of common clinical and translational research paradigms. 
PMID:22647689</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/22212688-proceedings-international-conference-mathematics-computational-methods-applied-nuclear-science-engineering','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22212688-proceedings-international-conference-mathematics-computational-methods-applied-nuclear-science-engineering"><span>Proceedings of the 2013 International Conference on Mathematics and Computational <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Nuclear Science and Engineering - M and C 2013</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>NONE</p> <p>2013-07-01</p> <p>The Mathematics and Computation Division of the American Nuclear Society (ANS) and the Idaho Section of the ANS hosted the 2013 International Conference on Mathematics and Computational <span class="hlt">Methods</span> <span class="hlt">Applied</span> to Nuclear Science and Engineering (M and C 2013). 
These proceedings contain over 250 full papers, with topics spanning reactor physics; radiation transport; materials science; nuclear fuels; core performance and optimization; reactor systems and safety; fluid dynamics; medical applications; analytical and numerical <span class="hlt">methods</span>; algorithms for advanced architectures; and validation, verification, and uncertainty quantification.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013ISPAr.XL5b.143C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013ISPAr.XL5b.143C"><span>a 3d GIS <span class="hlt">Method</span> <span class="hlt">Applied</span> to Cataloging and Restoring: the Case of Aurelian Walls at Rome</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Canciani, M.; Ceniccola, V.; Messi, M.; Saccone, M.; Zampilli, M.</p> <p>2013-07-01</p> <p>The project involves architecture, archaeology, restoration, graphic documentation and computer imaging. The objective is the development of a <span class="hlt">method</span> for documentation of an architectural feature, based on a three-dimensional model obtained through laser scanning technologies, linked to a database developed in a GIS environment. The case study concerns a short section of Rome's Aurelian walls, including the Porta Latina. The city walls are Rome's largest single architectural monument, subject to continuous deterioration, modification and maintenance since their original construction beginning in 271 AD. The documentation system provides a flexible, precise and easily-<span class="hlt">applied</span> instrument for recording the full appearance, materials, stratification palimpsest and conservation status, in order to identify restoration criteria and intervention priorities, and to monitor and control the use and conservation of the walls over time. 
The project began with an analysis and documentation campaign integrating direct, traditional recording <span class="hlt">methods</span> with indirect recording using topographic instruments and 3D laser scanning. These recording systems permitted development of a geographic information system based on three-dimensional modelling of separate, individual elements, linked to a database and related to the various stratigraphic horizons, the construction techniques, the component materials and their state of degradation. The investigations of the extant wall fabric were further compared to historic documentation, from both graphic and descriptive sources. The resulting model constitutes the core of the GIS system for this specific monument. The methodology is notable for its low cost, precision, practicality and thoroughness, and can be <span class="hlt">applied</span> to the entire Aurelian wall and to other monuments.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EPJWC.14015024M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EPJWC.14015024M"><span><span class="hlt">Methods</span> of parallel computation <span class="hlt">applied</span> on granular simulations</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Martins, Gustavo H. B.; Atman, Allbens P. F.</p> <p>2017-06-01</p> <p>Every year, parallel computing becomes cheaper and more accessible, and as a consequence its applications are spreading across all research areas. Granular materials are a promising area for parallel computing. To prove this statement, we study the impact of parallel computing on simulations of the BNE (Brazil Nut Effect): the remarkable rising of an intruder confined in a granular medium when vertically shaken against gravity. 
By means of DEM (Discrete Element <span class="hlt">Method</span>) simulations, we study code performance by testing different <span class="hlt">methods</span> to improve wall-clock time. A comparison between serial and parallel algorithms, using OpenMP®, is also shown. The best improvement was obtained by optimizing the function that finds contacts using Verlet's cells.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1943b0061C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1943b0061C"><span>Effect of processing parameters on FDM process</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chari, V. Srinivasa; Venkatesh, P. R.; Krupashankar, Dinesh, Veena</p> <p>2018-04-01</p> <p>This paper focuses on the process parameters of fused deposition modeling (FDM). Infill, resolution and temperature are the process variables considered for the experimental studies; compression strength, hardness and microstructure are the outcome parameters. The experimental study is based on <span class="hlt">Taguchi</span>'s L9 orthogonal array. The <span class="hlt">Taguchi</span> array is used to build the nine different models and to obtain the output results for the parameters under study. 
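The L9 design mentioned above, and in the Taguchi studies throughout this page, can be written down directly. The sketch below builds the standard L9(3^4) orthogonal array and computes a larger-the-better Taguchi signal-to-noise ratio with factor main effects; it is a generic illustration, not any of the authors' actual analyses:

```python
import numpy as np

# Standard L9 (3^4) orthogonal array: 9 runs, up to 4 three-level
# factors; every pair of columns contains each level pair exactly once.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

def sn_larger_is_better(y):
    """Taguchi S/N ratio for a larger-the-better response, in dB:
    -10 * log10(mean(1 / y_i**2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def main_effects(sn, column):
    """Mean S/N ratio at each level of one factor (a column of L9);
    the best level is the one with the highest mean S/N."""
    sn = np.asarray(sn, dtype=float)
    levels = L9[:, column]
    return {lvl: sn[levels == lvl].mean() for lvl in (1, 2, 3)}
```

Each run's response (e.g. compression strength) is converted to an S/N ratio, and comparing mean S/N per level identifies the most robust setting of each factor.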
The material used for this experimental study is Polylactic Acid (PLA).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/867910','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/867910"><span>Metal alloy coatings and <span class="hlt">methods</span> for <span class="hlt">applying</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Merz, Martin D.; Knoll, Robert W.</p> <p>1991-01-01</p> <p>A <span class="hlt">method</span> of coating a substrate comprises plasma spraying a prealloyed feed powder onto a substrate, where the prealloyed feed powder comprises a significant amount of an alloy of stainless steel and at least one refractory element selected from the group consisting of titanium, zirconium, hafnium, niobium, tantalum, molybdenum, and tungsten. The plasma spraying of such a feed powder is conducted in an oxygen containing atmosphere and forms an adherent, corrosion resistant, and substantially homogenous metallic refractory alloy coating on the substrate.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011OptLE..49..632G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011OptLE..49..632G"><span>Optical <span class="hlt">method</span> of caustics <span class="hlt">applied</span> in viscoelastic fracture analysis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gao, Guiyun; Li, Zheng; Xu, Jie</p> <p>2011-05-01</p> <p>The optical <span class="hlt">method</span> of caustics is developed here to study the fracture of viscoelastic materials. By adopting a distribution of viscoelastic stress fields near the crack tip, the <span class="hlt">method</span> of caustics is used to determine the viscoelastic fracture parameters from the caustic patterns near the crack tip. 
Two viscoelastic materials are studied. These are PMMA and ternary composites of HDPE/POE-g-MA/CaCO3. The transmitted and reflective <span class="hlt">methods</span> of caustics are performed separately to investigate viscoelastic fracture behaviors. The stress intensity factor (SIF) versus time is determined by a series of shadow spot patterns combined with viscoelastic parameters evaluated by creep tests. In order to understand the viscoelastic fracture mechanisms of HDPE/POE-g-MA/CaCO3 composites, their fracture surfaces are observed by a Scanning Electron Microscope (SEM). The results indicate that the <span class="hlt">method</span> of caustics can be used to characterize the fracture behaviors of viscoelastic materials and further to optimize the design of polymer composites.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A13H2198R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A13H2198R"><span><span class="hlt">Applying</span> the recurrence quantification analysis <span class="hlt">method</span> for analyzing the recurrence of simulated multiple African easterly waves in 2006</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Reyes, T.; Shen, B. W.; Wu, Y.; Faghih-Naini, S.; Li, J.</p> <p>2017-12-01</p> <p>In late August 2006, six African easterly waves (AEWs) appeared sequentially over the African continent during a 30-day period. With a global model of 1/4 degree resolution, statistics of these AEWs were realistically captured. More interestingly, the formation, subsequent intensification, and movement of Hurricane Helene (2006) were simulated to a degree of satisfaction during the model integration from Day 22 to 30 (Shen et al., 2010). We then developed a parallel ensemble empirical mode decomposition <span class="hlt">method</span> (PEEMD; Shen et al. 2012; 2017; Cheung et al. 
2013) to reveal the role of downscaling processes associated with the environmental flows in determining the timing and location of Helene's formation (Wu and Shen, 2016), supporting its practical predictability at extended-range time scales. Recently, further analysis of the correlation coefficients (CCs) between the simulated temperature and reanalysis data showed that CCs are above 0.65 during the 30 day simulations but display oscillations. While high CCs are consistent with the accurate simulations of the AEWs and Hurricane Helene, oscillations may indicate the inaccurate simulations of moving speeds (i.e., an inaccurate phase) as compared to observations. The observed AEWs have comparable but slightly different periods. To quantitatively examine this space-varying feature in observations and the temporal oscillations in the CCs of the simulations, we select recurrence quantification analysis (RQA) <span class="hlt">methods</span> and the recurrence plot (RP) in order to account for the local nature of these features. A recurrence is defined when the trajectory returns back to the neighborhood of a previously visited state. With the RQA <span class="hlt">methods</span>, we can compute the "recurrence rate" and "determinism" present in the RP in order to reveal the degree of recurrence and determinism (or "predictability") of the recurrent solutions. 
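The two RQA measures named above, recurrence rate and determinism, can be sketched for a scalar time series. This is a generic textbook-style implementation, not the authors' Python code; the threshold eps and minimal diagonal length lmin are assumed parameters:

```python
import numpy as np

def recurrence_plot(x, eps):
    """R[i, j] = 1 when states i and j are within eps of each other.
    For a scalar series the distance is |x_i - x_j|; for embedded
    state vectors a state-space norm would be used instead."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points in the recurrence plot."""
    return R.mean()

def determinism(R, lmin=2):
    """Fraction of off-diagonal recurrence points lying on diagonal
    lines of length >= lmin (a proxy for deterministic dynamics)."""
    n = R.shape[0]
    in_lines = 0
    for k in range(1, n):                  # diagonals above the main one
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:  # 0 flushes runs
            if v:
                run += 1
            else:
                if run >= lmin:
                    in_lines += run
                run = 0
    total = (R.sum() - np.trace(R)) / 2    # recurrent points above diag
    return in_lines / total if total else 0.0
```

A strictly periodic signal yields determinism 1, since every recurrent point lies on an unbroken diagonal line.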
To verify our implementations in Python, we <span class="hlt">applied</span> our <span class="hlt">methods</span> to analyze idealized solutions (e</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010EGUGA..1215526P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010EGUGA..1215526P"><span>Gliding Box <span class="hlt">method</span> <span class="hlt">applied</span> to trace element distribution of a geochemical data set</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Paz González, Antonio; Vidal Vázquez, Eva; Rosario García Moreno, M.; Paz Ferreiro, Jorge; Saa Requejo, Antonio; María Tarquis, Ana</p> 
<p>2010-05-01</p> <p>The application of fractal theory to process geochemical prospecting data can provide useful information for evaluating mineralization potential. A geochemical survey was carried out in the west area of Coruña province (NW Spain). Major elements and trace elements were determined by standard analytical techniques. It is well known that there are specific elements, or arrays of elements, which are associated with specific types of mineralization. Arsenic has been used to evaluate the metallogenetic importance of the studied zone. Moreover, As can be considered a pathfinder for Au, as these two elements are genetically associated. The main objective of this study was to use multifractal analysis to characterize the distribution of three trace elements, namely Au, As, and Sb. Concerning the local geology, the study area comprises predominantly acid rocks, mainly alkaline and calcalkaline granites, gneiss and migmatites. The most significant structural feature of this zone is the presence of a mylonitic band, with an approximate NE-SW orientation. The data set used in this study comprises 323 samples collected, with standard geochemical criteria, preferentially in the B horizon of the soil. Occasionally, where this horizon was not present, samples were collected from the C horizon. Samples were taken in a rectilinear grid. The sampling lines were perpendicular to the NE-SW tectonic structures. Frequency distributions of the studied elements departed from normality. Coefficients of variation ranked as follows: Sb < As < Au. Significant correlation coefficients between Au, Sb, and As were found, even if these were low. The so-called ‘gliding box’ algorithm (GB), proposed originally for lacunarity analysis, has been extended to multifractal modelling and provides an alternative to the ‘box-counting’ <span class="hlt">method</span> for implementing multifractal analysis. 
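The gliding-box sampling is simple to state in code. The sketch below computes box masses and the classical gliding-box lacunarity (the multifractal extension replaces the second moment with general moments of order q); it is an illustration of the algorithm, not the authors' implementation:

```python
import numpy as np

def gliding_box_masses(grid, a):
    """Box 'masses' (sums) for every position of an a x a box glided
    one cell at a time over a 2-D grid: the gliding-box sampling."""
    g = np.asarray(grid, dtype=float)
    n, m = g.shape
    return np.array([g[i:i + a, j:j + a].sum()
                     for i in range(n - a + 1)
                     for j in range(m - a + 1)])

def lacunarity(grid, a):
    """Gliding-box lacunarity L(a) = <M^2> / <M>^2 of the box masses;
    L = 1 for a homogeneous map and grows with spatial clustering."""
    masses = gliding_box_masses(grid, a)
    return float((masses ** 2).mean() / masses.mean() ** 2)
```

Unlike box-counting, the boxes overlap, so every grid position contributes a sample and the mass statistics are much smoother for small maps.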
The partitioning <span class="hlt">method</span> <span class="hlt">applied</span> in the GB algorithm constructs samples by gliding a box of a certain size (a) over the grid map in all</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26494010','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26494010"><span><span class="hlt">Applying</span> systematic review search <span class="hlt">methods</span> to the grey literature: a case study examining guidelines for school-based breakfast programs in Canada.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Godin, Katelyn; Stapleton, Jackie; Kirkpatrick, Sharon I; Hanning, Rhona M; Leatherdale, Scott T</p> <p>2015-10-22</p> <p>Grey literature is an important source of information for large-scale review syntheses. However, there are many characteristics of grey literature that make it difficult to search systematically. Further, there is no 'gold standard' for rigorous systematic grey literature search <span class="hlt">methods</span> and few resources on how to conduct this type of search. This paper describes systematic review search <span class="hlt">methods</span> that were developed and <span class="hlt">applied</span> to complete a case study systematic review of grey literature that examined guidelines for school-based breakfast programs in Canada. A grey literature search plan was developed to incorporate four different searching strategies: (1) grey literature databases, (2) customized Google search engines, (3) targeted websites, and (4) consultation with contact experts. These complementary strategies were used to minimize the risk of omitting relevant sources. Since abstracts are often unavailable in grey literature documents, items' abstracts, executive summaries, or tables of contents (whichever was available) were screened. Screening of publications' full text followed. 
Data were extracted on the organization, year published, who developed them, intended audience, goal/objectives of the document, sources of evidence/resources cited, meals mentioned in the guidelines, and recommendations for program delivery. The search strategies for identifying and screening publications for inclusion in the case study review were found to be manageable, comprehensive, and intuitive when <span class="hlt">applied</span> in practice. The four search strategies of the grey literature search plan yielded 302 potentially relevant items for screening. Following the screening process, 15 publications that met all eligibility criteria remained and were included in the case study systematic review. The high-level findings of the case study systematic review are briefly described. This article demonstrated a feasible and seemingly robust <span class="hlt">method</span> for <span class="hlt">applying</span> systematic search strategies to</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/4206339','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/biblio/4206339"><span><span class="hlt">METHOD</span> OF <span class="hlt">APPLYING</span> NICKEL COATINGS ON URANIUM</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Gray, A.G.</p> <p>1959-07-14</p> <p>A <span class="hlt">method</span> is presented for protectively coating uranium which comprises etching the uranium in an aqueous etching solution containing chloride ions, electroplating a coating of nickel on the etched uranium and heating the nickel plated uranium by immersion thereof in a molten bath composed of a material selected from the group consisting of sodium chloride, potassium chloride, lithium chloride, and mixtures thereof, maintained at a temperature of between 700 and 800 deg C, for a time sufficient to alloy the nickel and uranium and form an integral protective coating of corrosion-resistant 
uranium-nickel alloy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1943b0064B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1943b0064B"><span>Performance of Ti-multilayer coated tool during machining of MDN431 alloyed steel</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Badiger, Pradeep V.; Desai, Vijay; Ramesh, M. R.</p> <p>2018-04-01</p> <p>Turbine forgings and other components must be highly resistant to corrosion and oxidation, which is why they are highly alloyed with Ni and Cr. Midhani manufactures one such material, MDN431, a hard-to-machine steel with high hardness and strength. PVD-coated inserts provide an answer to this problem through state-of-the-art coating techniques on the WC tool. Machinability studies are carried out on MDN431 steel using uncoated and Ti-multilayer-coated WC tool inserts and the <span class="hlt">Taguchi</span> optimisation technique. During the present investigation, speed (398-625 rpm), feed (0.093-0.175 mm/rev), and depth of cut (0.2-0.4 mm) were varied according to a <span class="hlt">Taguchi</span> L9 orthogonal array, and the cutting forces and surface roughness (Ra) were subsequently measured. The obtained results are optimized using the <span class="hlt">Taguchi</span> technique for cutting forces and surface roughness, and a linear-fit regression model is developed for each combination of input variables. The experimental results are compared, and the developed model is found to be adequate, as supported by proof trials. For the uncoated insert, cutting force and surface roughness depend linearly on speed, feed and depth of cut, whereas for the coated insert the dependence on speed and depth of cut is inverse for both cutting force and surface roughness. 
The machined surfaces of coated and uncoated inserts during machining of MDN431 are studied using an optical profilometer.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFMNS41A1900C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFMNS41A1900C"><span>Improved Cluster <span class="hlt">Method</span> <span class="hlt">Applied</span> to the InSAR data of the 2007 Piton de la Fournaise eruption</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cayol, V.; Augier, A.; Froger, J. L.; Menassian, S.</p> <p>2016-12-01</p> <p>Interpretation of surface displacement induced by reservoirs, whether magmatic, hydrothermal or gaseous, can be done at reduced numerical cost and with little a priori knowledge using cluster <span class="hlt">methods</span>, where reservoirs are represented by point sources embedded in an elastic half-space. Most of the time, the solution representing the best trade-off between the data fit and the model smoothness (L-curve criterion) is chosen. This study relies on synthetic tests to improve cluster <span class="hlt">methods</span> in several ways. Firstly, to solve problems involving steep topographies, we construct unit sources numerically. Secondly, we show that the L-curve criterion leads to several plausible solutions, where the most realistic are not necessarily the best fitting. We determine that the cross-validation <span class="hlt">method</span>, with data geographically grouped, is a more reliable way to determine the solution. Thirdly, we propose a new <span class="hlt">method</span>, based on ranking sources according to their contribution and minimization of the Akaike information criterion, to retrieve reservoirs' geometry more accurately and to better reflect information contained in the data. 
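For intuition about the point sources that cluster methods assemble, the classical analytic baseline in a flat elastic half-space is the Mogi source; the abstract's unit sources are instead built numerically to handle steep topography, so the sketch below is only the textbook flat-topography formula, not the authors' construction:

```python
import numpy as np

def mogi_displacement(r, depth, dV, nu=0.25):
    """Surface displacement of a Mogi point pressure source in a
    flat elastic half-space.

    r     : horizontal distance(s) from the source axis
    depth : source depth
    dV    : source volume change
    nu    : Poisson's ratio
    Returns (u_r, u_z): radial and vertical surface displacement,
    both proportional to (1 - nu) * dV / pi and decaying as 1/R^3.
    """
    r = np.asarray(r, dtype=float)
    R3 = (r ** 2 + depth ** 2) ** 1.5
    c = (1.0 - nu) * dV / np.pi
    return c * r / R3, c * depth / R3
```

A deflating source (negative dV) produces the bowl-shaped subsidence pattern that InSAR records over a contracting reservoir.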
We show that the solution is robust in the presence of correlated noise and that the reservoir complexity that can be retrieved decreases with increasing noise. We also show that it is inappropriate to use cluster <span class="hlt">methods</span> for pressurized fractures. Finally, the <span class="hlt">method</span> is <span class="hlt">applied</span> to the summit deflation recorded by InSAR after the caldera collapse which occurred at Piton de la Fournaise in April 2007. Comparison with other data indicates that the deflation is probably related to poro-elastic compaction and fluid flow subsequent to the crater collapse.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JHyd..533..343Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JHyd..533..343Z"><span><span class="hlt">Applying</span> a weighted random forests <span class="hlt">method</span> to extract karst sinkholes from LiDAR data</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhu, Junfeng; Pierskalla, William P.</p> <p>2016-02-01</p> <p>Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve locating and delineating sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of the depressions commonly generated from LiDAR data. In this study, we <span class="hlt">applied</span> random forests, a machine learning <span class="hlt">method</span>, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. 
The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests <span class="hlt">method</span> was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes using LiDAR data for large areas. 
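The "weighted" idea used above for the imbalanced depression dataset usually amounts to inverse-frequency class weights. The helper below computes the common 'balanced' heuristic; it is an illustration of the weighting scheme, not the study's actual training pipeline:

```python
import numpy as np

def balanced_class_weights(labels):
    """Inverse-frequency class weights, w_c = n / (k * n_c), for k
    classes: the rare class (here, true sinkholes among LiDAR-derived
    depressions) receives proportionally more weight, so both classes
    contribute equally to the weighted training loss."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    w = labels.size / (classes.size * counts.astype(float))
    return dict(zip(classes.tolist(), w.tolist()))
```

In scikit-learn, for example, such a dict can be passed to `RandomForestClassifier(class_weight=...)`, or the shortcut `class_weight='balanced'` applies the same heuristic internally.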
However, the random forests <span class="hlt">method</span> cannot totally replace manual procedures, such as visual inspection and field verification.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25107866','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25107866"><span><span class="hlt">Applying</span> quantitative benefit-risk analysis to aid regulatory decision making in diagnostic imaging: <span class="hlt">methods</span>, challenges, and opportunities.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Agapova, Maria; Devine, Emily Beth; Bresnahan, Brian W; Higashi, Mitchell K; Garrison, Louis P</p> <p>2014-09-01</p> <p>Health agencies making regulatory marketing-authorization decisions use qualitative and quantitative approaches to assess expected benefits and expected risks associated with medical interventions. There is, however, no universal standard approach that regulatory agencies consistently use to conduct benefit-risk assessment (BRA) for pharmaceuticals or medical devices, including for imaging technologies. Economics, health services research, and health outcomes research use quantitative approaches to elicit preferences of stakeholders, identify priorities, and model health conditions and health intervention effects. Challenges to BRA in medical devices are outlined, highlighting additional barriers in radiology. Three quantitative <span class="hlt">methods</span> (multi-criteria decision analysis, health outcomes modeling, and stated-choice survey) are assessed using criteria that are important in balancing benefits and risks of medical devices and imaging technologies. 
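Of the three methods just listed, multi-criteria decision analysis is the simplest to sketch: an additive weighted score over criteria. The criteria names, scores, and weights below are hypothetical placeholders, not values from the paper:

```python
def weighted_score(scores, weights):
    """Additive multi-criteria decision analysis (MCDA) value:
    sum of weight * normalized score over the criteria.  Scores are
    assumed normalized to [0, 1] with 1 best (risk criteria entered
    as 'freedom from risk'); the weights must sum to 1."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("criterion weights must sum to 1")
    return sum(weights[c] * scores[c] for c in weights)
```

Ranking alternatives then reduces to comparing their weighted scores, which makes the value trade-offs between benefit and risk criteria explicit.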
To be useful in regulatory BRA, quantitative <span class="hlt">methods</span> need to: aggregate multiple benefits and risks, incorporate qualitative considerations, account for uncertainty, and make clear whose preferences/priorities are being used. Each quantitative <span class="hlt">method</span> performs differently across these criteria and little is known about how BRA estimates and conclusions vary by approach. While no specific quantitative <span class="hlt">method</span> is likely to be the strongest in all of the important areas, quantitative <span class="hlt">methods</span> may have a place in BRA of medical devices and radiology. Quantitative BRA approaches have been more widely <span class="hlt">applied</span> in medicines, with fewer BRAs in devices. Despite substantial differences in characteristics of pharmaceuticals and devices, BRA <span class="hlt">methods</span> may be as applicable to medical devices and imaging technologies as they are to pharmaceuticals. Further research to guide the development and selection of quantitative BRA <span class="hlt">methods</span> for medical devices and imaging technologies is needed. Copyright © 2014 AUR. Published by Elsevier Inc. 
All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27183251','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27183251"><span>Postgraduate Education in Quality Improvement <span class="hlt">Methods</span>: Initial Results of the Fellows' <span class="hlt">Applied</span> Quality Training (FAQT) Curriculum.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Winchester, David E; Burkart, Thomas A; Choi, Calvin Y; McKillop, Matthew S; Beyth, Rebecca J; Dahm, Phillipp</p> <p>2016-06-01</p> <p>Training in quality improvement (QI) is a pillar of the next accreditation system of the Accreditation Council for Graduate Medical Education and a growing expectation of physicians for maintenance of certification. Despite this, many postgraduate medical trainees are not receiving training in QI <span class="hlt">methods</span>. We created the Fellows' <span class="hlt">Applied</span> Quality Training (FAQT) curriculum for cardiology fellows, using both didactic and <span class="hlt">applied</span> components, with the goal of increasing confidence to participate in future QI projects. Fellows completed didactic training from the Institute for Healthcare Improvement's Open School and then designed and completed a project to improve quality of care or patient safety. Self-assessments were completed by the fellows before, during, and after the first year of the curriculum. The primary outcome for our curriculum was the median score reported by the fellows regarding their self-confidence to complete QI activities. Self-assessments were completed by 23 fellows. The majority of fellows (15 of 23, 65.2%) reported no prior formal QI training. The median score on the baseline self-assessment was 3.0 (range, 1.85-4), which increased significantly to 3.27 (range, 2.23-4; P = 0.004) on the final assessment. 
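The primary outcome above is a median self-confidence score compared before and after the curriculum; with hypothetical Likert-style scores (the study reports only medians and ranges, not individual values), the computation is simply:

```python
from statistics import median

# Hypothetical self-confidence scores (1-4 scale) for a small cohort,
# before and after an applied QI curriculum; invented for illustration.
before = [1.85, 2.5, 3.0, 3.2, 4.0]
after = [2.23, 3.0, 3.27, 3.5, 4.0]

shift = median(after) - median(before)
print(f"median before={median(before)}, after={median(after)}, shift={shift:+.2f}")
```

A paired nonparametric test (e.g., Wilcoxon signed-rank) would be the usual way to attach a P value to such a shift.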
The distribution of scores reported by the fellows indicates that 30% were slightly confident in conducting QI activities on their own, which was reduced to 5% after completing the FAQT curriculum. An interim assessment was conducted after the fellows completed didactic training only; median scores were not different from the baseline (mean, 3.0; P = 0.51). After completion of the FAQT, cardiology fellows reported higher self-confidence to complete QI activities. The increase in self-confidence seemed to be limited to the <span class="hlt">applied</span> component of the curriculum, with no significant change after the didactic component.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/EJ1164300.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/EJ1164300.pdf"><span><span class="hlt">Applied</span> Epistemology and Understanding in Information Studies</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Gorichanaz, Tim</p> <p>2017-01-01</p> <p>Introduction: <span class="hlt">Applied</span> epistemology allows information studies to benefit from developments in philosophy. In information studies, epistemic concepts are rarely considered in detail. This paper offers a review of several epistemic concepts, focusing on understanding, as a call for further work in <span class="hlt">applied</span> epistemology in information studies. 
<span class="hlt">Method</span>:…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19840011117','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19840011117"><span>Optimization <span class="hlt">methods</span> <span class="hlt">applied</span> to hybrid vehicle design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Donoghue, J. F.; Burghart, J. H.</p> <p>1983-01-01</p> <p>The use of optimization <span class="hlt">methods</span> as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. 
Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization <span class="hlt">methods</span> provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20020015800','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20020015800"><span>System Identification and POD <span class="hlt">Method</span> <span class="hlt">Applied</span> to Unsteady Aerodynamics</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Tang, Deman; Kholodar, Denis; Juang, Jer-Nan; Dowell, Earl H.</p> <p>2001-01-01</p> <p>The representation of unsteady aerodynamic flow fields in terms of global aerodynamic modes has proven to be a useful <span class="hlt">method</span> for reducing the size of the aerodynamic model over those representations that use local variables at discrete grid points in the flow field. Eigenmodes and Proper Orthogonal Decomposition (POD) modes have been used for this purpose with good effect. This suggests that system identification models may also be used to represent the aerodynamic flow field. Implicit in the use of a system identification technique is the notion that a relatively small state-space model can be useful in describing a dynamical system. The POD model is first used to show that indeed a reduced-order model can be obtained from a much larger numerical aerodynamic model (the vortex lattice <span class="hlt">method</span> is used for illustrative purposes) and the results from the POD and the system identification <span class="hlt">methods</span> are then compared. 
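The POD reduction described above can be sketched with a singular value decomposition of a snapshot matrix. The synthetic data below stands in for vortex-lattice output (the matrix is built to be low-rank plus noise); the left singular vectors are the POD modes, and the singular values show how few modes capture nearly all of the energy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: each column is a flow-field sample at one time step.
# Built here from 3 underlying spatial modes so the field is genuinely
# low-rank plus a little noise.
n_points, n_snapshots, rank = 200, 50, 3
spatial = rng.standard_normal((n_points, rank))
temporal = rng.standard_normal((rank, n_snapshots))
snapshots = spatial @ temporal + 1e-6 * rng.standard_normal((n_points, n_snapshots))

# POD modes are the left singular vectors; singular values rank their
# energy content.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1  # modes needed for 99.9% energy

reduced = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
error = np.linalg.norm(snapshots - reduced) / np.linalg.norm(snapshots)
```

The same reduced basis can then feed a small state-space (system identification) model, which is the comparison the paper draws.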
For the example considered, the two <span class="hlt">methods</span> are shown to give comparable results in terms of accuracy and reduced model size. The advantages and limitations of each approach are briefly discussed. Both appear promising and complementary in their characteristics.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25786520','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25786520"><span>Further development of LLNA:DAE <span class="hlt">method</span> as stand-alone skin-sensitization testing <span class="hlt">method</span> and <span class="hlt">applied</span> for evaluation of relative skin-sensitizing potency between chemicals.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yamashita, Kunihiko; Shinoda, Shinsuke; Hagiwara, Saori; Itagaki, Hiroshi</p> <p>2015-04-01</p> <p>To date, there has been no well-established local lymph node assay (LLNA) that includes an elicitation phase. Therefore, we developed, and previously reported, a modified local lymph node assay with an elicitation phase (LLNA:DAE) to discriminate true skin sensitizers from chemicals that gave borderline positive results. To develop the LLNA:DAE <span class="hlt">method</span> as a useful stand-alone testing <span class="hlt">method</span>, we investigated the complete procedure for the LLNA:DAE <span class="hlt">method</span> using hexyl cinnamic aldehyde (HCA), isoeugenol, and 2,4-dinitrochlorobenzene (DNCB) as test compounds. We defined the LLNA:DAE procedure as follows: in the dose-finding test, four concentrations of the test chemical are <span class="hlt">applied</span> to the dorsum of the right ear on days 1, 2, and 3, and to the dorsum of both ears on day 10. Ear thickness and skin irritation scores are measured on days 1, 3, 5, 10, and 12. Local lymph nodes are excised and weighed on day 12. 
The test dose for the primary LLNA:DAE study was selected as the dose that gave the highest left-ear lymph node weight in the dose-finding study, or the lowest dose that produced a left-ear lymph node of over 4 mg. This procedure was validated using nine different chemicals. Furthermore, a qualitative relationship was observed between the degree of elicitation response in the left-ear lymph node and the skin-sensitizing potency of the 32 chemicals tested in this study and the previous study. These results indicated that the LLNA:DAE <span class="hlt">method</span> is the first LLNA <span class="hlt">method</span> able to evaluate skin-sensitizing potential and potency through the elicitation response.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3198141','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3198141"><span>Impact of gene patents on diagnostic testing: a new patent landscaping <span class="hlt">method</span> <span class="hlt">applied</span> to spinocerebellar ataxia</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Berthels, Nele; Matthijs, Gert; Van Overwalle, Geertrui</p> <p>2011-01-01</p> <p>Recent reports in Europe and the United States raise concern about the potential negative impact of gene patents on the freedom to operate of diagnosticians and on the access of patients to genetic diagnostic services. Patents, historically seen as legal instruments to trigger innovation, could cause undesired side effects in the public health domain. Clear empirical evidence on the alleged hindering effect of gene patents is still scarce. We therefore developed a patent categorization <span class="hlt">method</span> to determine which gene patents could indeed be problematic. 
The <span class="hlt">method</span> is <span class="hlt">applied</span> to patents relevant for genetic testing of spinocerebellar ataxia (SCA). The SCA test is probably the most widely used DNA test in (adult) neurology, as well as one of the most challenging due to the heterogeneity of the disease. SCA is typically tested as a gene panel covering the five common subtypes, and we show that the patenting of SCA genes and testing <span class="hlt">methods</span>, and the associated licensing conditions, could have far-reaching consequences for legitimate access to this gene panel. Moreover, with genetic testing being increasingly standardized, simply ignoring patents is unlikely to hold out indefinitely. This paper aims to differentiate among so-called ‘gene patents’ by lifting out the truly problematic ones. In doing so, awareness is raised among all stakeholders in the genetic diagnostics field who are not necessarily familiar with the ins and outs of patenting and licensing. PMID:21811306</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/21513168-optimization-micro-metal-injection-molding-using-grey-relational-grade','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21513168-optimization-micro-metal-injection-molding-using-grey-relational-grade"><span>Optimization of Micro Metal Injection Molding By Using Grey Relational Grade</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Ibrahim, M. H. I.; Precision Process Research Group, Dept. of Mechanical and Materials Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia; Muhamad, N.</p> <p>2011-01-17</p> <p>Micro metal injection molding ({mu}MIM), a variant of the MIM process, is a promising <span class="hlt">method</span> for near net-shape production of metallic micro components of complex geometry. 
In this paper, {mu}MIM is <span class="hlt">applied</span> to produce 316L stainless steel micro components. Because of the highly stringent requirements on {mu}MIM properties, the study emphasizes optimization of the process parameters, where the <span class="hlt">Taguchi</span> <span class="hlt">method</span> combined with Grey Relational Analysis (GRA) is implemented, as it represents a novel approach to the investigation of multiple performance characteristics. The basic idea of GRA is to find a grey relational grade (GRG) that can be used to convert a multi-objective case (here, density and strength) into a single-objective case. After considering the form 'the larger the better', the results show that injection time (D) is the most significant parameter, followed by injection pressure (A), holding time (E), mold temperature (C), and injection temperature (B). Analysis of variance (ANOVA) is also employed to confirm the significance of each parameter involved in this study.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900019196','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900019196"><span>Velocity filtering <span class="hlt">applied</span> to optical flow calculations</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Barniv, Yair</p> <p>1990-01-01</p> <p>Optical flow is a <span class="hlt">method</span> by which a stream of two-dimensional images obtained from a forward-looking passive sensor is used to map the three-dimensional volume in front of a moving vehicle. Passive ranging via optical flow is <span class="hlt">applied</span> here to the helicopter obstacle-avoidance problem. Velocity filtering is used as a field-based <span class="hlt">method</span> to determine range to all pixels in the initial image. 
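The grey relational grade computation described in the {mu}MIM study above follows a standard recipe: normalize each response "larger the better", compute grey relational coefficients against the ideal sequence with distinguishing coefficient ζ = 0.5, and average them into a single grade per trial. A minimal sketch with invented density/strength data (not the study's measurements):

```python
ZETA = 0.5  # distinguishing coefficient, conventionally 0.5

def grey_relational_grades(runs):
    """runs: list of [density, strength] responses, one per orthogonal-
    array trial. Returns one grey relational grade (GRG) per run."""
    n_resp = len(runs[0])
    # "Larger the better" normalization per response column, onto [0, 1].
    normed = []
    for j in range(n_resp):
        col = [r[j] for r in runs]
        lo, hi = min(col), max(col)
        normed.append([(x - lo) / (hi - lo) for x in col])
    # Grey relational coefficient against the ideal sequence (all 1s).
    # Deviations lie in [0, 1], so delta_min = 0 and delta_max = 1.
    grgs = []
    for i in range(len(runs)):
        coeffs = [(0 + ZETA * 1) / (abs(1 - normed[j][i]) + ZETA * 1)
                  for j in range(n_resp)]
        grgs.append(sum(coeffs) / n_resp)  # equal weights on the responses
    return grgs

# Hypothetical density (g/cm^3) and strength (MPa) for four trials.
runs = [[7.60, 480.0], [7.75, 510.0], [7.70, 530.0], [7.55, 465.0]]
grgs = grey_relational_grades(runs)
best = grgs.index(max(grgs))  # trial with the best multi-response trade-off
```

The trial with the highest GRG plays the role of the single-objective optimum; in the study, factor significance on the GRG was then ranked via ANOVA.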
The theoretical understanding and performance analysis of velocity filtering as <span class="hlt">applied</span> to optical flow are expanded, and experimental results are presented.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27857435','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27857435"><span>Numerical <span class="hlt">method</span> of <span class="hlt">applying</span> shadow theory to all regions of multilayered dielectric gratings in conical mounting.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wakabayashi, Hideaki; Asai, Masamitsu; Matsumoto, Keiji; Yamakita, Jiro</p> <p>2016-11-01</p> <p>Nakayama's shadow theory first discussed diffraction by a perfectly conducting grating in a planar mounting. In that theory, a new formulation based on a scattering factor was proposed. This paper focuses on the middle regions of a multilayered dielectric grating placed in a conical mounting. <span class="hlt">Applying</span> the shadow theory to the matrix eigenvalue <span class="hlt">method</span>, we compose new transformation and improved propagation matrices of the shadow theory for conical mounting. Using these matrices and scattering factors, the basic quantities of diffraction amplitudes, we formulate a new description of three-dimensional scattering fields that is valid even for cases where the eigenvalues are degenerate in any region. 
Some numerical examples are given for cases where the eigenvalues are degenerate in the middle regions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1914102I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1914102I"><span>Accuracy assessment of the Precise Point Positioning <span class="hlt">method</span> <span class="hlt">applied</span> for surveys and tracking moving objects in GIS environment</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ilieva, Tamara; Gekov, Svetoslav</p> <p>2017-04-01</p> <p>The Precise Point Positioning (PPP) <span class="hlt">method</span> gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, owing to the precise satellite orbit and clock corrections developed and maintained by the International GNSS Service (IGS). The aim of our current research is the accuracy assessment of the PPP <span class="hlt">method</span> <span class="hlt">applied</span> to surveys and to tracking moving objects in a GIS environment. The PPP data are collected using a software application we developed previously, which allows different sets of attribute data for the measurements and their accuracy to be used. 
The results from the PPP measurements are compared directly within the geospatial database to other sets of terrestrial data: measurements obtained by total stations, and by real-time kinematic and static GNSS.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4920957','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4920957"><span><span class="hlt">Applying</span> Sparse Machine Learning <span class="hlt">Methods</span> to Twitter: Analysis of the 2012 Change in Pap Smear Guidelines. A Sequential Mixed-<span class="hlt">Methods</span> Study</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Godbehere, Andrew; Le, Gem; El Ghaoui, Laurent; Sarkar, Urmimala</p> <p>2016-01-01</p> <p>Background It is difficult to synthesize the vast amount of textual data available from social media websites. Capturing real-world discussions via social media could provide insights into individuals’ opinions and the decision-making process. Objective We conducted a sequential mixed <span class="hlt">methods</span> study to determine the utility of sparse machine learning techniques in summarizing Twitter dialogues. We chose a narrowly defined topic for this approach: cervical cancer discussions over a 6-month time period surrounding a change in Pap smear screening guidelines. <span class="hlt">Methods</span> We <span class="hlt">applied</span> statistical methodologies, known as sparse machine learning algorithms, to summarize Twitter messages about cervical cancer before and after the 2012 change in Pap smear screening guidelines by the US Preventive Services Task Force (USPSTF). All messages containing the search terms “cervical cancer,” “Pap smear,” and “Pap test” were analyzed during: (1) January 1–March 13, 2012, and (2) March 14–June 30, 2012. 
Topic modeling was used to discern the most common topics from each time period and to determine the singular value criterion for each topic. The results from the top 10 relevant topics were then qualitatively coded to determine the efficiency of the clustering <span class="hlt">method</span> in grouping distinct ideas, and how the discussion differed before vs. after the change in guidelines. Results This machine learning <span class="hlt">method</span> was effective in grouping the relevant discussion topics about cervical cancer during the respective time periods (~20% overall irrelevant content in both time periods). Qualitative analysis determined that a significant portion of the top discussion topics in the second time period directly reflected the USPSTF guideline change (eg, “New Screening Guidelines for Cervical Cancer”), and many topics in both time periods addressed basic screening promotion and education (eg, “It is Cervical Cancer Awareness Month! Click the link to see where you can receive a free or low</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JSV...410...35B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JSV...410...35B"><span>Radiation noise of the bearing <span class="hlt">applied</span> to the ceramic motorized spindle based on the sub-source decomposition <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bai, X. T.; Wu, Y. H.; Zhang, K.; Chen, C. Z.; Yan, H. P.</p> <p>2017-12-01</p> <p>This paper mainly focuses on the calculation and analysis of the radiation noise of the angular contact ball bearing <span class="hlt">applied</span> to the ceramic motorized spindle. The dynamic model, containing the main working conditions and structural parameters, is established based on the dynamic theory of rolling bearings. 
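The topic-grouping step above can be illustrated with a tiny non-negative matrix factorization on a toy document-term matrix. The study used sparse machine-learning algorithms whose details the abstract does not give; this NMF with multiplicative updates is a simplified stand-in, on invented data.

```python
import numpy as np

rng = np.random.default_rng(42)

def nmf(X, n_topics, n_iter=200):
    """Tiny NMF via Lee-Seung multiplicative updates: X (docs x terms)
    is approximated by W (docs x topics) @ H (topics x terms); the
    largest entries in each row of H act as topic keywords."""
    n_docs, n_terms = X.shape
    W = rng.random((n_docs, n_topics)) + 1e-3
    H = rng.random((n_topics, n_terms)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy doc-term counts with two obvious "topics" (terms 0-2 vs. terms 3-5),
# standing in for a TF-IDF matrix of tweets.
X = np.array([[2, 1, 3, 0, 0, 0],
              [4, 2, 6, 0, 0, 0],
              [0, 0, 0, 1, 3, 2],
              [0, 0, 0, 2, 6, 4]], dtype=float)
W, H = nmf(X, n_topics=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Each document's dominant topic is the argmax of its row of W; the qualitative coding step in the study would then label those topics by reading their top terms.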
The sub-source decomposition <span class="hlt">method</span> is introduced for the calculation of the radiation noise of the bearing, and a comparative experiment is adopted to check the precision of the <span class="hlt">method</span>. The contributions of the different components are then compared in the frequency domain based on the sub-source decomposition <span class="hlt">method</span>. The spectra of the radiation noise of the different components under various rotation speeds are used as the basis for assessing the contribution of different eigenfrequencies to the radiation noise of the components, and the proportions of friction noise and impact noise are evaluated as well. The results of the research provide a theoretical basis for the calculation of bearing noise, and offer a reference for assessing the impact of different components on the radiation noise of the bearing under different rotation speeds.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/29564','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/29564"><span><span class="hlt">Applying</span> the highway safety manual to Georgia.</span></a></p> <p><a target="_blank" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>2015-08-01</p> <p>This report examines the Highway Safety Manual (HSM) from the perspective of <span class="hlt">applying</span> its <span class="hlt">methods</span> and approaches within the state of Georgia. 
The work presented here focuses specifically on data requirements and <span class="hlt">methods</span> that may be of particular ...</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> </body> </html>