Science.gov

Sample records for progressive design methodology

  1. Progress in the Development of a Nozzle Design Methodology for Pulsed Detonation Engines

    NASA Technical Reports Server (NTRS)

    Leary, B. A.; Waltrup, P. J.; Rice, T.; Cybyk, B. Z.

    2002-01-01

    The Johns Hopkins University Applied Physics Laboratory (JHU/APL), in support of the NASA Glenn Research Center (NASA GRC), is investigating performance methodologies and system integration issues related to Pulsed Detonation Engine (PDE) nozzles. The primary goal of this ongoing effort is to develop design and performance assessment methodologies applicable to PDE exit nozzle(s). APL is currently focusing its efforts on a common plenum chamber design that collects the exhaust products from multiple PDE tubes prior to expansion in a single converging-diverging exit nozzle. To accomplish this goal, a time-dependent, quasi-one-dimensional analysis for determining the flow properties in and through a single plenum and exhaust nozzle is underway. In support of these design activities, parallel modeling efforts using commercial Computational Fluid Dynamics (CFD) software are ongoing. These efforts include both two- and three-dimensional as well as steady and time-dependent computations to assess the flow in and through these devices. This paper discusses the progress in developing this nozzle design methodology.
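
    The quasi-one-dimensional analysis itself is not reproduced in the abstract. As a rough illustration of the kind of relation such an analysis is built on, the sketch below solves the steady isentropic area-Mach relation for a converging-diverging nozzle; the gas model and values are illustrative assumptions, not the APL/GRC methodology.

    # Sketch: supersonic exit Mach number from the isentropic area-Mach relation,
    # the core algebra behind steady quasi-one-dimensional nozzle analysis.
    from scipy.optimize import brentq

    GAMMA = 1.4  # ratio of specific heats (assumed calorically perfect gas)

    def area_ratio(mach, gamma=GAMMA):
        """A/A* as a function of Mach number for isentropic flow."""
        term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach**2)
        return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

    def supersonic_exit_mach(a_exit_over_a_throat):
        """Invert A/A* on the supersonic branch (M > 1)."""
        return brentq(lambda m: area_ratio(m) - a_exit_over_a_throat, 1.0001, 50.0)

    print(supersonic_exit_mach(4.0))  # ~2.94 for A_e/A* = 4 and gamma = 1.4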

  2. Establishing Equivalence: Methodological Progress in Group-Matching Design and Analysis

    ERIC Educational Resources Information Center

    Kover, Sara T.; Atwood, Amy K.

    2013-01-01

    This methodological review draws attention to the challenges faced by intellectual and developmental disabilities researchers in the appropriate design and analysis of group comparison studies. We provide a brief overview of matching methodologies in the field, emphasizing group-matching designs used in behavioral research on cognition and…

  3. A Methodology of Analysis for Monitoring Treatment Progression with 19-Channel Z-Score Neurofeedback (19ZNF) in a Single-Subject Design.

    PubMed

    Krigbaum, Genomary; Wigton, Nancy L

    2015-09-01

    19-Channel Z-Score Neurofeedback (19ZNF) is a modality using 19 electrodes with real-time normative-database z-scores, suggesting effective clinical outcomes in fewer sessions than traditional neurofeedback. Thus, monitoring treatment progression and clinical outcome is necessary. The area of focus in this study was a methodology of quantitative analysis for monitoring treatment progression and clinical outcome with 19ZNF. This methodology, termed Sites-of-Interest, included repeated-measures analyses of variance (rANOVA) and t-tests for z-scores; it was conducted on 10 cases in a single-subject design. To avoid selection bias, the 10 sample cases were randomly selected from a pool of 17 cases that met the inclusion criteria. Available client outcome measures (including self-report) are briefly discussed. The results showed that 90% of the pre-post comparisons moved in the targeted direction (z = 0) and, of those, 96% (80% Bonferroni-corrected) of the t-tests and 96% (91% Bonferroni-corrected) of the rANOVAs were statistically significant, thus indicating a progression towards the mean in 15 or fewer 19ZNF sessions. All cases showed and reported improvement in all outcome measures (including quantitative electroencephalography assessment) at case termination. PMID:25777656
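
    The Sites-of-Interest computation is only named in the abstract; the sketch below shows the t-test half of such a pre/post comparison on fabricated z-scores, testing whether post-treatment scores sit closer to the database mean (z = 0). The data, site count, and effect sizes are invented for illustration.

    # Sketch: pre- vs post-treatment comparison of QEEG z-scores at 19 sites,
    # in the spirit of the Sites-of-Interest analysis (t-test portion only).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    pre = rng.normal(2.5, 0.5, size=19)   # deviant pre-treatment z-scores, one per site
    post = rng.normal(0.8, 0.5, size=19)  # post-treatment z-scores nearer z = 0

    # Movement "in the targeted direction" means |z| shrinks toward 0.
    moved = np.abs(post) < np.abs(pre)
    print(f"{moved.mean():.0%} of sites moved toward z = 0")

    # Paired t-test on |z|; a Bonferroni correction, as reported in the paper,
    # would divide alpha by the number of comparisons.
    t, p = stats.ttest_rel(np.abs(pre), np.abs(post))
    print(f"t = {t:.2f}, p = {p:.4g}")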

  5. Permanent magnet design methodology

    NASA Technical Reports Server (NTRS)

    Leupold, Herbert A.

    1991-01-01

    Design techniques developed for the exploitation of high energy magnetically rigid materials such as Sm-Co and Nd-Fe-B have resulted in a revolution in kind rather than in degree in the design of a variety of electron guidance structures for ballistic and aerospace applications. Salient examples are listed. Several prototype models were developed. These structures are discussed in some detail: permanent magnet solenoids, transverse field sources, periodic structures, and very high field structures.

  6. Solid lubrication design methodology

    NASA Technical Reports Server (NTRS)

    Aggarwal, B. B.; Yonushonis, T. M.; Bovenkerk, R. L.

    1984-01-01

    A single element traction rig was used to measure the traction forces at the contact of a ball against a flat disc at room temperature under combined rolling and sliding. The load and speed conditions were selected to match those anticipated for bearing applications in adiabatic diesel engines. The test program showed that the magnitudes of the traction forces were almost the same for all the lubricants tested; a lubricant should, therefore, be selected on the basis of its ability to prevent wear of the contact surfaces. Traction vs. slide/roll ratio curves were similar to those for liquid lubricants, but the traction forces were an order of magnitude higher. The test data were used to derive equations to predict traction force as a function of contact stress and rolling speed. Qualitative design guidelines for solid lubricated concentrated contacts are proposed.
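
    The derived equations are not quoted in the abstract. As a sketch of the curve-fitting step it describes, the code below fits a power-law model for the traction coefficient as a function of contact stress P and rolling speed U; the model form and data points are assumptions for illustration, not the report's equations.

    # Sketch: fitting an empirical traction model mu = c0 * P**c1 * U**c2
    # to measured (P, U, mu) points, mirroring the curve-fitting step described.
    import numpy as np
    from scipy.optimize import curve_fit

    def traction_model(x, c0, c1, c2):
        P, U = x
        return c0 * P**c1 * U**c2

    # Invented test matrix: contact stress (GPa), rolling speed (m/s), traction coeff.
    P = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
    U = np.array([5.0, 20.0, 5.0, 20.0, 5.0, 20.0])
    mu = np.array([0.30, 0.22, 0.38, 0.28, 0.43, 0.32])

    (c0, c1, c2), _ = curve_fit(traction_model, (P, U), mu, p0=(0.3, 0.3, -0.2))
    print(f"mu ~ {c0:.3f} * P^{c1:.3f} * U^{c2:.3f}")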

  7. Waste Package Design Methodology Report

    SciTech Connect

    D.A. Brownson

    2001-09-28

    The objective of this report is to describe the analytical methods and processes used by the Waste Package Design Section to establish the integrity of the various waste package designs, the emplacement pallet, and the drip shield. The scope of this report shall be the methodology used in criticality, risk-informed, shielding, source term, structural, and thermal analyses. The basic features and appropriateness of the methods are illustrated, and the processes are defined whereby input values and assumptions flow through the application of those methods to obtain designs that ensure defense-in-depth as well as satisfy requirements on system performance. Such requirements include those imposed by federal regulation, from both the U.S. Department of Energy (DOE) and U.S. Nuclear Regulatory Commission (NRC), and those imposed by the Yucca Mountain Project to meet repository performance goals. The report is to be used, in part, to describe the waste package design methods and techniques to be used for producing input to the License Application Report.

  8. MEIC Design Progress

    SciTech Connect

    Zhang, Y; Douglas, D; Hutton, A; Krafft, G A; Li, R; Lin, F; Morozov, V S; Nissen, E W; Pilat, F C; Satogata, T; Tennant, C; Terzic, B; Yunn, C; Barber, D P; Filatov, Y; Hyde, C; Kondratenko, A M; Manikonda, S L; Ostroumov, P N; Sullivan, M K

    2012-07-01

    This paper will report the recent progress in the conceptual design of MEIC, a high luminosity medium energy polarized ring-ring electron-ion collider at Jefferson Lab. The topics and achievements that will be covered are the design of the large ion booster and the ERL-circulator-ring-based electron cooling facility, optimization of chromatic corrections and dynamic aperture studies, schemes and tracking simulations of lepton and ion polarization in the figure-8 collider ring, and the beam-beam and electron cooling simulations. A proposal for a test facility for the MEIC electron cooler will also be discussed.

  9. Space Engineering Projects in Design Methodology

    NASA Technical Reports Server (NTRS)

    Crawford, R.; Wood, K.; Nichols, S.; Hearn, C.; Corrier, S.; DeKunder, G.; George, S.; Hysinger, C.; Johnson, C.; Kubasta, K.

    1993-01-01

    NASA/USRA is an ongoing sponsor of space design projects in the senior design courses of the Mechanical Engineering Department at The University of Texas at Austin. This paper describes the UT senior design sequence, focusing on the first-semester design methodology course. The philosophical basis and pedagogical structure of this course are summarized. A history of the Department's activities in the Advanced Design Program is then presented. The paper includes a summary of the projects completed during the 1992-93 academic year in the methodology course, and concludes with an example of two projects completed by student design teams.

  10. Assuring data transparency through design methodologies

    NASA Technical Reports Server (NTRS)

    Williams, Allen

    1990-01-01

    This paper addresses the role of design methodologies and practices in the assurance of technology transparency. The development of several subsystems on large, long life cycle government programs was analyzed to glean those characteristics in the design, development, test, and evaluation that precluded or enabled the insertion of new technology. The programs examined were Minuteman, DSP, B-1B, and the Space Shuttle. All these were long life cycle, technology-intensive programs. The design methodologies (or lack thereof) and design practices for each were analyzed in terms of the success or failure in incorporating evolving technology. Common elements contributing to the success or failure were extracted and compared to current methodologies being proposed by the Department of Defense and NASA. The relevance of these practices to the design and deployment of Space Station Freedom was evaluated. In particular, appropriate methodologies now being used on the core development contract were examined.

  11. A design methodology for unattended monitoring systems

    SciTech Connect

    SMITH,JAMES D.; DELAND,SHARON M.

    2000-03-01

    The authors presented a high-level methodology for the design of unattended monitoring systems, focusing on a system to detect diversion of nuclear materials from a storage facility. The methodology is composed of seven interrelated analyses: Facility Analysis, Vulnerability Analysis, Threat Assessment, Scenario Assessment, Design Analysis, Conceptual Design, and Performance Assessment. The design of the monitoring system is iteratively improved until it meets a set of pre-established performance criteria. The methodology presented here is based on other, well-established system analysis methodologies, and hence, they believe, it can be adapted to other verification or compliance applications. In order to make this approach more generic, however, there needs to be more work on techniques for establishing evaluation criteria and associated performance metrics. They found that defining general-purpose evaluation criteria for verifying compliance with international agreements was a significant undertaking in itself. They finally focused on diversion of nuclear material in order to simplify the problem so that they could work out an overall approach for the design methodology. However, general guidelines for the development of evaluation criteria are critical for a general-purpose methodology. A poor choice in evaluation criteria could result in a monitoring system design that solves the wrong problem.

  12. General Methodology for Designing Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Condon, Gerald; Ocampo, Cesar; Mathur, Ravishankar; Morcos, Fady; Senent, Juan; Williams, Jacob; Davis, Elizabeth C.

    2012-01-01

    A methodology for designing spacecraft trajectories in any gravitational environment within the solar system has been developed. The methodology facilitates modeling and optimization for problems ranging from that of a single spacecraft orbiting a single celestial body to that of a mission involving multiple spacecraft and multiple propulsion systems operating in gravitational fields of multiple celestial bodies. The methodology consolidates almost all spacecraft trajectory design and optimization problems into a single conceptual framework requiring solution of either a system of nonlinear equations or a parameter-optimization problem with equality and/or inequality constraints.
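
    The generic form the abstract describes, a parameter-optimization problem with constraints, can be shown on a toy two-body example; everything below (the dynamics, radii, and the bi-elliptic parameterization) is an illustrative assumption, not the NASA framework itself.

    # Sketch: transfer-orbit design cast as constrained parameter optimization.
    # Toy problem: bi-elliptic transfer between coplanar circular orbits with
    # the intermediate apoapsis rb as the free design parameter.
    import numpy as np
    from scipy.optimize import minimize

    MU = 398600.4418          # Earth's GM, km^3/s^2
    R1, R2 = 6678.0, 42164.0  # initial (LEO) and final (GEO) radii, km

    def speed(r, rp, ra):
        """Vis-viva speed at radius r on an ellipse with apsides rp, ra."""
        return np.sqrt(MU * (2.0 / r - 2.0 / (rp + ra)))

    def total_dv(x):
        rb = x[0]
        dv1 = speed(R1, R1, rb) - np.sqrt(MU / R1)   # raise apoapsis to rb
        dv2 = speed(rb, R2, rb) - speed(rb, R1, rb)  # raise periapsis to R2
        dv3 = speed(R2, R2, rb) - np.sqrt(MU / R2)   # circularize at R2 (retrograde)
        return dv1 + abs(dv2) + abs(dv3)

    # Inequality constraint: the intermediate apoapsis must reach the target.
    cons = ({"type": "ineq", "fun": lambda x: x[0] - R2},)
    res = minimize(total_dv, x0=[80000.0], constraints=cons, method="SLSQP")
    print(f"rb* = {res.x[0]:.0f} km, total dv = {res.fun:.3f} km/s")
    # For R2/R1 ~ 6.3 the optimizer pushes rb down to R2: the Hohmann transfer.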

  13. Applying Software Design Methodology to Instructional Design

    ERIC Educational Resources Information Center

    East, J. Philip

    2004-01-01

    The premise of this paper is that computer science has much to offer the endeavor of instructional improvement. Software design processes employed in computer science for developing software can be used for planning instruction and should improve instruction in much the same manner that design processes appear to have improved software. Techniques…

  14. Pharmacogenetics of Antipsychotics: Recent Progress and Methodological Issues

    PubMed Central

    Zhang, Jian-Ping; Malhotra, Anil K.

    2012-01-01

    Importance of the field: Antipsychotic drugs are the mainstay of treatment for schizophrenia, and there are large inter-individual differences in clinical response and side effects. Pharmacogenetics provides a valuable tool to fulfill the promise of personalized medicine by tailoring treatment based on one’s genetic markers. Areas covered in this review: This article reviews the recent progress in pharmacogenetic research of antipsychotic drugs since 2010, focusing on two areas: antipsychotic-induced weight gain and clozapine-induced agranulocytosis. Important methodological issues in this area of research are discussed. What the reader will gain: Readers are expected to learn the up-to-date evidence in pharmacogenetic research, and to gain familiarity with the issues and challenges facing the field. Take home message: Pharmacogenetic studies of antipsychotic drugs are promising despite many challenges. Recent advances as reviewed in this article push the field closer to routine clinical utilization of pharmacogenetic testing. Progress in genomic technology and bioinformatics, larger sample sizes, better phenotype characterization, and careful consideration of study design issues will help to elevate antipsychotic pharmacogenetics to its next level. PMID:23199282

  15. Methodological Alignment in Design-Based Research

    ERIC Educational Resources Information Center

    Hoadley, Christopher M.

    2004-01-01

    Empirical research is all about trying to model and predict the world. In this article, I discuss how design-based research methods can help do this effectively. In particular, design-based research methods can help with the problem of methodological alignment: ensuring that the research methods we use actually test what we think they are testing.…

  16. Design methodology and projects for space engineering

    NASA Technical Reports Server (NTRS)

    Nichols, S.; Kleespies, H.; Wood, K.; Crawford, R.

    1993-01-01

    NASA/USRA is an ongoing sponsor of space design projects in the senior design course of the Mechanical Engineering Department at The University of Texas at Austin. This paper describes the UT senior design sequence, consisting of a design methodology course and a capstone design course. The philosophical basis of this sequence is briefly summarized. A history of the Department's activities in the Advanced Design Program is then presented. The paper concludes with a description of the projects completed during the 1991-92 academic year and the ongoing projects for the Fall 1992 semester.

  17. Waste Package Component Design Methodology Report

    SciTech Connect

    D.C. Mecham

    2004-07-12

    This Executive Summary provides an overview of the methodology being used by the Yucca Mountain Project (YMP) to design waste packages and ancillary components. This summary information is intended for readers with general interest, but also provides technical readers a general framework surrounding a variety of technical details provided in the main body of the report. The purpose of this report is to document and ensure that appropriate design methods are used in the design of waste packages and ancillary components (the drip shields and emplacement pallets). The methodology includes identification of necessary design inputs, justification of design assumptions, and use of appropriate analysis methods and computational tools. This design work is subject to "Quality Assurance Requirements and Description". The document is primarily intended for internal use and technical guidance for a variety of design activities. It is recognized that a wide audience, including project management, the U.S. Department of Energy (DOE), the U.S. Nuclear Regulatory Commission, and others, is interested, at various levels of detail, in the design methods; the report therefore covers a wide range of topics at varying levels of detail. Due to the preliminary nature of the design, readers can expect to encounter varied levels of detail in the body of the report. It is expected that technical information used as input to design documents will be verified and taken from the latest versions of the reference sources given herein. This revision of the methodology report has evolved with changes in the waste package, drip shield, and emplacement pallet designs over many years and may be further revised as the design is finalized. Different components and analyses are at different stages of development. Some parts of the report are detailed, while other, less detailed parts are likely to undergo further refinement. The design methodology is intended to provide designs that satisfy the safety and operational

  18. Autism genetics: Methodological issues and experimental design.

    PubMed

    Sacco, Roberto; Lintas, Carla; Persico, Antonio M

    2015-10-01

    Autism is a complex neuropsychiatric disorder of developmental origin, where multiple genetic and environmental factors likely interact resulting in a clinical continuum between "affected" and "unaffected" individuals in the general population. During the last two decades, relevant progress has been made in identifying chromosomal regions and genes in linkage or association with autism, but no single gene has emerged as a major cause of disease in a large number of patients. The purpose of this paper is to discuss specific methodological issues and experimental strategies in autism genetic research, based on fourteen years of experience in patient recruitment and association studies of autism spectrum disorder in Italy.

  19. Performance-based asphalt mixture design methodology

    NASA Astrophysics Data System (ADS)

    Ali, Al-Hosain Mansour

    Today, several State D.O.T.s are investigating the use of tire rubber with local conventional materials. Several of the ongoing investigations have identified potential benefits from the use of these materials, including improvements in material properties and performance. One of the major problems is associated with the transferability of asphalt rubber technology without appropriately considering the effects of the variety of conventional materials on mixture behavior and performance. Typically, the design of these mixtures is adapted to the physical properties of the conventional materials by using the empirical Marshall mixture design method, without considering fundamental mixture behavior and performance. Design criteria related to the most common modes of failure for asphalt mixtures, such as rutting, fatigue cracking, and low-temperature thermal cracking, have to be developed and used to identify the "best mixture," in terms of performance, for the specific local materials and loading conditions. The main objective of this study was the development of a mixture design methodology that considers mixture behavior and performance. To achieve this objective, a laboratory investigation was conducted to evaluate mixture properties that can be related to mixture performance (in terms of rutting, low-temperature cracking, moisture damage, and fatigue) while simulating the actual field loading conditions the material is exposed to. The results proved that the inclusion of rubber in asphalt mixtures improved physical characteristics such as elasticity, flexibility, rebound, and aging properties, increased fatigue resistance, and reduced rutting potential. The possibility of coupling the traditional Marshall mix design method with parameters related to mixture behavior and performance was investigated. The SHRP SUPERPAVE mix design methodology was also reviewed and considered in this study for the development of an integrated

  20. Structural design methodology for large space structures

    NASA Astrophysics Data System (ADS)

    Dornsife, Ralph J.

    1992-02-01

    The Department of Defense requires research and development in designing, fabricating, deploying, and maintaining large space structures (LSS) in support of Army and Strategic Defense Initiative military objectives. Because of their large size, extreme flexibility, and the unique loading conditions in the space environment, LSS will present engineers with problems unlike those encountered in designing conventional civil engineering or aerospace structures. LSS will require sophisticated passive damping and active control systems in order to meet stringent mission requirements. These structures must also be optimally designed to minimize high launch costs. This report outlines a methodology for the structural design of LSS. It includes a definition of mission requirements, structural modeling and analysis, passive damping and active control system design, ground-based testing, payload integration, on-orbit system verification, and on-orbit assessment of structural damage. In support of this methodology, analyses of candidate LSS truss configurations are presented, and an algorithm correlating ground-based test behavior to expected microgravity behavior is developed.

  2. Control/structure interaction design methodology

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.; Layman, William E.

    1989-01-01

    The Control Structure Interaction Program is a technology development program for spacecraft that exhibit interactions between the control system and structural dynamics. The program objectives include development and verification of new design concepts (such as active structure) and new tools (such as a combined structure and control optimization algorithm) and their verification in ground and possibly flight test. The new CSI design methodology is centered around interdisciplinary engineers using new tools that closely integrate structures and controls. Verification is an important CSI theme, and analysts will be closely integrated with the CSI Test Bed laboratory. Components, concepts, tools and algorithms will be developed and tested in the lab and in future Shuttle-based flight experiments. The design methodology is summarized in block diagrams depicting the evolution of a spacecraft design and descriptions of analytical capabilities used in the process. The multiyear JPL CSI implementation plan is described along with the essentials of several new tools. A distributed network of computation servers and workstations was designed that will provide a state-of-the-art development base for the CSI technologies.

  3. Thermal design methodology for attaching morphing components

    NASA Astrophysics Data System (ADS)

    Hermiller, Jason M.; Cable, Kristin M.; Hemmelgarn, Christopher D.; Qi, H. Jerry; Castro, Francisco

    2009-03-01

    Seamless skins for morphing vehicles have been demonstrated as feasible, but establishing robust fastening methods for morphing skins is one of the next key challenges. Skin materials previously developed by Cornerstone Research Group and others include high-performance, reinforced elastomeric and shape memory polymer (SMP)-based composites. Recent focus has shifted to improving performance and increasing the technology readiness level of these materials. Cycling of recently demonstrated morphing skins has shown that an abrupt interface between rigid and soft materials leads to localized failure at the interface over time. In this paper, a fundamental understanding of the relationship between skin material properties and transition zone design is combined with advanced modeling techniques. A thermal gradient methodology is simulated to predict performance benefits. Experimental testing and simulations demonstrated improvement in morphing component performance for a uniaxial case. This work continues to advance development to eliminate fastening as the weak link in morphing skin technology and provides tools for use in morphing structure design.

  4. Sketching Designs Using the Five Design-Sheet Methodology.

    PubMed

    Roberts, Jonathan C; Headleand, Chris; Ritsos, Panagiotis D

    2016-01-01

    Sketching designs has been shown to be a useful way of planning and considering alternative solutions. The use of lo-fidelity prototyping, especially paper-based sketching, can save time and money and converge to better solutions more quickly. However, this design process is often viewed as too informal. Consequently, users do not know how to manage their thoughts and ideas (to first think divergently, and then finally converge on a suitable solution). We present the Five Design-Sheet (FdS) methodology. The methodology enables users to create information visualization interfaces through lo-fidelity methods. Users sketch and plan their ideas, helping them express different possibilities and think through these ideas to consider their potential effectiveness as solutions to the task (sheet 1); they create three principal designs (sheets 2, 3 and 4); and they then converge on a final realization design that can be implemented (sheet 5). In this article, we present (i) a review of the use of sketching as a planning method for visualization and the benefits of sketching, (ii) a detailed description of the Five Design-Sheet (FdS) methodology, and (iii) an evaluation of the FdS using the System Usability Scale, along with a case study of its use in industry and experience of its use in teaching.

  5. Methodology for Preliminary Design of Electrical Microgrids

    SciTech Connect

    Jensen, Richard P.; Stamp, Jason E.; Eddy, John P.; Henry, Jordan M; Munoz-Ramos, Karina; Abdallah, Tarek

    2015-09-30

    Many critical loads rely on simple backup generation to provide electricity in the event of a power outage. An Energy Surety Microgrid™ (ESM) can protect against outages caused by single-generator failures to improve reliability. An ESM will also provide a host of other benefits, including integration of renewable energy, fuel optimization, and maximizing the value of energy storage. The ESM concept includes a categorization of microgrid value propositions, and quantifies how the investment can be justified during either grid-connected or utility-outage conditions. In contrast with many approaches, the ESM approach explicitly sets requirements based on unlikely extreme conditions, including the need to protect against determined cyber adversaries. During the United States (US) Department of Defense (DOD)/Department of Energy (DOE) Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) effort, the ESM methodology was successfully used to develop the preliminary designs, which directly supported the contracting, construction, and testing for three military bases.

    Acknowledgements: Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project:
    * Mike Hightower, who has been the key driving force for Energy Surety Microgrids
    * Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations
    * Merrill Smith, U.S. Department of Energy SPIDERS Program Manager
    * Ross Roley and Rich Trundy from U.S. Pacific Command
    * Bill Waugaman and Bill Beary from U.S. Northern Command
    * Melanie Johnson and Harold Sanborn of the U.S. Army Corps of Engineers Construction Engineering Research Laboratory
    * Experts from the National Renewable Energy Laboratory, Idaho National Laboratory, Oak Ridge National Laboratory, and Pacific Northwest National Laboratory

  6. Enhancing the Front-End Phase of Design Methodology

    ERIC Educational Resources Information Center

    Elias, Erasto

    2006-01-01

    Design methodology (DM) is defined by the procedural path, expressed in design models, and techniques or methods used to untangle the various activities within a design model. Design education in universities is mainly based on descriptive design models. Much knowledge and organization have been built into DM to facilitate design teaching.…

  7. Progressive Designs for New Curricula.

    ERIC Educational Resources Information Center

    Turner, William A.; Belida, Loren; Johnson, William C.

    2000-01-01

    Explores how school building design influences the success of children in preparing for the future. Considerations when renovating and upgrading school design to enhance learning are discussed, including issues of sustainability, collaboration, lighting, and ventilation. (GR)

  8. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 1

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chinyere; Onyebueke, Landon

    1996-01-01

    This program report is the final report covering all the work done on this project. The goal of this project is technology transfer of methodologies to improve the design process. The specific objectives are: 1. To learn and understand probabilistic design analysis using NESSUS. 2. To assign design projects on the application of NESSUS to either undergraduate or graduate students. 3. To integrate the application of NESSUS into some selected senior level courses in Civil and Mechanical Engineering curricula. 4. To develop courseware in Probabilistic Design methodology to be included in a graduate level Design Methodology course. 5. To study the relationship between the Probabilistic design methodology and Axiomatic design methodology.

  9. Methodology for Three Dimensional Nozzle Design

    NASA Technical Reports Server (NTRS)

    Ferri, A.; Dash, S.; Delguidice, P.

    1974-01-01

    Criteria for the selection and methods of analysis for designing a hypersonic scramjet nozzle are discussed. The criteria are based on external and internal flow requirements, related to drag, lift, and pitching moments of the vehicle and thrust of the engine. The steps involved in establishing the criteria are analyzed. Mathematical models of the design procedure are provided.

  10. A design methodology for portable software on parallel computers

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.

    1993-01-01

    This final report for research that was supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that will execute quickly on a parallel computer. The second difficulty is transporting software between dissimilar parallel computers. In general, we expect that more hardware-specific information will be included in software designs for parallel computers than in designs for sequential computers. This inclusion is an instance of portability being sacrificed for high performance. New parallel computers are being introduced frequently. Trying to keep one's software on the current high performance hardware, a software developer almost continually faces yet another expensive software transportation. The problem of the proposed research is to create a design methodology that helps designers to more precisely control both portability and hardware-specific programming details. The proposed research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two. A more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal which describes our research associated with the issues of software portability and high performance. The list of research tasks are specified in the proposal. The proposal 'A Design Methodology for Portable Software on Parallel Computers' is summarized in section three and is provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof-of-concept for the Ph.D. dissertation. We have implemented and measured

  11. Design Study Methodology: Reflections from the Trenches and the Stacks.

    PubMed

    Sedlmair, M; Meyer, M; Munzner, T

    2012-12-01

    Design studies are an increasingly popular form of problem-driven visualization research, yet there is little guidance available about how to do them effectively. In this paper we reflect on our combined experience of conducting twenty-one design studies, as well as reading and reviewing many more, and on an extensive literature review of other field work methods and methodologies. Based on this foundation we provide definitions, propose a methodological framework, and provide practical guidance for conducting design studies. We define a design study as a project in which visualization researchers analyze a specific real-world problem faced by domain experts, design a visualization system that supports solving this problem, validate the design, and reflect about lessons learned in order to refine visualization design guidelines. We characterize two axes - a task clarity axis from fuzzy to crisp and an information location axis from the domain expert's head to the computer - and use these axes to reason about design study contributions, their suitability, and uniqueness from other approaches. The proposed methodological framework consists of 9 stages: learn, winnow, cast, discover, design, implement, deploy, reflect, and write. For each stage we provide practical guidance and outline potential pitfalls. We also conducted an extensive literature survey of related methodological approaches that involve a significant amount of qualitative field work, and compare design study methodology to that of ethnography, grounded theory, and action research.

  12. Implicit Shape Parameterization for Kansei Design Methodology

    NASA Astrophysics Data System (ADS)

    Nordgren, Andreas Kjell; Aoyama, Hideki

    Implicit shape parameterization for Kansei design is a procedure that uses 3D models, or concepts, to span a shape space for surfaces in the automotive field. A low-dimensional yet accurate shape descriptor was found by Principal Component Analysis of an ensemble of point clouds, which were extracted from mesh-based surfaces modeled in a CAD program. A theoretical background of the procedure is given along with step-by-step instructions for the required data processing. The results show that complex surfaces can be described very efficiently, and encode design features by an implicit approach that does not rely on error-prone explicit parameterizations. This provides a very intuitive way for a designer to explore shapes, because various design features can simply be introduced by adding new concepts to the ensemble. Complex shapes have been difficult to analyze with Kansei methods due to the large number of parameters involved, but implicit parameterization of design features provides a low-dimensional shape descriptor for efficient data collection, model-building and analysis of emotional content in 3D surfaces.
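
    A minimal sketch of the core step described above: stacking point clouds sampled from an ensemble of concept surfaces and extracting a low-dimensional shape descriptor with Principal Component Analysis. The ensemble below is a random stand-in; real input would come from CAD-sampled surfaces.

    # Sketch: a low-dimensional shape descriptor via PCA over an ensemble of
    # point clouds, each flattened to one row vector.
    import numpy as np

    rng = np.random.default_rng(1)
    n_concepts, n_points = 12, 500
    X = rng.normal(size=(n_concepts, n_points * 3))  # each row: 500 (x, y, z) samples

    X_centered = X - X.mean(axis=0)                  # center the ensemble
    U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)

    k = 3                                            # retained principal components
    scores = U[:, :k] * s[:k]                        # implicit shape parameters per concept
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    print(f"{k} components explain {explained:.1%} of ensemble variance")

    # A new shape is explored by picking k scores and reconstructing:
    new_shape = X.mean(axis=0) + scores[0] @ Vt[:k]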

  13. Philosophical and Methodological Beliefs of Instructional Design Faculty and Professionals

    ERIC Educational Resources Information Center

    Sheehan, Michael D.; Johnson, R. Burke

    2012-01-01

    The purpose of this research was to probe the philosophical beliefs of instructional designers using sound philosophical constructs and quantitative data collection and analysis. We investigated the philosophical and methodological beliefs of instructional designers, including 152 instructional design faculty members and 118 non-faculty…

  14. Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.

    1999-01-01

    A progressive failure analysis method has been developed for predicting the failure of laminated composite structures under geometrically nonlinear deformations. The progressive failure analysis uses C¹ shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms, and several options are available to degrade the material properties after failures. The progressive failure analysis method is implemented in the COMET finite element analysis code and can predict the damage and response of laminated composite structures from initial loading to final failure. The different failure criteria and material degradation methods are compared and assessed by performing analyses of several laminated composite structures. Results from the progressive failure method indicate good correlation with the existing test data, except in structural applications where interlaminar stresses are important, as these may cause failure mechanisms such as debonding or delamination.
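
    The maximum strain criterion and the degradation options are only named above; the sketch below shows that ply-level logic in isolation, with invented allowables and a simple stiffness-knockdown rule. It is not the COMET implementation.

    # Sketch: one ply-level pass of a progressive failure check using the
    # maximum strain criterion with a simple stiffness-degradation rule.
    # Allowables, strains, and the knockdown factor are invented values.
    FIBER_TENS, FIBER_COMP = 0.0105, -0.0085    # allowable fiber-direction strains
    MATRIX_TENS, MATRIX_COMP = 0.0060, -0.0120  # allowable transverse strains
    DEGRADE = 0.01                              # residual stiffness fraction after failure

    def check_ply(eps1, eps2, E1, E2):
        """Return possibly degraded moduli plus a failure flag (max strain check)."""
        failed_fiber = not (FIBER_COMP < eps1 < FIBER_TENS)
        failed_matrix = not (MATRIX_COMP < eps2 < MATRIX_TENS)
        if failed_fiber:
            E1 *= DEGRADE   # fiber failure: knock down longitudinal stiffness
        if failed_matrix:
            E2 *= DEGRADE   # matrix failure: knock down transverse stiffness
        return E1, E2, failed_fiber or failed_matrix

    # In the full analysis this check sits inside the load-stepping loop:
    # step load, re-solve, re-check every ply, repeat to final failure.
    print(check_ply(eps1=0.011, eps2=0.002, E1=140e9, E2=10e9))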

  15. Experimental Validation of an Integrated Controls-Structures Design Methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.

    1996-01-01

    The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, the amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.

  16. Surface design methodology - challenge the steel

    NASA Astrophysics Data System (ADS)

    Bergman, M.; Rosen, B.-G.; Eriksson, L.; Anderberg, C.

    2014-03-01

    The way a product or material is experienced by its user can differ depending on the scenario. It is also well known that different materials and surfaces are used for different purposes. When optimizing materials and surface roughness with the intention of improving a product, it is important to capture not only the physical requirements but also the user experience and expectations. The design of medical equipment is characterized by laws and requirements on the materials and the surface function, but also by a conservative way of thinking about materials and colours. The purpose of this paper is to link the technical and customer requirements of current materials and surface textures in medical environments. By focusing on parts of the theory of Kansei Engineering, improvements to the company's products are possible. The idea is to find correlations between the desired experience or "feeling" for a product (customer requirements), functional requirements, and product geometrical properties (design parameters), to be implemented in new, improved products. To find new materials with the same (or better) technical requirements but a higher level of user stimulation, the current material (stainless steel) and its surface (brushed textures) were used as a reference. Focus groups of experts at the manufacturer led to a selection of twelve possible new materials for investigation in the project. In collaboration with the topical company for this project, three new materials that fulfil the requirements (easy to clean and anti-bacterial) came into focus for further investigation with regard to a new design of a washer-disinfector for medical equipment, using the Kansei-based Cleanability Approach (CAA).

  17. "MARK I" MEASUREMENT METHODOLOGY FOR POLLUTION PREVENTION PROGRESS OCCURRING AS A RESULT OF PRODUCT DECISIONS

    EPA Science Inventory

    A methodology for assessing progress in pollution prevention resulting from product redesign, reformulation or replacement is described. The method compares the pollution generated by the original product with that from the modified or replacement product, taking into account, if...

  18. A Methodology for the Neutronics Design of Space Nuclear Reactors

    SciTech Connect

    King, Jeffrey C.; El-Genk, Mohamed S.

    2004-02-04

    A methodology for the neutronics design of space power reactors is presented. This methodology involves balancing the competing requirements of having sufficient excess reactivity for the desired lifetime, keeping the reactor subcritical at launch and during submersion accidents, and providing sufficient control over the lifetime of the reactor. These requirements are addressed by three reactivity values for a given reactor design: the excess reactivity at beginning of mission, the negative reactivity at shutdown, and the negative reactivity margin in submersion accidents. These reactivity values define the control worth and the safety worth in submersion accidents, used for evaluating the merit of a proposed reactor type and design. The Heat Pipe-Segmented Thermoelectric Module Converters space reactor core design is evaluated and modified based on the proposed methodology. The final reactor core design has sufficient excess reactivity for 10 years of nominal operation at 1.82 MW of fission power and is subcritical at launch and in all water submersion accidents.

  19. A Methodology for the Neutronics Design of Space Nuclear Reactors

    NASA Astrophysics Data System (ADS)

    King, Jeffrey C.; El-Genk, Mohamed S.

    2004-02-01

    A methodology for the neutronics design of space power reactors is presented. This methodology involves balancing the competing requirements of having sufficient excess reactivity for the desired lifetime, keeping the reactor subcritical at launch and during submersion accidents, and providing sufficient control over the lifetime of the reactor. These requirements are addressed by three reactivity values for a given reactor design: the excess reactivity at beginning of mission, the negative reactivity at shutdown, and the negative reactivity margin in submersion accidents. These reactivity values define the control worth and the safety worth in submersion accidents, used for evaluating the merit of a proposed reactor type and design. The Heat Pipe-Segmented Thermoelectric Module Converters space reactor core design is evaluated and modified based on the proposed methodology. The final reactor core design has sufficient excess reactivity for 10 years of nominal operation at 1.82 MW of fission power and is subcritical at launch and in all water submersion accidents.
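
    The three reactivity values combine into the control and safety worths the abstract names; the toy arithmetic below shows one plausible combination under invented dollar values. The paper's precise definitions and figures are not quoted in the abstract.

    # Sketch: combining the three reactivity values of the abstract into control
    # worth and submersion safety worth. All values are invented for illustration.
    rho_excess_bom = 2.0   # excess reactivity at beginning of mission ($)
    rho_shutdown = -4.0    # reactivity with control elements at shutdown ($)
    rho_submersion = -1.0  # reactivity margin in worst-case water submersion ($)

    control_worth = rho_excess_bom - rho_shutdown   # swing the controls must cover
    safety_worth = rho_excess_bom - rho_submersion  # hold-down demonstrated when submerged

    print(f"control worth = ${control_worth:.1f}, submersion safety worth = ${safety_worth:.1f}")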

  20. FOREWORD: Computational methodologies for designing materials

    NASA Astrophysics Data System (ADS)

    Rahman, Talat S.

    2009-02-01

    It would be fair to say that in the past few decades, theory and computer modeling have played a major role in elucidating the microscopic factors that dictate the properties of functional novel materials. Together with advances in experimental techniques, theoretical methods are becoming increasingly capable of predicting properties of materials at different length scales, thereby bringing in sight the long-sought goal of designing material properties according to need. Advances in computer technology and their availability at a reasonable cost around the world have made it all the more urgent to disseminate what is now known about these modern computational techniques. In this special issue on computational methodologies for materials by design, we have tried to solicit articles from authors whose works collectively represent the microcosm of developments in the area. This turned out to be a difficult task for a variety of reasons, not the least of which is space limitation in this special issue. Nevertheless, we gathered twenty articles that represent some of the important directions in which theory and modeling are proceeding in the general effort to capture the ability to produce materials by design. The majority of papers presented here focus on technique developments that are expected to uncover further the fundamental processes responsible for material properties, and for their growth modes and morphological evolutions. As for material properties, some of the articles here address the challenges that continue to emerge from attempts at accurate descriptions of magnetic properties, of electronically excited states, and of sparse matter, all of which demand new looks at density functional theory (DFT). I should hasten to add that much of the success in accurate computational modeling of materials emanates from the remarkable predictive power of DFT, without which we would not be able to place the subject on firm theoretical grounds. As we know and will also

  1. Solid lubrication design methodology, phase 2

    NASA Technical Reports Server (NTRS)

    Pallini, R. A.; Wedeven, L. D.; Ragen, M. A.; Aggarwal, B. B.

    1986-01-01

    The high temperature performance of solid lubricated rolling elements was evaluated with a specially designed traction (friction) test apparatus. Graphite lubricants containing three additives (silver, phosphate glass, and zinc orthophosphate) were evaluated from room temperature to 540 C. Two hard coats were also evaluated. The evaluation of these lubricants, using a burnishing method of application, shows a reasonable transfer of lubricant and wear protection for short duration testing except in the 200 C temperature range. The graphite lubricants containing silver and zinc orthophosphate additives were more effective than the phosphate glass material over the test conditions examined. Traction coefficients ranged from a low of 0.07 to a high of 0.6. By curve fitting the traction data, empirical equations for slope and maximum traction coefficient as a function of contact pressure (P), rolling speed (U), and temperature (T) were developed for each lubricant. A solid lubricant traction model was incorporated into an advanced bearing analysis code (SHABERTH). For comparison purposes, preliminary heat generation calculations were made for both oil and solid lubricated bearing operation. A preliminary analysis indicated a significantly higher heat generation for a solid lubricated ball bearing in a deep groove configuration. An analysis of a cylindrical roller bearing configuration showed a potential for a low friction solid lubricated bearing.

  2. PEM Fuel Cells Redesign Using Biomimetic and TRIZ Design Methodologies

    NASA Astrophysics Data System (ADS)

    Fung, Keith Kin Kei

    Two formal design methodologies, biomimetic design and the Theory of Inventive Problem Solving (TRIZ), were applied to the redesign of a Proton Exchange Membrane (PEM) fuel cell. Proof-of-concept prototyping was performed on two of the concepts for water management. The liquid water collection with strategically placed wicks concept demonstrated the potential benefits for a fuel cell. Conversely, the periodic flow direction reversal concept might cause a potential reduction in water removal from a fuel cell. The causes of this water removal reduction remain unclear. In addition, three of the concepts generated with biomimetic design were further studied and demonstrated to stimulate more creative ideas in the thermal and water management of fuel cells. The biomimetic design and TRIZ methodologies were successfully applied to fuel cells and provided different perspectives on the redesign of fuel cells. The methodologies should continue to be used to improve fuel cells.

  3. Methodological Innovation in Practice-Based Design Doctorates

    ERIC Educational Resources Information Center

    Yee, Joyce S. R.

    2010-01-01

    This article presents a selective review of recent design PhDs that identify and analyse the methodological innovation that is occurring in the field, in order to inform future provision of research training. Six recently completed design PhDs are used to highlight possible philosophical and practical models that can be adopted by future PhD…

  4. Architectural Exploration and Design Methodologies of Photonic Interconnection Networks

    NASA Astrophysics Data System (ADS)

    Chan, Jong Wu

    Photonic technology is becoming an increasingly attractive solution to the problems facing today's electronic chip-scale interconnection networks. Recent progress in silicon photonics research has enabled the demonstration of all the necessary optical building blocks for creating extremely high-bandwidth density and energy-efficient links for on- and off-chip communications. From the feasibility and architecture perspective, however, photonics represents a dramatic paradigm shift from traditional electronic network designs due to fundamental differences in how electronics and photonics function and behave. As a result of these differences, new modeling and analysis methods must be employed in order to properly realize a functional photonic chip-scale interconnect design. In this work, we present a methodology for characterizing and modeling fundamental photonic building blocks which can subsequently be combined to form full photonic network architectures. We also describe a set of tools which can be utilized to assess the physical-layer and system-level performance properties of a photonic network. The models and tools are integrated in a novel open-source design and simulation environment called PhoenixSim. Next, we leverage PhoenixSim for the study of chip-scale photonic networks. We examine several photonic networks through the synergistic study of both physical-layer metrics and system-level metrics. This holistic analysis method enables us to provide deeper insight into architecture scalability since it considers insertion loss, crosstalk, and power dissipation. In addition to these novel physical-layer metrics, traditional system-level metrics of bandwidth and latency are also obtained. Lastly, we propose a novel routing architecture known as wavelength-selective spatial routing. This routing architecture is analogous to electronic virtual channels since it enables the transmission of multiple logical optical channels through a single physical plane (i.e. the
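
    PhoenixSim's physical-layer models are not detailed in the abstract; the sketch below only illustrates the kind of insertion-loss bookkeeping such an analysis performs along one optical path. Every loss figure, element count, and the power budget are invented for illustration.

    # Sketch: physical-layer bookkeeping of the kind a photonic network
    # simulator automates: accumulate per-element insertion loss along one
    # path and check it against an optical power budget.
    LOSS_DB = {"waveguide_per_cm": 1.0, "ring_drop": 0.5, "crossing": 0.15, "bend": 0.005}

    def path_loss_db(cm_waveguide, n_drops, n_crossings, n_bends):
        return (cm_waveguide * LOSS_DB["waveguide_per_cm"]
                + n_drops * LOSS_DB["ring_drop"]
                + n_crossings * LOSS_DB["crossing"]
                + n_bends * LOSS_DB["bend"])

    budget_db = 20.0  # laser power minus receiver sensitivity, per wavelength
    loss = path_loss_db(cm_waveguide=2.0, n_drops=2, n_crossings=40, n_bends=8)
    print(f"worst-case path loss: {loss:.2f} dB -> "
          f"{'within budget' if loss < budget_db else 'fails'}")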

  5. Enhancing Instructional Design Efficiency: Methodologies Employed by Instructional Designers

    ERIC Educational Resources Information Center

    Roytek, Margaret A.

    2010-01-01

    Instructional systems design (ISD) has been frequently criticised as taking too long to implement, calling for a reduction in cycle time--the time that elapses between project initiation and delivery. While instructional design research has historically focused on increasing "learner" efficiencies, the study of what instructional designers do to…

  6. A design methodology for nonlinear systems containing parameter uncertainty: Application to nonlinear controller design

    NASA Technical Reports Server (NTRS)

    Young, G.

    1982-01-01

    A design methodology capable of dealing with nonlinear systems, such as a controlled ecological life support system (CELSS), containing parameter uncertainty is discussed. The methodology was applied to the design of discrete time nonlinear controllers. The nonlinear controllers can be used to control either linear or nonlinear systems. Several controller strategies are presented to illustrate the design procedure.

  7. Progress in multirate digital control system design

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.

    1991-01-01

    A new methodology for multirate sampled-data control design based on a new generalized control law structure, two new parameter-optimization-based control law synthesis methods, and a new singular-value-based robustness analysis method are described. The control law structure can represent multirate sampled-data control laws of arbitrary structure and dynamic order, with arbitrarily prescribed sampling rates for all sensors and update rates for all processor states and actuators. The two control law synthesis methods employ numerical optimization to determine values for the control law parameters. The robustness analysis method is based on the multivariable Nyquist criterion applied to the loop transfer function for the sampling period equal to the period of repetition of the system's complete sampling/update schedule. The complete methodology is demonstrated by application to the design of a combination yaw damper and modal suppression system for a commercial aircraft.
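
    The "period of repetition of the system's complete sampling/update schedule" is the least common multiple of the individual sensor, processor, and actuator periods; the sketch below computes it for an invented schedule.

    # Sketch: period of repetition of a multirate sampling/update schedule,
    # the single rate at which the abstract's Nyquist-based robustness test
    # is applied. The device rates below are invented.
    from fractions import Fraction
    from math import lcm

    # Sensor/processor/actuator periods in seconds, as exact fractions.
    periods = [Fraction(1, 100), Fraction(1, 40), Fraction(1, 25)]

    den = lcm(*(p.denominator for p in periods))
    nums = [p.numerator * (den // p.denominator) for p in periods]
    base_period = Fraction(lcm(*nums), den)  # least common multiple of the periods
    ticks = [int(base_period / p) for p in periods]
    print(f"schedule repeats every {base_period} s; device ticks per period: {ticks}")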

  8. Quantum circuit physical design methodology with emphasis on physical synthesis

    NASA Astrophysics Data System (ADS)

    Mohammadzadeh, Naser; Saheb Zamani, Morteza; Sedighi, Mehdi

    2013-11-01

    In our previous works, we have introduced the concept of "physical synthesis" as a method to consider the mutual effects of quantum circuit synthesis and physical design. While physical synthesis can involve various techniques to improve the characteristics of the resulting quantum circuit, we have proposed two techniques (namely gate exchanging and auxiliary qubit selection) to demonstrate the effectiveness of the physical synthesis. However, the previous contributions focused mainly on the physical synthesis concept, and the techniques were proposed only as a proof of concept. In this paper, we propose a methodological framework for physical synthesis that involves all previously proposed techniques along with a newly introduced one (called auxiliary qubit insertion). We will show that the entire flow can be seen as one monolithic methodology. The proposed methodology is analyzed using a large set of benchmarks. Experimental results show that the proposed methodology decreases the average latency of quantum circuits by about 36.81 % for the attempted benchmarks.
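
    The gate-exchanging technique itself is defined in the authors' earlier papers, not here; the sketch below only illustrates the trivial commutation test that makes such an exchange safe, on an invented gate-list representation.

    # Sketch: the commutation test behind "gate exchanging": two adjacent gates
    # in a gate list may be reordered when they act on disjoint qubit sets,
    # giving the physical-design stage freedom to reduce qubit movement.
    def can_exchange(gate_a, gate_b):
        """Adjacent gates commute trivially if they share no qubits."""
        return not set(gate_a["qubits"]) & set(gate_b["qubits"])

    circuit = [
        {"name": "CNOT", "qubits": (0, 1)},
        {"name": "H",    "qubits": (2,)},
        {"name": "CNOT", "qubits": (1, 2)},
    ]
    print(can_exchange(circuit[0], circuit[1]))  # True: may be swapped
    print(can_exchange(circuit[1], circuit[2]))  # False: both touch qubit 2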

  9. Methodology for a stormwater sensitive urban watershed design

    NASA Astrophysics Data System (ADS)

    Romnée, Ambroise; Evrard, Arnaud; Trachte, Sophie

    2015-11-01

    In urban stormwater management, decentralized systems, including stormwater best management practices (BMPs), are now being tested worldwide. However, a watershed-scale approach, relevant for urban hydrology, is almost always neglected when designing a stormwater management plan with best management practices. As a consequence, urban designers fail to convince public authorities of the actual hydrologic effectiveness of such an approach to urban watershed stormwater management. In this paper, we develop a design-oriented methodology for studying the morphology of an urban watershed in terms of sustainable stormwater management. The methodology is a five-step method, firstly based on the cartographic analysis of stormwater-relevant indicators regarding the landscape, the urban fabric and the governance. The second step focuses on the identification of territorial stakes and their corresponding strategies for decentralized stormwater management. Based on the indicators, the stakes and the strategies, the third step defines spatial typologies for the roadway system and the urban fabric system. The fourth step determines stormwater management scenarios to be applied to both systems of spatial typologies. The fifth step is the design of decentralized stormwater management projects integrating BMPs into each spatial typology. The methodology aims to advise urban designers and engineering offices on the right location and selection of BMPs without giving them a hypothetical unique solution. Since every location and every watershed is different due to local guidelines and stakeholders, this paper provides a methodology for a stormwater-sensitive urban watershed design that could be reproduced anywhere. As an example, the methodology is applied as a case study to an urban watershed in Belgium, confirming that the method is applicable to any urban watershed. This paper should be helpful for engineering and design offices in urban hydrology to define a

  10. Implementation of Probabilistic Design Methodology at Tennessee State University

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chinyere

    1996-01-01

    Engineering design is one of the most important areas in engineering education. Deterministic Design Methodology (DDM) is the only design method taught in most engineering schools. This method does not give a direct account of uncertainties in design parameters. Hence, it is impossible to quantify the uncertainties in the response, and the actual safety margin remains unknown. The desire for a design methodology that can identify the primitive (random) variables that affect the structural behavior has led to a growing interest in Probabilistic Design Methodology (PDM). This method is gaining more recognition in industry than in educational institutions. Some of the reasons for the limited use of PDM at the moment are that many are unaware of its potential, and most of the software developed for PDM is very recent. The central goal of the PDM project at Tennessee State University is to introduce engineering students to the method. The students participating in the project learn about PDM and the computer codes that are available to the design engineer. The software being used for this project is NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), developed under the NASA probabilistic structural analysis program. NESSUS has three different modules, which make it a very comprehensive computer code for PDM. Research in technology transfer through course offerings in PDM is in effect at Tennessee State University. The aim is to familiarize students with the problem of uncertainties in engineering design. Included in the paper are some projects on PDM carried out by students and faculty. The areas in which this method is currently being applied include: Design of Gears (spur and worm); Design of Shafts; Design of Statically Indeterminate Frame Structures; Design of Helical Springs; and Design of Shock Absorbers. Some of the current results of these projects are presented.

  11. Probabilistic Based Design Methodology for Solid Oxide Fuel Cell Stacks

    SciTech Connect

    Sun, Xin; Tartakovsky, Alexandre M.; Khaleel, Mohammad A.

    2009-05-01

    A probabilistic-based component design methodology is developed for solid oxide fuel cell (SOFC) stacks. This method takes into account the randomness in SOFC material properties as well as the stresses arising from different manufacturing and operating conditions. The purpose of this work is to provide SOFC designers with a design methodology such that the desired level of component reliability can be achieved with deterministic design functions, using an equivalent safety factor to account for the uncertainties in material properties and structural stresses. Multi-physics-based finite element analyses were used to predict the electrochemical and thermomechanical responses of SOFC stacks with different geometric variations and under different operating conditions. Failures in the anode and the seal were used as design examples. The predicted maximum principal stresses in the anode and the seal were compared with the experimentally determined strength characteristics of the anode and the seal, respectively. Component failure probabilities for the current design were then calculated under different operating conditions. It was found that the anode failure probability is very low under all conditions examined. The seal failure probability is relatively high, particularly for a high fuel utilization rate under low average cell temperature. Next, the procedures for calculating the equivalent safety factors for the anode and seal were demonstrated such that a uniform failure probability of the anode and seal can be achieved. Analysis procedures were also included for non-normally distributed random variables such that more realistic distributions of strength and stress can be analyzed using the proposed design methodology.
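    As a rough illustration of the stress-versus-strength comparison described above, the sketch below estimates a component failure probability by Monte Carlo sampling and backs out an equivalent safety factor for a target reliability. The distributions and all numbers are invented placeholders, not SOFC data or the authors' procedure.

```python
# Minimal stress/strength reliability sketch (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
stress = rng.normal(30.0, 4.0, n)        # FE-predicted stress, MPa (assumed)
strength = 60.0 * rng.weibull(10.0, n)   # Weibull-distributed strength, MPa (assumed)

p_fail = np.mean(stress > strength)      # component failure probability

# Equivalent safety factor: mean strength over the deterministic stress
# level whose failure probability equals an assumed 1e-4 target.
s_allow = np.quantile(strength, 1e-4)
print(f"P(failure) = {p_fail:.1e}")
print(f"equivalent safety factor = {strength.mean() / s_allow:.2f}")
```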

  12. Modern problems of technical progress and methodological support in medicine

    NASA Astrophysics Data System (ADS)

    Novyc'kyy, Victor V.; Lushchyk, Ulyana B.

    2001-06-01

    The rapid development of modern computer technologies has given a powerful incentive to technical progress as a whole and to medical equipment in particular. A range of medical diagnostic computer-based systems has recently appeared, used in various fields: roentgenology, ultrasound diagnostics, neurophysiology, angiology, etc. The advantages of a computer interface are undoubted: it makes it possible to enhance the functioning of the system itself, speed up all technological processes, carry out reliable and easy archiving, and exchange the information obtained. But the most important advantage of using a PC in a diagnostic system lies in the unlimited freedom to use mathematical algorithms and flexible mathematical models for processing the information obtained. On the other hand, the absence of clinical techniques for interpreting the data obtained is an obstacle to using the software at its full potential. As a result, a close connection between the scientific development of hardware and high-informative techniques for clinical interpretation of the instrumental data obtained has become more topical than ever. We see an optimal solution in introducing interpretation patterns in conjunction with purely numerical computations into mathematical simulations.

  13. Extensibility of a linear rapid robust design methodology

    NASA Astrophysics Data System (ADS)

    Steinfeldt, Bradley A.; Braun, Robert D.

    2016-05-01

    The extensibility of a linear rapid robust design methodology is examined. This analysis is approached from a computational cost and accuracy perspective. The sensitivity of the solution's computational cost is examined by analysing effects such as the number of design variables, nonlinearity of the CAs, and nonlinearity of the response, in addition to several potential complexity metrics. Relative to traditional robust design methods, the linear rapid robust design methodology scaled better with the size of the problem and had performance that exceeded the traditional techniques examined. The accuracy of applying a method with linear fundamentals to nonlinear problems was examined. It is observed that if the magnitude of nonlinearity is less than 1000 times that of the nominal linear response, the error associated with applying successive linearization will result in response errors of less than 10% compared to the full nonlinear error.

  14. Viability, Advantages and Design Methodologies of M-Learning Delivery

    ERIC Educational Resources Information Center

    Zabel, Todd W.

    2010-01-01

    The purpose of this study was to examine the viability and principle design methodologies of Mobile Learning models in developing regions. Demographic and market studies were utilized to determine the viability of M-Learning delivery as well as best uses for such technologies and methods given socioeconomic and political conditions within the…

  15. Chicken or Egg? Communicative Methodology or Communicative Syllabus Design.

    ERIC Educational Resources Information Center

    Yalden, Janice

    A consensus has emerged on many issues in communicative language teaching, but one question that needs attention is the question of what ought to constitute the appropriate starting point in the design and implementation of a second language program. Two positions to consider are the following: first, the development of communicative methodology,…

  16. A computer simulator for development of engineering system design methodologies

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Sobieszczanski-Sobieski, J.

    1987-01-01

    A computer program designed to simulate and improve engineering system design methodology is described. The simulator mimics the qualitative behavior and data couplings occurring among the subsystems of a complex engineering system. It eliminates the engineering analyses in the subsystems by replacing them with judiciously chosen analytical functions. With the cost of analysis eliminated, the simulator is used for experimentation with a large variety of candidate algorithms for multilevel design optimization to choose the best ones for the actual application. Thus, the simulator serves as a development tool for multilevel design optimization strategy. The simulator concept, implementation, and status are described and illustrated with examples.
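    A minimal sketch of the simulator idea follows: two coupled "subsystems" are replaced by cheap analytic stand-ins that preserve the data coupling, and a coordination strategy (here plain fixed-point iteration) is exercised at negligible cost. The functions and coupling variables are invented placeholders, not the program's actual models.

```python
# Toy system simulator: analytic stand-ins for coupled subsystem analyses.
def aero(x, weight):      # stand-in for an aerodynamics analysis
    return 1.0 + 0.1 * x + 0.05 * weight

def structures(x, lift):  # stand-in for a structures analysis
    return 2.0 + 0.2 * x + 0.1 * lift

def converge(x, tol=1e-10, max_iter=100):
    # Fixed-point iteration on the coupling variables (lift, weight).
    lift = weight = 0.0
    for _ in range(max_iter):
        lift_new = aero(x, weight)
        weight_new = structures(x, lift_new)
        if abs(lift_new - lift) < tol and abs(weight_new - weight) < tol:
            return lift_new, weight_new
        lift, weight = lift_new, weight_new
    raise RuntimeError("coupling iteration did not converge")

print(converge(x=1.0))
```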

  17. Tornado missile simulation and design methodology. Volume 1: simulation methodology, design applications, and TORMIS computer code. Final report

    SciTech Connect

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments. Sensitivity analyses have been performed on both the individual models and the integrated methodology, and risk has been assessed for a hypothetical nuclear power plant design case study.
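    The Monte Carlo structure described above can be caricatured in a few lines: sample a tornado event, the debris it injects, and whether any missile is transported to and damages the target. Every probability and count below is an invented placeholder, not a TORMIS parameter.

```python
# Toy tornado-missile Monte Carlo (illustrative probabilities only).
import random

random.seed(1)

def simulate_year():
    if random.random() > 1e-3:                 # annual tornado strike (assumed)
        return False
    for _ in range(random.randint(0, 50)):     # injected missile count (assumed)
        hit = random.random() < 5e-3           # transport + impact (assumed)
        if hit and random.random() < 0.2:      # damage given impact (assumed)
            return True
    return False

years = 1_000_000
p = sum(simulate_year() for _ in range(years)) / years
print(f"estimated annual damage probability ~ {p:.1e}")
```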

  18. FOREWORD: Computational methodologies for designing materials

    NASA Astrophysics Data System (ADS)

    Rahman, Talat S.

    2009-02-01

    It would be fair to say that in the past few decades, theory and computer modeling have played a major role in elucidating the microscopic factors that dictate the properties of functional novel materials. Together with advances in experimental techniques, theoretical methods are becoming increasingly capable of predicting properties of materials at different length scales, thereby bringing within sight the long-sought goal of designing material properties according to need. Advances in computer technology and their availability at a reasonable cost around the world have made it all the more urgent to disseminate what is now known about these modern computational techniques. In this special issue on computational methodologies for materials by design we have tried to solicit articles from authors whose works collectively represent the microcosm of developments in the area. This turned out to be a difficult task for a variety of reasons, not the least of which is space limitation in this special issue. Nevertheless, we gathered twenty articles that represent some of the important directions in which theory and modeling are proceeding in the general effort to capture the ability to produce materials by design. The majority of papers presented here focus on technique developments that are expected to uncover further the fundamental processes responsible for material properties, and for their growth modes and morphological evolutions. As for material properties, some of the articles here address the challenges that continue to emerge from attempts at accurate descriptions of magnetic properties, of electronically excited states, and of sparse matter, all of which demand new looks at density functional theory (DFT). I should hasten to add that much of the success in accurate computational modeling of materials emanates from the remarkable predictive power of DFT, without which we would not be able to place the subject on firm theoretical grounds. As we know and will also

  19. Failure detection system design methodology. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.

    1980-01-01

    The design of a failure detection and identification system consists of designing a robust residual generation process and a high performance decision making process. The designs of these two processes are examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
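    The parity-space idea can be shown in a small sketch: for an autonomous linear system, any vector in the left null space of the stacked observability matrix yields a residual that is zero on fault-free data and nonzero under a sensor fault. The system matrices, window length, and injected fault below are invented for illustration.

```python
# Parity-space residual generation for a toy autonomous linear system.
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
s = 2                                  # parity window length

# Stacked observability matrix [C; CA; CA^2].
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
# A parity vector w satisfies w^T O = 0 (left null space of O).
_, _, Vt = np.linalg.svd(O.T)
w = Vt[-1]

x = np.array([1.0, -0.5])
ys = []
for k in range(10):
    y = (C @ x).item()
    if k >= 6:
        y += 0.5                       # injected sensor bias fault
    ys.append(y)
    x = A @ x

for k in range(s, 10):
    r = w @ np.array(ys[k - s:k + 1])  # ~0 fault-free, nonzero after k = 6
    print(k, round(float(r), 6))
```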

  20. PROGRESS IN DESIGN OF THE SNS LINAC

    SciTech Connect

    R. HARDEKOPF

    2000-11-01

    The Spallation Neutron Source (SNS) is a six-laboratory collaboration to build an intense pulsed neutron facility at Oak Ridge, TN. The linac design has evolved from the conceptual design presented in 1997 to achieve higher initial performance and to incorporate desirable upgrade features. The linac will initially produce 2-MW beam power using a combination of radio-frequency quadrupole (RFQ) linac, drift-tube linac (DTL), coupled-cavity linac (CCL), and superconducting-cavity linac (SCL). Designs of each of these elements support the high peak intensity and high quality beam required for injection into the SNS accumulator ring. This paper will trace the evolution of the linac design, the cost and performance factors that drove architecture decisions, and the progress made in the R&D program.

  1. A robust optimization methodology for preliminary aircraft design

    NASA Astrophysics Data System (ADS)

    Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.

    2016-05-01

    This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
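    A minimal sketch of the conservative robust-LP reformulation follows, assuming the simplest (box) uncertainty set rather than whatever set the authors use: a constraint a^T x <= b that must hold for every a in [a0 - d, a0 + d] is, for x >= 0, equivalent to (a0 + d)^T x <= b, which an ordinary LP solver handles directly. The coefficients are invented, not aircraft data.

```python
# Robust linear constraint under box uncertainty, solved as an ordinary LP.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])        # cost vector (e.g., a fuel-burn surrogate)
a0 = np.array([-3.0, -1.0])     # nominal row of the constraint a^T x <= b
d = np.array([0.3, 0.1])        # interval half-widths of the uncertainty
b = -6.0

# For x >= 0 the worst case over the box is attained at a0 + d.
res = linprog(c, A_ub=[a0 + d], b_ub=[b], bounds=[(0, None)] * 2)
print(res.x, res.fun)
```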

  2. Extending Design Science Research Methodology for a Multicultural World

    NASA Astrophysics Data System (ADS)

    Lawrence, Carl; Tuunanen, Tuure; Myers, Michael D.

    Design science research (DSR) is a relatively new approach in information systems research. A fundamental tenet of DSR is that understanding comes from creating information technology artifacts. However, with an increasingly connected and globalized world, designing IT artifacts for a multicultural world is a challenge. The purpose of this paper, therefore, is to propose extending the DSR methodology by integrating critical ethnography to the evaluation phase. Critical ethnography provides a way for IS researchers using DSR to better understand culture, and may help to ensure that IT artifacts are designed for a variety of cultural contexts.

  3. Progressive failure methodologies for predicting residual strength and life of laminated composites

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Allen, David H.; Obrien, T. Kevin

    1991-01-01

    Two progressive failure methodologies currently under development by the Mechanics of Materials Branch at NASA Langley Research Center are discussed. The damage tolerance/fail safety methodology developed by O'Brien is an engineering approach to ensuring adequate durability and damage tolerance by treating only delamination onset and the subsequent delamination accumulation through the laminate thickness. The continuum damage model developed by Allen and Harris employs continuum damage laws to predict laminate strength and life. The philosophy, mechanics framework, and current implementation status of each methodology are presented.

  4. Yakima Hatchery Experimental Design : Annual Progress Report.

    SciTech Connect

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of the Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by the planning efforts of various research and quality control teams of the project that are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  5. An automated methodology development. [software design for combat simulation

    NASA Technical Reports Server (NTRS)

    Hawley, L. R.

    1985-01-01

    The design methodology employed in testing the applicability of Ada in large-scale combat simulations is described. Ada was considered as a substitute for FORTRAN to lower life cycle costs and ease the program development effort. An object-oriented approach was taken, which featured definitions of military targets, the capability of manipulating their condition in real time, and one-to-one correlation between the object states and real-world states. The simulation design process was automated by the problem statement language (PSL)/problem statement analyzer (PSA). The PSL/PSA system accessed the problem data base directly to enhance code efficiency by, e.g., eliminating unused subroutines, and provided for automated report generation, besides allowing for functional and interface descriptions. The ways in which the methodology satisfied the responsiveness, reliability, transportability, modifiability, timeliness and efficiency goals are discussed.

  6. Development of a Design Methodology for Reconfigurable Flight Control Systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.; McLean, C.

    2000-01-01

    A methodology is presented for the design of flight control systems that exhibit stability and performance-robustness in the presence of actuator failures. The design is based upon two elements. The first element consists of a control law that will ensure at least stability in the presence of a class of actuator failures. This law is created by inner-loop, reduced-order, linear dynamic inversion, and outer-loop compensation based upon Quantitative Feedback Theory. The second element consists of adaptive compensators obtained from simple and approximate time-domain identification of the dynamics of the 'effective vehicle' with failed actuator(s). An example involving the lateral-directional control of a fighter aircraft is employed both to introduce the proposed methodology and to demonstrate its effectiveness and limitations.
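    The inner-loop element can be illustrated for a scalar plant: with dynamics xdot = f(x) + g(x)u, the inversion u = (v - f(x)) / g(x) reduces the loop to xdot = v, after which a simple outer loop shapes v. The plant functions and gains below are invented, not the paper's aircraft model, and the actual method's reduced-order inversion and QFT compensation are not reproduced here.

```python
# Scalar dynamic-inversion inner loop with a proportional outer loop (toy model).
def f(x):  return -0.5 * x + 0.2 * x**3      # assumed plant drift
def g(x):  return 1.0 + 0.1 * x**2           # assumed control effectiveness

def step(x, x_cmd, dt=0.01, k=4.0):
    v = k * (x_cmd - x)                      # outer-loop desired dynamics
    u = (v - f(x)) / g(x)                    # dynamic inversion
    return x + dt * (f(x) + g(x) * u), u     # Euler integration of the plant

x = 0.0
for _ in range(500):
    x, u = step(x, x_cmd=1.0)
print(f"x after 5 s ~ {x:.3f}")              # converges toward the command
```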

  7. Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.

    1997-01-01

    A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.
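    A ply-level failure check in the spirit of the analysis above can be sketched as follows, using a simplified 2-D Hashin criterion and a ply-discount degradation rule; the strength values are illustrative carbon/epoxy-like numbers, and the exact criterion forms used in the paper may differ.

```python
# Simplified 2-D Hashin failure indices and ply-discount degradation (toy values).
def hashin_2d(s11, s22, t12, Xt=2000.0, Xc=1200.0, Yt=60.0, Yc=200.0, S=90.0):
    """Return failure indices; a mode is failed when its index >= 1."""
    if s11 >= 0.0:
        fiber = (s11 / Xt) ** 2 + (t12 / S) ** 2      # fiber tension
    else:
        fiber = (s11 / Xc) ** 2                       # fiber compression
    if s22 >= 0.0:
        matrix = (s22 / Yt) ** 2 + (t12 / S) ** 2     # matrix tension
    else:
        matrix = (s22 / Yc) ** 2 + (t12 / S) ** 2     # simplified compression form
    return {"fiber": fiber, "matrix": matrix}

def degrade(props, idx):
    """Ply discount: zero the stiffness associated with each failed mode."""
    E1, E2, G12 = props
    if idx["fiber"] >= 1.0:
        E1 = 0.0
    if idx["matrix"] >= 1.0:
        E2 = G12 = 0.0
    return E1, E2, G12

idx = hashin_2d(1500.0, 45.0, 60.0)      # ply stresses in MPa (assumed)
print(idx)
print(degrade((140e3, 10e3, 5e3), idx))  # E1, E2, G12 in MPa (assumed)
```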

  8. Methodology for object-oriented real-time systems analysis and design: Software engineering

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1991-01-01

    Successful application of software engineering methodologies requires an integrated analysis and design life-cycle in which the various phases flow smoothly ('seamlessly') from analysis through design to implementation. Furthermore, different analysis methodologies often lead to different structurings of the system, so that the transition from analysis to design may be awkward depending on the design methodology to be used. This is especially important when object-oriented programming is to be used for implementation and the original specification and perhaps high-level design are non-object-oriented. Two approaches to real-time systems analysis which can lead to an object-oriented design are contrasted: (1) modeling the system using structured analysis with real-time extensions, which emphasizes data and control flows, followed by the abstraction of objects where the operations or methods of the objects correspond to processes in the data flow diagrams, and then design in terms of these objects; and (2) modeling the system from the beginning as a set of naturally occurring concurrent entities (objects), each having its own time-behavior defined by a set of states and state-transition rules, and seamlessly transforming the analysis models into high-level design models. A new concept of a 'real-time systems-analysis object' is introduced and becomes the basic building block of a series of seamlessly connected models which progress from object-oriented real-time systems analysis logical models through the physical architectural models and the high-level design stages. The methodology is appropriate to the overall specification including hardware and software modules. In software modules, the systems analysis objects are transformed into software objects.

  9. A Progressive Damage Methodology for Residual Strength Predictions of Notched Composite Panels

    NASA Technical Reports Server (NTRS)

    Coats, Timothy W.; Harris, Charles E.

    1998-01-01

    The translaminate fracture behavior of carbon/epoxy structural laminates with through-penetration notches was investigated to develop a residual strength prediction methodology for composite structures. An experimental characterization of several composite materials systems revealed a fracture resistance behavior that was very similar to the R-curve behavior exhibited by ductile metals. Fractographic examinations led to the postulate that the damage growth resistance was primarily due to fractured fibers in the principal load-carrying plies being bridged by intact fibers of the adjacent plies. The load transfer associated with this bridging mechanism suggests that a progressive damage analysis methodology will be appropriate for predicting the residual strength of laminates with through-penetration notches. A progressive damage methodology developed by the authors was used to predict the initiation and growth of matrix cracks and fiber fracture. Most of the residual strength predictions for different panel widths, notch lengths, and material systems were within about 10% of the experimental failure loads.

  10. Thin Film Heat Flux Sensors: Design and Methodology

    NASA Technical Reports Server (NTRS)

    Fralick, Gustave C.; Wrbanek, John D.

    2013-01-01

    Thin film heat flux sensors, design and methodology: (1) heat flux is one of a number of parameters, together with pressure, temperature, flow, etc., of interest to engine designers and fluid dynamicists; (2) the measurement of heat flux is of interest in directly determining the cooling requirements of hot section blades and vanes; and (3) in addition, if the surface and gas temperatures are known, the measurement of heat flux provides a value for the convective heat transfer coefficient that can be compared with the value provided by CFD codes.
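    Point (3) is a one-line calculation worth making explicit: with a measured flux q and known gas and surface temperatures, Newton's law of cooling gives the convective coefficient h = q / (T_gas - T_surface). The numbers below are illustrative.

```python
# Convective heat transfer coefficient from a heat-flux measurement.
q = 5.0e5          # measured heat flux, W/m^2 (assumed)
t_gas = 1600.0     # gas temperature, K (assumed)
t_surf = 1100.0    # blade surface temperature, K (assumed)

h = q / (t_gas - t_surf)
print(f"h = {h:.0f} W/(m^2 K)")
```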

  11. Power processing methodology. [computerized design of spacecraft electric power systems

    NASA Technical Reports Server (NTRS)

    Fegley, K. A.; Hansen, I. G.; Hayden, J. H.

    1974-01-01

    Discussion of the interim results of a program to investigate the feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems. The object of the total program is to develop a flexible engineering tool which will allow the power processor designer to effectively and rapidly assess and analyze the tradeoffs available by providing, in one comprehensive program, a mathematical model, an analysis of expected performance, simulation, and a comparative evaluation with alternative designs. This requires an understanding of electrical power source characteristics and the effects of load control, protection, and total system interaction.

  12. Integrating rock mechanics issues with repository design through design process principles and methodology

    SciTech Connect

    Bieniawski, Z.T.

    1996-04-01

    A good designer needs not only knowledge for designing (technical know-how that is used to generate alternative design solutions) but also knowledge about designing (appropriate principles and a systematic methodology to follow). Concepts such as "design for manufacture" or "concurrent engineering" are widely used in industry. In the field of rock engineering, only limited attention has been paid to the design process, because the design of structures in rock masses presents unique challenges to designers as a result of the uncertainties inherent in the characterization of geologic media. However, a stage has now been reached where we are able to sufficiently characterize rock masses for engineering purposes and identify the rock mechanics issues involved, but we are still lacking engineering design principles and methodology to maximize our design performance. This paper discusses the principles and methodology of the engineering design process directed at integrating site characterization activities with design, construction and performance of an underground repository. Using the latest information from the Yucca Mountain Project on geology, rock mechanics and starter tunnel design, the current lack of integration is pointed out, and it is shown how rock mechanics issues can be effectively interwoven with repository design through a systematic design process methodology leading to improved repository performance. In essence, the design process is seen as the use of design principles within an integrating design methodology, leading to innovative problem solving. In particular, a new concept of "Design for Constructibility and Performance" is introduced. This is discussed with respect to ten rock mechanics issues identified for repository design and performance.

  13. When Playing Meets Learning: Methodological Framework for Designing Educational Games

    NASA Astrophysics Data System (ADS)

    Linek, Stephanie B.; Schwarz, Daniel; Bopp, Matthias; Albert, Dietrich

    Game-based learning builds upon the idea of using the motivational potential of video games in the educational context. Thus, the design of educational games has to address optimizing enjoyment as well as optimizing learning. Within the EC-project ELEKTRA a methodological framework for the conceptual design of educational games was developed. Thereby state-of-the-art psycho-pedagogical approaches were combined with insights of media-psychology as well as with best-practice game design. This science-based interdisciplinary approach was enriched by enclosed empirical research to answer open questions on educational game-design. Additionally, several evaluation-cycles were implemented to achieve further improvements. The psycho-pedagogical core of the methodology can be summarized by the ELEKTRA's 4Ms: Macroadaptivity, Microadaptivity, Metacognition, and Motivation. The conceptual framework is structured in eight phases which have several interconnections and feedback-cycles that enable a close interdisciplinary collaboration between game design, pedagogy, cognitive science and media psychology.

  14. Acceptance testing for PACS: from methodology to design to implementation

    NASA Astrophysics Data System (ADS)

    Liu, Brent J.; Huang, H. K.

    2004-04-01

    Acceptance Testing (AT) is a crucial step in the implementation process of a PACS within a clinical environment. AT determines whether the PACS is ready for clinical use and marks the official sign-off of the PACS product. Most PACS vendors have AT plans; however, these plans do not provide a complete and robust evaluation of the full system. In addition, different sites will have special requirements that vendor AT plans do not cover. The purpose of this paper is to introduce a protocol for AT design and present case studies of AT performed on clinical PACS. A methodology is presented that includes identifying testing components within PACS, quality assurance for both functionality and performance, and technical testing focusing on key single points-of-failure within the PACS product. Tools and resources that provide assistance in performing AT are discussed. In addition, implementation of the AT within the clinical environment and the overall implementation timeline of the PACS process are presented. Finally, case studies of actual AT of clinical PACS performed in the healthcare environment are reviewed. The methodology for designing and implementing a robust AT plan was documented and has been used in PACS acceptance tests at several sites; it can be applied to future PACS installations and used as a validation of the PACS product being acquired by radiology departments and hospitals. A robust AT plan for a PACS installation can increase both the utilization and satisfaction of a successful implementation of a PACS product that benefits both vendor and customer.

  15. Implementation of probabilistic design methodology at Tennessee State University

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chinyere

    1995-01-01

    The fact that the Deterministic Design Method no longer satisfies most design needs calls for methods that can cope with the rapid pace of technology. Advances in computer technology have reduced the rigors that normally accompany many design analysis methods that account for uncertainties in design parameters. Probabilistic Design Methodology (PDM) is beginning to make an impact in engineering design. This method is gaining more recognition in industry than in educational institutions. Some of the reasons for the limited use of PDM at the moment are that many are unaware of its potential, and most of the software developed for PDM is very recent. The central goal of the PDM project at Tennessee State University is to introduce engineering students to this method. The students participating in the project learn about PDM and the computer codes that are available to the design engineer. The software being used for this project is NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), developed under the NASA probabilistic structural analysis program. NESSUS has three different modules, which make it a very comprehensive computer code for PDM. Since this method is new to the students, its introduction into the engineering curriculum is to proceed in stages, ranging from the introduction of PDM and its software to applications. While this program is being developed for its eventual inclusion in the engineering curriculum, some graduate and undergraduate students are already carrying out projects using this method. As the students increase their understanding of PDM, they are at the same time applying it to some common design problems. The areas in which this method is currently being applied include: Design of Gears (spur and worm); Design of Brakes; Design of Heat Exchangers; Design of Helical Springs; and Design of Shock Absorbers. Some of the current results of these projects are presented.

  16. The design and methodology of premature ejaculation interventional studies

    PubMed Central

    2016-01-01

    Large well-designed clinical efficacy and safety randomized clinical trials (RCTs) are required to achieve regulatory approval of new drug treatments. The objective of this article is to make recommendations for the criteria for defining and selecting the clinical trial study population, design and efficacy outcomes measures which comprise ideal premature ejaculation (PE) interventional trial methodology. Data on clinical trial design, epidemiology, definitions, dimensions and psychological impact of PE was reviewed, critiqued and incorporated into a series of recommendations for standardisation of PE clinical trial design, outcome measures and reporting using the principles of evidence based medicine. Data from PE interventional studies are only reliable, interpretable and capable of being generalised to patients with PE, when study populations are defined by the International Society for Sexual Medicine (ISSM) multivariate definition of PE. PE intervention trials should employ a double-blind RCT methodology and include placebo control, active standard drug control, and/or dose comparison trials. Ejaculatory latency time (ELT) and subject/partner outcome measures of control, personal/partner/relationship distress and other study-specific outcome measures should be used as outcome measures. There is currently no published literature which identifies a clinically significant threshold response to intervention. The ISSM definition of PE reflects the contemporary understanding of PE and represents the state-of-the-art multi-dimensional definition of PE and is recommended as the basis of diagnosis of PE for all PE clinical trials. PMID:27652224

  17. The design and methodology of premature ejaculation interventional studies.

    PubMed

    McMahon, Chris G

    2016-08-01

    Large well-designed clinical efficacy and safety randomized clinical trials (RCTs) are required to achieve regulatory approval of new drug treatments. The objective of this article is to make recommendations for the criteria for defining and selecting the clinical trial study population, design and efficacy outcomes measures which comprise ideal premature ejaculation (PE) interventional trial methodology. Data on clinical trial design, epidemiology, definitions, dimensions and psychological impact of PE was reviewed, critiqued and incorporated into a series of recommendations for standardisation of PE clinical trial design, outcome measures and reporting using the principles of evidence based medicine. Data from PE interventional studies are only reliable, interpretable and capable of being generalised to patients with PE, when study populations are defined by the International Society for Sexual Medicine (ISSM) multivariate definition of PE. PE intervention trials should employ a double-blind RCT methodology and include placebo control, active standard drug control, and/or dose comparison trials. Ejaculatory latency time (ELT) and subject/partner outcome measures of control, personal/partner/relationship distress and other study-specific outcome measures should be used as outcome measures. There is currently no published literature which identifies a clinically significant threshold response to intervention. The ISSM definition of PE reflects the contemporary understanding of PE and represents the state-of-the-art multi-dimensional definition of PE and is recommended as the basis of diagnosis of PE for all PE clinical trials. PMID:27652224

  18. Behavioral headache research: methodologic considerations and research design alternatives.

    PubMed

    Hursey, Karl G; Rains, Jeanetta C; Penzien, Donald B; Nash, Justin M; Nicholson, Robert A

    2005-05-01

    Behavioral headache treatments have garnered solid empirical support in recent years, but there is substantial opportunity to strengthen the next generation of studies with improved methods and consistency across studies. Recently, Guidelines for Trials of Behavioral Treatments for Recurrent Headache were published to facilitate the production of high-quality research. The present article complements the guidelines with a discussion of methodologic and research design considerations. Since there is no research design that is applicable in every situation, selecting an appropriate research design is fundamental to producing meaningful results. Investigators in behavioral headache and other areas of research consider the developmental phase of the research, the principal objectives of the project, and the sources of error or alternative interpretations in selecting a design. Phases of clinical trials typically include pilot studies, efficacy studies, and effectiveness studies. These trials may be categorized as primarily pragmatic or explanatory. The most appropriate research designs for these different phases and different objectives vary on such characteristics as sample size and assignment to condition, types of control conditions, periods or frequency of measurement, and the dimensions along which comparisons are made. A research design also must fit within constraints on available resources. There are a large number of potential research designs that can be used, and considering these characteristics allows selection of appropriate research designs.

  19. Methodological Considerations in Designing and Evaluating Animal-Assisted Interventions

    PubMed Central

    Stern, Cindy; Chur-Hansen, Anna

    2013-01-01

    Simple Summary: There is a growing literature on the benefits of companion animals to human mental and physical health. Despite the literature base, these benefits are not well understood, because of flawed methodologies. This paper draws upon four systematic reviews, focusing exclusively on the use of canine-assisted interventions for older people residing in long-term care. Two guides are offered for researchers, one for qualitative research and one for quantitative studies, in order to improve the empirical basis of knowledge. Research in the area of the human-animal bond and the potential benefits that derive from it can be better promoted with the use of uniform and rigorous methodological approaches. Abstract: This paper presents a discussion of the literature on animal-assisted interventions and describes limitations surrounding current methodological quality. Benefits to human physical, psychological and social health cannot be empirically confirmed due to the methodological limitations of the existing body of research, and comparisons cannot validly be made across different studies. Without a solid research base, animal-assisted interventions will not receive recognition and acceptance as a credible alternative health care treatment. The paper draws on the work of four systematic reviews conducted over April–May 2009, with no date restrictions, focusing exclusively on the use of canine-assisted interventions for older people residing in long-term care. The reviews revealed a lack of good quality studies. Although the literature base has grown in volume since its inception, it predominantly consists of anecdotal accounts and reports. Experimental studies undertaken are often flawed in aspects of design, conduct and reporting. There are few qualitative studies available, leading to the inability to draw definitive conclusions. It is clear that due to the complexities associated with these interventions not all weaknesses can be eliminated. However, there are

  1. Aircraft conceptual design - an adaptable parametric sizing methodology

    NASA Astrophysics Data System (ADS)

    Coleman, Gary John, Jr.

    Aerospace is a maturing industry with successful and refined baselines which work well for traditional baseline missions, markets and technologies. However, when new markets (space tourism), new constraints (environmental) or new technologies (composites, natural laminar flow) emerge, the conventional solution is not necessarily best for the new situation. This begs the question: how does a design team quickly screen and compare novel solutions to conventional solutions for new aerospace challenges? The answer is rapid and flexible conceptual design parametric sizing. In the product design life-cycle, parametric sizing is the first step in screening the total vehicle in terms of mission, configuration and technology to quickly assess first-order design and mission sensitivities. During this phase, various missions and technologies are assessed, and the designer identifies design solutions of concepts and configurations to meet combinations of mission and technology. This research contributes to the state of the art in aircraft parametric sizing through (1) development of a dedicated conceptual design process and disciplinary methods library, (2) development of a novel and robust parametric sizing process based on 'best-practice' approaches found in the process and disciplinary methods library, and (3) application of the parametric sizing process to a variety of design missions (transonic, supersonic and hypersonic transports), different configurations (tail-aft, blended wing body, strut-braced wing, hypersonic blended bodies, etc.), and different technologies (composites, natural laminar flow, thrust-vectored control, etc.), in order to demonstrate the robustness of the methodology and unearth first-order design sensitivities to current and future aerospace design problems. This research demonstrates the importance of this early design step in selecting the correct combination of mission, technologies and configuration to
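    The flavor of this first sizing step can be sketched with the classic takeoff-weight fixed point: a payload requirement, a Breguet-range fuel fraction, and an assumed empty-weight fraction close the loop on gross weight. All constants below are invented transport-class placeholders, not the dissertation's methods library.

```python
# Toy parametric sizing: takeoff weight from payload, range, and weight fractions.
import math

def size_takeoff_weight(w_payload=20_000.0, range_m=6.0e6, v=230.0,
                        l_over_d=17.0, tsfc=1.6e-5):
    # Breguet fuel fraction for a jet: 1 - exp(-R * g * TSFC / (V * L/D)).
    ff = 1.0 - math.exp(-range_m * 9.81 * tsfc / (v * l_over_d))
    we_frac = 0.55                       # assumed empty-weight fraction
    w = 100_000.0                        # initial guess, kg
    # With a weight-dependent empty-weight regression this loop would
    # genuinely iterate; with a constant fraction it converges immediately.
    for _ in range(100):
        w_new = w_payload / (1.0 - ff - we_frac)
        if abs(w_new - w) < 1.0:
            return w_new
        w = w_new
    raise RuntimeError("sizing did not converge")

print(f"takeoff weight ~ {size_takeoff_weight():,.0f} kg")
```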

  2. Design Progress of the Ignitor Machine

    NASA Astrophysics Data System (ADS)

    Cucchiaro, A.; Ignitor Project Group; Orlandi, S.; Vivaldi, F.

    1999-11-01

    The design activity for the Ignitor machine has progressed in the definition of all the components making up the load assembly, and with calculations for new operational scenarios. The conceptual design of the cryostat and neutron shield has been finalized to allow hands-on interventions around the machine. The cryostat is under vacuum, and it is segmented for easy access to every part of the machine. The bottom section carries all the feedthroughs for the electrical and cooling systems. The design of the latter has been fully integrated with the overall machine assembly and satisfies engineering, manufacturing, assembly and operational requirements. All components and coils are cooled by helium gas at 30 K, whose flow is set and controlled for each component. The vacuum vessel supports have been upgraded and strengthened according to recent calculations. The vessel is locked to the C-clamps only during the pulse. A more precise calculation of the magnetic field ripple profile has been performed, as a starting point for a new estimate of the stresses in case of an accidental short circuit in the coils. The general problem of evaluating the forces on the coils following a plasma disruption is being re-analyzed. A new calculation of the flux requirements is also under way, taking into account the whole structure of the machine.

  3. A Research Methodology for Green IT Systems Based on WSR and Design Science: The Case of a Chinese Company

    NASA Astrophysics Data System (ADS)

    Zhong, Yinghong; Liu, Hongwei

    Green IT is currently a hotspot in both practice and research. Much progress has been made in green technologies. However, researchers and designers cannot simply build up a green IT system from the technological aspect alone; this is normally considered a wicked problem. This paper puts forward a research methodology for green IT systems by introducing WSR and design science. The methodology absorbs its essence from soft systems methodology and action research. It considers the research, design and building of green IT systems from a systemic perspective, which can be divided into a technological dimension, a management dimension and a human dimension. The methodology consists of seven iterated stages. Each stage is presented and followed by a case study from a Chinese company.

  4. Unified methodology for fire safety assessment and optimal design

    SciTech Connect

    Shetty, N.K.; Deaves, D.M.; Gierlinski, J.T.; Dogliani, M.

    1996-12-31

    The paper presents a unified, fully-probabilistic approach to fire safety assessment and optimal design of fire protection on offshore topside structures. The methodology has been developed by integrating Quantitative Risk Analysis (QRA) techniques with the modern methods of Structural System Reliability Analysis (SRA) and Reliability Based Design Optimization (RBDO). The integration has been achieved by using platform-specific extended event-trees which model in detail the escalation paths leading to the failure of Temporary Refuge (TR), Escape, Evacuation and Rescue (EER) systems or structural collapse of the topside. Probabilities of events for which historical data are not generally available are calculated using structural reliability methods. The optimization of fire protection is performed such that the total expected cost of the protection system and the cost of failure of the platform (loss of life, loss of asset, environmental damage) is minimized while satisfying reliability constraints.

  5. Fast underdetermined BSS architecture design methodology for real time applications.

    PubMed

    Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R

    2015-01-01

    In this paper, we propose a high speed architecture design methodology for the Under-determined Blind Source Separation (UBSS) algorithm using our recently proposed high speed Discrete Hilbert Transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real-time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute the M-point DHT, which uses the N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture are coded in VHDL for a 16-bit word length, and ASIC implementation is carried out using UMC 90-nm technology at VDD = 1 V and a 1 MHz clock frequency. The proposed architecture implementation and experimental comparison results show that the DHT design is two times faster than the state-of-the-art architecture. PMID:26737514
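    For reference, the transform the hardware computes can be sketched in software via the FFT analytic-signal construction; the paper's sub-matrix decomposition into N-point blocks is not reproduced here, and the test signal is invented.

```python
# M-point discrete Hilbert transform via the FFT analytic-signal method.
import numpy as np

def dht(x):
    """Imaginary part of the analytic signal = discrete Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.imag(np.fft.ifft(X * h))

t = np.arange(64) / 64.0
ok = np.allclose(dht(np.cos(2 * np.pi * 4 * t)),
                 np.sin(2 * np.pi * 4 * t), atol=1e-10)
print("Hilbert of cos -> sin:", ok)
```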

  6. Design Evolution and Methodology for Pumpkin Super-Pressure Balloons

    NASA Astrophysics Data System (ADS)

    Farley, Rodger

    The NASA Ultra Long Duration Balloon (ULDB) program has had many technical development issues discovered and solved along its road to success as a new vehicle. It has the promise of being a sub-satellite, a means to launch up to 2700 kg to 33.5 km altitude for 100 days from a comfortable mid-latitude launch point. Current high-lift long duration ballooning is accomplished out of Antarctica with zero-pressure balloons, which cannot cope with the rigors of diurnal cycles. The ULDB design is still evolving, the product of intense analytical effort, scaled testing, improved manufacturing, and engineering intuition. The past technical problems, in particular the s-cleft deformation, their solutions, future challenges, and the methodology of pumpkin balloon design will generally be described.

  7. Clinical Research Methodology 1: Study Designs and Methodologic Sources of Error.

    PubMed

    Sessler, Daniel I; Imrey, Peter B

    2015-10-01

    Clinical research can be categorized by the timing of data collection: retrospective or prospective. Clinical research also can be categorized by study design. In case-control studies, investigators compare previous exposures (including genetic and other personal factors, environmental influences, and medical treatments) among groups distinguished by later disease status (broadly defined to include the development of disease or response to treatment). In cohort studies, investigators compare subsequent incidences of disease among groups distinguished by one or more exposures. Comparative clinical trials are prospective cohort studies that compare treatments assigned to patients by the researchers. Most errors in clinical research findings arise from 5 largely distinguishable classes of methodologic problems: selection bias, confounding, measurement bias, reverse causation, and excessive chance variation.

  8. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 2

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chin-Yere; Onyebueke, Landon

    1996-01-01

    The structural design, or the design of machine elements, has traditionally been based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate for designing complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that is dependent on stress, stress rate, temperature, number of load cycles, and time is observed on all components subjected to complex conditions. These complex conditions introduce uncertainties; hence, the actual factor-of-safety margin remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. It may be most useful in situations where the design structures are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, the possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the Use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Sets.
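    The contrast drawn above can be made concrete for one limit state: with normally distributed load effect and capacity, the probability of non-failure follows from the reliability index beta = (mu_C - mu_L) / sqrt(sigma_C^2 + sigma_L^2), while the deterministic view reports only the central safety factor mu_C / mu_L. All numbers are illustrative.

```python
# Reliability index vs. deterministic safety factor (illustrative numbers).
from math import erf, sqrt

mu_c, sd_c = 500.0, 50.0    # capacity mean and std, e.g. MPa (assumed)
mu_l, sd_l = 300.0, 60.0    # load-effect mean and std (assumed)

beta = (mu_c - mu_l) / sqrt(sd_c**2 + sd_l**2)
p_nonfail = 0.5 * (1.0 + erf(beta / sqrt(2.0)))      # standard normal CDF
print(f"safety factor = {mu_c / mu_l:.2f}")
print(f"beta = {beta:.2f}, P(non-failure) = {p_nonfail:.5f}")
```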

  9. A design methodology for biologically inspired dry fibrillar adhesives

    NASA Astrophysics Data System (ADS)

    Aksak, Burak

    Realization of the unique aspects of gecko adhesion and incorporation of these aspects into a comprehensive design methodology are essential to enable fabrication of application-oriented gecko-inspired dry fibrillar adhesives. To address the need for such a design methodology, we propose a fibrillar adhesion model that evaluates the effect of fiber dimensions and material on the adhesive performance of fiber arrays. The fibrillar adhesion model is developed to predict the adhesive characteristics of an array of fibrillar structures, and to quantify the effect of fiber length, radius, spacing, and material. Photolithography techniques were utilized to fabricate elastomer microfiber arrays. Fibers fabricated from stiff SU-8 photoresist are used to make a flexible negative mold that facilitates fabrication of fiber arrays from various elastomers with high yield. The tips of the cylindrical fibers are modified to mushroom-like tip shapes. Adhesive strengths in excess of 100 kPa are obtained with mushroom-tipped elastomer microfibers. Vertically aligned carbon nanofibers (VACNFs) are utilized as enhanced friction materials by partially embedding them inside soft polyurethanes. Friction coefficients up to 1 were repeatedly obtained from the resulting VACNF composite structures. A novel fabrication method is used to attach poly(n-butyl acrylate) (PBA) molecular brush-like structures on the surface of polydimethylsiloxane (PDMS). These brushes are grown on unstructured PDMS and on PDMS fibers with mushroom tips. Pull-off force is enhanced by up to 7 times with PBA-brush-grafted micro-fiber arrays over an unstructured PDMS substrate. The adhesion model, initially developed for curved smooth surfaces, is extended to self-affine fractal surfaces to better reflect the adhesion performance of fiber arrays on natural surfaces. The developed adhesion model for fiber arrays is used in an optimization scheme which estimates optimal design parameters to obtain maximum adhesive strength on a given
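    One scaling argument behind such fiber-array design can be sketched with a flat-punch pull-off model: per-fiber pull-off F = sqrt(8 * pi * a^3 * E * w) combined with fiber density at fixed area fraction gives an array strength that grows as tips are made finer ("contact splitting"). The model choice and all constants are assumptions for illustration, not the thesis's fitted model.

```python
# Contact-splitting scaling for a fibrillar adhesive array (toy constants).
from math import pi, sqrt

E_star = 2.0e6   # effective modulus, Pa (soft elastomer, assumed)
w_ad = 0.05      # work of adhesion, J/m^2 (assumed)
phi = 0.4        # tip area fraction of the array (assumed)

def array_strength(a):
    """Adhesive strength (Pa) of an array of flat-punch tips of radius a (m)."""
    f_pull = sqrt(8.0 * pi * a**3 * E_star * w_ad)   # per-fiber pull-off, N
    n_per_area = phi / (pi * a**2)                   # fibers per square meter
    return f_pull * n_per_area

for a in (50e-6, 10e-6, 2e-6):
    print(f"tip radius {a*1e6:4.0f} um -> strength {array_strength(a)/1e3:6.1f} kPa")
```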

  10. A symbolic methodology to improve disassembly process design.

    PubMed

    Rios, Pedro; Blyler, Leslie; Tieman, Lisa; Stuart, Julie Ann; Grant, Ed

    2003-12-01

    Millions of end-of-life electronic components are retired annually due to the proliferation of new models and their rapid obsolescence. The recovery of resources such as plastics from these goods requires their disassembly. The time required for each disassembly and its associated cost are determined by the operator's familiarity with the product design and its complexity. Since model proliferation serves to complicate an operator's learning curve, it is worthwhile to investigate the benefits to be gained from a disassembly operator's preplanning process. Effective disassembly process design demands the application of green engineering principles, such as those developed by Anastas and Zimmerman (Environ. Sci. Technol. 2003, 37, 94A-101A), which include regard for product complexity, structural commonality, separation energy, material value, and waste prevention. This paper introduces the concept of design symbols to help the operator more efficiently survey product complexity with respect to the location and number of fasteners required to remove a structure that is common to all electronics: the housing. With a sample of 71 different computers, printers, and monitors, we demonstrate that appropriate symbols reduce the total disassembly planning time by 13.2 min. Such an improvement could well make it efficient to separate plastic that would otherwise be destined for waste-to-energy or landfill. The symbolic methodology presented may also improve Design for Recycling and Design for Maintenance and Support.

  11. A variable-gain output feedback control design methodology

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Moerder, Daniel D.; Broussard, John R.; Taylor, Deborah B.

    1989-01-01

    A digital control system design technique is developed in which the control system gain matrix varies with the plant operating point parameters. The design technique is obtained by formulating the problem as an optimal stochastic output feedback control law with variable gains. This approach provides a control theory framework within which the operating range of a control law can be significantly extended. Furthermore, the approach avoids the major shortcomings of the conventional gain-scheduling techniques. The optimal variable gain output feedback control problem is solved by embedding the Multi-Configuration Control (MCC) problem, previously solved at ICS. An algorithm to compute the optimal variable gain output feedback control gain matrices is developed. The algorithm is a modified version of the MCC algorithm improved so as to handle the large dimensionality which arises particularly in variable-gain control problems. The design methodology developed is applied to a reconfigurable aircraft control problem. A variable-gain output feedback control problem was formulated to design a flight control law for an AFTI F-16 aircraft which can automatically reconfigure its control strategy to accommodate failures in the horizontal tail control surface. Simulations of the closed-loop reconfigurable system show that the approach produces a control design which can accommodate such failures with relative ease. The technique can be applied to many other problems including sensor failure accommodation, mode switching control laws and super agility.
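
    The core idea, a feedback gain that is a continuous function of the operating-point parameters rather than a table of separately scheduled point designs, can be sketched in a few lines. The toy example below is not the MCC algorithm from the paper; the linear gain dependence, the matrices, and the parameter are all hypothetical.

    ```python
    import numpy as np

    def variable_gain(p, K0, K1):
        """Gain matrix that varies with an operating-point parameter p.

        A real design would compute K(p) by solving an optimal stochastic
        output feedback problem; the linear dependence is only illustrative.
        """
        return K0 + p * K1

    # Illustrative 2-output, 1-input gain matrices (not from the paper).
    K0 = np.array([[1.2, 0.4]])
    K1 = np.array([[0.3, -0.1]])

    y = np.array([0.05, -0.02])  # measured outputs
    for p in (0.0, 0.5, 1.0):    # e.g. normalized dynamic pressure
        u = -variable_gain(p, K0, K1) @ y
        print(f"p={p:.1f}  control u={u}")
    ```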

  12. A review and synthesis of late Pleistocene extinction modeling: progress delayed by mismatches between ecological realism, interpretation, and methodological transparency.

    PubMed

    Yule, Jeffrey V; Fournier, Robert J; Jensen, Christopher X J; Yang, Jinyan

    2014-06-01

    Late Pleistocene extinctions occurred globally over a period of about 50,000 years, primarily affecting mammals of ≥ 44 kg body mass (i.e., megafauna) first in Australia, continuing in Eurasia and, finally, in the Americas. Polarized debate about the cause(s) of the extinctions centers on the role of climate change and anthropogenic factors (especially hunting). Since the late 1960s, investigators have developed mathematical models to simulate the ecological interactions that might have contributed to the extinctions. Here, we provide an overview of the various methodologies used and conclusions reached in the modeling literature, addressing both the strengths and weaknesses of modeling as an explanatory tool. Although late Pleistocene extinction models now provide a solid foundation for viable future work, we conclude, first, that single models offer less compelling support for their respective explanatory hypotheses than many realize; second, that disparities in methodology (both in terms of model parameterization and design) prevent meaningful comparison between models and, more generally, progress from model to model in increasing our understanding of these extinctions; and third, that recent models have been presented and possibly developed without sufficient regard for the transparency of design that facilitates scientific progress. PMID:24984323

  14. Finite-element/progressive-lattice-sampling response surface methodology and application to benchmark probability quantification problems

    SciTech Connect

    Romero, V.J.; Bankston, S.D.

    1998-03-01

    Optimal response surface construction is being investigated as part of Sandia discretionary (LDRD) research into Analytic Nondeterministic Methods. The goal is to achieve an adequate representation of system behavior over the relevant parameter space of a problem with a minimum of computational and user effort. This is important in global optimization and in estimation of system probabilistic response, which are both made more viable by replacing large complex computer models with fast-running accurate and noiseless approximations. A Finite Element/Lattice Sampling (FE/LS) methodology for constructing progressively refined finite element response surfaces that reuse previous generations of samples is described here. Similar finite element implementations can be extended to N-dimensional problems and/or random fields and applied to other types of structured sampling paradigms, such as classical experimental design and Gauss, Lobatto, and Patterson sampling. Here the FE/LS model is applied in a "decoupled" Monte Carlo analysis of two sets of probability quantification test problems. The analytic test problems, spanning a large range of probabilities and very demanding failure region geometries, constitute a good testbed for comparing the performance of various nondeterministic analysis methods. In results here, FE/LS decoupled Monte Carlo analysis required orders of magnitude less computer time than direct Monte Carlo analysis, with no appreciable loss of accuracy. Thus, when arriving at probabilities or distributions by Monte Carlo, it appears to be more efficient to expend computer-model function evaluations on building a FE/LS response surface than to expend them in direct Monte Carlo sampling.
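
    The decoupled approach can be illustrated in one dimension: fit a cheap surrogate to a handful of "expensive" model evaluations, then run the Monte Carlo on the surrogate. The sketch below substitutes a global polynomial for the paper's finite-element response surface, and the stand-in model, threshold, and sample sizes are assumptions.

    ```python
    import numpy as np

    # Stand-in for an expensive finite-element model (illustrative only).
    def expensive_model(x):
        return np.sin(3 * x) + 0.5 * x**2

    # Build a cheap response surface from a small structured sample...
    x_train = np.linspace(-1, 1, 9)        # lattice-style sampling
    y_train = expensive_model(x_train)     # only 9 "expensive" evaluations
    surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=6))

    # ...then do the Monte Carlo on the surrogate ("decoupled" analysis).
    rng = np.random.default_rng(0)
    x_mc = rng.uniform(-1, 1, size=1_000_000)  # unaffordable on the real model
    p_exceed = np.mean(surrogate(x_mc) > 1.0)  # P(response exceeds a threshold)
    print(f"P(response > 1.0) = {p_exceed:.4f}")
    ```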

  15. Towards a Methodology for the Design of Multimedia Public Access Interfaces.

    ERIC Educational Resources Information Center

    Rowley, Jennifer

    1998-01-01

    Discussion of information systems methodologies that can contribute to interface design for public access systems covers: the systems life cycle; advantages of adopting information systems methodologies; soft systems methodologies; task-oriented approaches to user interface design; holistic design, the Star model, and prototyping; the…

  16. Sonic Boom Mitigation Through Aircraft Design and Adjoint Methodology

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.; Diskin, Boris; Nielsen, Eric J.

    2012-01-01

    This paper presents a novel approach to the design of the supersonic aircraft outer mold line (OML) by optimizing the A-weighted loudness of the sonic boom signature predicted on the ground. The optimization process uses the sensitivity information obtained by coupling the discrete adjoint formulations for the augmented Burgers equation and the Computational Fluid Dynamics (CFD) equations. This coupled formulation links the loudness of the ground boom signature to the aircraft geometry, thus allowing efficient shape optimization for the purpose of minimizing the loudness impact on the ground. The accuracy of the adjoint-based sensitivities is verified against sensitivities obtained using an independent complex-variable approach. The adjoint-based optimization methodology is applied to a configuration previously optimized using alternative state-of-the-art optimization methods and produces additional loudness reduction. The results of the optimizations are reported and discussed.
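
    The verification step mentioned above, checking adjoint gradients against a complex-variable approach, rests on the complex-step derivative trick, which avoids the subtractive cancellation of finite differences. A minimal sketch with an assumed stand-in objective (the real functional would be the CFD/augmented-Burgers loudness):

    ```python
    import numpy as np

    def f(x):
        # Stand-in objective; in the paper this would be ground-boom
        # loudness computed through CFD plus Burgers propagation.
        return np.exp(x) * np.sin(x)

    def complex_step_derivative(f, x, h=1e-30):
        """Derivative to machine precision: no subtractive cancellation."""
        return f(x + 1j * h).imag / h

    x0 = 0.7
    analytic = np.exp(x0) * (np.sin(x0) + np.cos(x0))
    print(complex_step_derivative(f, x0), analytic)  # the two values agree
    ```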

  17. Bond energy analysis revisited and designed toward a rigorous methodology

    NASA Astrophysics Data System (ADS)

    Nakai, Hiromi; Ohashi, Hideaki; Imamura, Yutaka; Kikuchi, Yasuaki

    2011-09-01

    The present study theoretically revisits and numerically assesses two-body energy decomposition schemes, including a newly proposed one. The new decomposition scheme is designed to make the equilibrium bond distance coincide with the minimum point of the bond energy. Although the other decomposition schemes generally predict the wrong order for the C-C bond strengths of C2H2, C2H4, and C2H6, the new decomposition scheme is capable of reproducing the correct order of the C-C bond strengths. Numerical assessment on a training set of molecules demonstrates that the present scheme exhibits a stronger correlation with bond dissociation energies than the other decomposition schemes do, which suggests that the new decomposition scheme is a reliable and powerful analysis methodology.

  18. SSME Investment in Turbomachinery Inducer Impeller Design Tools and Methodology

    NASA Technical Reports Server (NTRS)

    Zoladz, Thomas; Mitchell, William; Lunde, Kevin

    2010-01-01

    Within the rocket engine industry, SSME turbomachines are the de facto standards of success with regard to meeting aggressive performance requirements under challenging operational environments. Over the Shuttle era, SSME invested heavily in our national inducer-impeller design infrastructure. While both low- and high-pressure turbopump failure/anomaly resolution efforts spurred some of these investments, the SSME program was a major benefactor of key areas of turbomachinery inducer-impeller research outside of flight manifest pressures. Over the past several decades, key turbopump internal environments have been interrogated via highly instrumented hot-fire and cold-flow testing. Likewise, SSME has sponsored the advancement of time-accurate and cavitating inducer-impeller computational fluid dynamics (CFD) tools. These investments together have led to a better understanding of the complex internal flow fields within aggressive, high-performing inducers and impellers. New design tools and methodologies have evolved which are intended to provide confident blade designs that strike an appropriate balance between performance and self-induced load management.

  19. A novel methodology for building robust design rules by using design based metrology (DBM)

    NASA Astrophysics Data System (ADS)

    Lee, Myeongdong; Choi, Seiryung; Choi, Jinwoo; Kim, Jeahyun; Sung, Hyunju; Yeo, Hyunyoung; Shim, Myoungseob; Jin, Gyoyoung; Chung, Eunseung; Roh, Yonghan

    2013-03-01

    This paper addresses a methodology for building robust design rules by using design based metrology (DBM). The conventional method for building design rules has been to use a simulation tool and a simple-pattern spider mask. At the early stage of device development, the simulation tool's estimates are poor, and the evaluation of the simple-pattern spider mask is rather subjective because it depends on the experiential judgment of an engineer. In this work, we designed a large number of pattern situations, including various 1D and 2D design structures. To overcome the difficulty of inspecting many types of patterns, we introduced the Design Based Metrology (DBM) of Nano Geometry Research, Inc., with which the mass patterns could be inspected quickly. We also carried out quantitative analysis on PWQ silicon data to estimate process variability. Our methodology demonstrates high speed and accuracy in building design rules. All of the test patterns were inspected within a few hours. The mass silicon data were handled not by personal judgment but by statistical processing. From the results, robust design rules were successfully verified and extracted. Finally, we found that our methodology is appropriate for building robust design rules.

  20. An NAFP Project: Use of Object Oriented Methodologies and Design Patterns to Refactor Software Design

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Baggs, Rhoda

    2007-01-01

    In the early problem-solution era of software programming, functional decompositions were mainly used to design and implement software solutions. In functional decompositions, functions and data are introduced as two separate entities during the design phase and are treated as such in the implementation phase. Functional decompositions make use of refactoring through optimizing the algorithms, grouping similar functionalities into common reusable functions, and using abstract representations of data where possible; all these are done during the implementation phase. This paper advocates the use of object-oriented methodologies and design patterns as the centerpieces of refactoring software solutions. Refactoring software is a method of changing a software design while explicitly preserving its external functionality. The combined use of object-oriented methodologies and design patterns to refactor should also benefit overall software life-cycle cost through improved software.
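
    To make the idea concrete, the toy sketch below refactors a branching function-plus-data design into a Strategy pattern while preserving external behavior; the example and its names are hypothetical, not drawn from the paper.

    ```python
    from abc import ABC, abstractmethod

    # Functional decomposition: adding a new report format means editing
    # this function and every branch like it.
    def render_report(data: list[int], fmt: str) -> str:
        if fmt == "csv":
            return ",".join(map(str, data))
        if fmt == "json":
            return "[" + ", ".join(map(str, data)) + "]"
        raise ValueError(fmt)

    # Refactored with the Strategy pattern: external behavior is preserved,
    # but new formats are added by subclassing instead of editing branches.
    class Renderer(ABC):
        @abstractmethod
        def render(self, data: list[int]) -> str: ...

    class CsvRenderer(Renderer):
        def render(self, data: list[int]) -> str:
            return ",".join(map(str, data))

    class JsonRenderer(Renderer):
        def render(self, data: list[int]) -> str:
            return "[" + ", ".join(map(str, data)) + "]"

    # External functionality is unchanged by the refactoring:
    print(render_report([1, 2, 3], "csv") == CsvRenderer().render([1, 2, 3]))
    ```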

  1. A Progressive Damage Methodology for Residual Strength Predictions of Center-Crack Tension Composite Panels

    NASA Technical Reports Server (NTRS)

    Coats, Timothy William

    1996-01-01

    An investigation of translaminate fracture and a progressive damage methodology was conducted to evaluate and develop a residual strength prediction capability for laminated composites with through-penetration notches. This is relevant to the damage tolerance of an aircraft fuselage that might suffer an in-flight accident such as an uncontained engine failure. An experimental characterization of several composite material systems revealed an R-curve type of behavior. Fractographic examinations led to the postulate that this crack growth resistance could be due to fiber bridging, defined here as fractured fibers of one ply bridged by intact fibers of an adjacent ply. The progressive damage methodology is currently capable of predicting the initiation and growth of matrix cracks and fiber fracture. Using two different fiber failure criteria, residual strength was predicted for different panel widths and notch lengths. A ply-discount fiber failure criterion yielded extremely conservative results, while an elastic-perfectly plastic fiber failure criterion showed that the fiber bridging concept is valid for predicting residual strength for tensile-dominated failure loads. Furthermore, the R-curves predicted by the model using the elastic-perfectly plastic fiber criterion compared very well with the experimental R-curves.

  2. Arab Teens Lifestyle Study (ATLS): objectives, design, methodology and implications

    PubMed Central

    Al-Hazzaa, Hazzaa M; Musaiger, Abdulrahman O

    2011-01-01

    Background There is a lack of comparable data on physical activity, sedentary behavior, and dietary habits among Arab adolescents, which limits our understanding and interpretation of the relationship between obesity and lifestyle parameters. Therefore, we initiated the Arab Teens Lifestyle Study (ATLS). The ATLS is a multicenter collaborative project for assessing lifestyle habits of Arab adolescents. The objectives of the ATLS project were to investigate the prevalence rates for overweight and obesity, physical activity, sedentary activity and dietary habits among Arab adolescents, and to examine the interrelationships between these lifestyle variables. This paper reports on the objectives, design, methodology, and implications of the ATLS. Design/Methods The ATLS is a school-based cross-sectional study involving 9182 randomly selected secondary-school students (14–19 years) from major Arab cities, using a multistage stratified sampling technique. The participating Arab cities included Riyadh, Jeddah, and Al-Khobar (Saudi Arabia), Bahrain, Dubai (United Arab Emirates), Kuwait, Amman (Jordan), Mosul (Iraq), Muscat (Oman), Tunisia (Tunisia) and Kenitra (Morocco). Measured variables included anthropometric measurements, physical activity, sedentary behavior, sleep duration, and dietary habits. Discussion The ATLS project will provide a unique opportunity to collect and analyze important lifestyle information from Arab adolescents using standardized procedures. This is the first time a collaborative Arab project will simultaneously assess broad lifestyle variables in a large sample of adolescents from numerous urbanized Arab regions. This joint research project will supply us with comprehensive and recent data on physical activity/inactivity and eating habits of Arab adolescents relative to obesity. Such invaluable lifestyle-related data are crucial for developing public health policies and regional strategies for health promotion and disease prevention. PMID

  3. Multidisciplinary design and optimization (MDO) methodology for the aircraft conceptual design

    NASA Astrophysics Data System (ADS)

    Iqbal, Liaquat Ullah

    An integrated design and optimization methodology has been developed for the conceptual design of an aircraft. The methodology brings higher-fidelity Computer Aided Design, Engineering and Manufacturing (CAD, CAE and CAM) tools such as CATIA, FLUENT, ANSYS and SURFCAM into the conceptual design by utilizing Excel as the integrator and controller. The approach is demonstrated to integrate with many of the existing low- to medium-fidelity codes, such as the aerodynamic panel code CMARC and sizing and constraint analysis codes, thus providing multi-fidelity capabilities to the aircraft designer. The higher-fidelity design information from the CAD and CAE tools for the geometry, aerodynamics, structural and environmental performance is provided for the application of structured design methods such as Quality Function Deployment (QFD) and Pugh's Method. The higher-fidelity tools bring the quantitative aspects of a design, such as precise measurements of weight, volume, surface areas, center of gravity (CG) location, lift-over-drag ratio, and structural weight, as well as the qualitative aspects, such as external geometry definition, internal layout, and coloring scheme, early into the design process. The performance and safety risks involved with new technologies can be reduced by modeling and assessing their impact on the performance of the aircraft more accurately. The methodology also enables the design and evaluation of novel concepts such as the blended wing body (BWB) and the hybrid wing body (HWB). Higher-fidelity computational fluid dynamics (CFD) and finite element analysis (FEA) allow verification of the claims for performance gains in aerodynamics and assessment of the risks of structural failure due to the different pressure distribution in the fuselage as compared with the tube-and-wing design. The higher-fidelity aerodynamics and structural models can also lead to better cost estimates that help reduce the financial risks. This helps in

  4. A combined stochastic feedforward and feedback control design methodology with application to autoland design

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1987-01-01

    A combined stochastic feedforward and feedback control design methodology was developed. The objective of the feedforward control law is to track the commanded trajectory, whereas the feedback control law tries to maintain the plant state near the desired trajectory in the presence of disturbances and uncertainties about the plant. The feedforward control law design is formulated as a stochastic optimization problem and is embedded into the stochastic output feedback problem where the plant contains unstable and uncontrollable modes. An algorithm to compute the optimal feedforward is developed. In this approach, the use of error integral feedback, dynamic compensation, and control rate command structures is an integral part of the methodology. An incremental implementation is recommended. Results on the eigenvalues of the implemented versus designed control laws are presented. The stochastic feedforward/feedback control methodology is used to design a digital automatic landing system for the ATOPS Research Vehicle, a Boeing 737-100 aircraft. The system control modes include localizer and glideslope capture and track, and flare to touchdown. Results of a detailed nonlinear simulation of the digital control laws, actuator systems, and aircraft aerodynamics are presented.
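
    The division of labor described above can be shown with a scalar discrete-time toy: a feedforward term computed from the commanded trajectory, plus feedback (including an error integral) that rejects disturbances. The plant, gains, and noise level are invented for illustration and are unrelated to the ATOPS design.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Scalar plant x[k+1] = a*x[k] + b*u[k] + disturbance.
    a, b = 0.95, 0.1
    k_fb, k_i = 2.0, 0.05     # feedback and error-integral gains (illustrative)
    x, xi, r = 0.0, 0.0, 1.0  # state, error integral, commanded trajectory

    for _ in range(100):
        u_ff = (1 - a) * r / b          # feedforward: holds x = r in steady state
        e = r - x
        xi += e                         # error integral feedback
        u = u_ff + k_fb * e + k_i * xi  # combined feedforward + feedback
        x = a * x + b * u + rng.normal(0.0, 0.01)

    print(f"final tracking error: {r - x:.4f}")
    ```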

  5. Community-wide assessment of protein-interface modeling suggests improvements to design methodology.

    PubMed

    Fleishman, Sarel J; Whitehead, Timothy A; Strauch, Eva-Maria; Corn, Jacob E; Qin, Sanbo; Zhou, Huan-Xiang; Mitchell, Julie C; Demerdash, Omar N A; Takeda-Shitaka, Mayuko; Terashi, Genki; Moal, Iain H; Li, Xiaofan; Bates, Paul A; Zacharias, Martin; Park, Hahnbeom; Ko, Jun-su; Lee, Hasup; Seok, Chaok; Bourquard, Thomas; Bernauer, Julie; Poupon, Anne; Azé, Jérôme; Soner, Seren; Ovali, Sefik Kerem; Ozbek, Pemra; Tal, Nir Ben; Haliloglu, Türkan; Hwang, Howook; Vreven, Thom; Pierce, Brian G; Weng, Zhiping; Pérez-Cano, Laura; Pons, Carles; Fernández-Recio, Juan; Jiang, Fan; Yang, Feng; Gong, Xinqi; Cao, Libin; Xu, Xianjin; Liu, Bin; Wang, Panwen; Li, Chunhua; Wang, Cunxin; Robert, Charles H; Guharoy, Mainak; Liu, Shiyong; Huang, Yangyu; Li, Lin; Guo, Dachuan; Chen, Ying; Xiao, Yi; London, Nir; Itzhaki, Zohar; Schueler-Furman, Ora; Inbar, Yuval; Potapov, Vladimir; Cohen, Mati; Schreiber, Gideon; Tsuchiya, Yuko; Kanamori, Eiji; Standley, Daron M; Nakamura, Haruki; Kinoshita, Kengo; Driggers, Camden M; Hall, Robert G; Morgan, Jessica L; Hsu, Victor L; Zhan, Jian; Yang, Yuedong; Zhou, Yaoqi; Kastritis, Panagiotis L; Bonvin, Alexandre M J J; Zhang, Weiyi; Camacho, Carlos J; Kilambi, Krishna P; Sircar, Aroop; Gray, Jeffrey J; Ohue, Masahito; Uchikoga, Nobuyuki; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka; Khashan, Raed; Bush, Stephen; Fouches, Denis; Tropsha, Alexander; Esquivel-Rodríguez, Juan; Kihara, Daisuke; Stranges, P Benjamin; Jacak, Ron; Kuhlman, Brian; Huang, Sheng-You; Zou, Xiaoqin; Wodak, Shoshana J; Janin, Joel; Baker, David

    2011-11-25

    The CAPRI (Critical Assessment of Predicted Interactions) and CASP (Critical Assessment of protein Structure Prediction) experiments have demonstrated the power of community-wide tests of methodology in assessing the current state of the art and spurring progress in the very challenging areas of protein docking and structure prediction. We sought to bring the power of community-wide experiments to bear on a very challenging protein design problem that provides a complementary but equally fundamental test of current understanding of protein-binding thermodynamics. We have generated a number of designed protein-protein interfaces with very favorable computed binding energies but which do not appear to be formed in experiments, suggesting that there may be important physical chemistry missing in the energy calculations. A total of 28 research groups took up the challenge of determining what is missing: we provided structures of 87 designed complexes and 120 naturally occurring complexes and asked participants to identify energetic contributions and/or structural features that distinguish between the two sets. The community found that electrostatics and solvation terms partially distinguish the designs from the natural complexes, largely due to the nonpolar character of the designed interactions. Beyond this polarity difference, the community found that the designed binding surfaces were, on average, structurally less embedded in the designed monomers, suggesting that backbone conformational rigidity at the designed surface is important for realization of the designed function. These results can be used to improve computational design strategies, but there is still much to be learned; for example, one designed complex, which does form in experiments, was classified by all metrics as a nonbinder.

  6. A Synergy between the Technological Process and a Methodology for Web Design: Implications for Technological Problem Solving and Design

    ERIC Educational Resources Information Center

    Jakovljevic, Maria; Ankiewicz, Piet; De swardt, Estelle; Gross, Elna

    2004-01-01

    Traditional instructional methodology in the Information System Design (ISD) environment lacks explicit strategies for promoting the cognitive skills of prospective system designers. This contributes to the fragmented knowledge and low motivational and creative involvement of learners in system design tasks. In addition, present ISD methodologies,…

  7. A rational design change methodology based on experimental and analytical modal analysis

    SciTech Connect

    Weinacht, D.J.; Bennett, J.G.

    1993-08-01

    A design methodology that integrates analytical modeling and experimental characterization is presented. This methodology represents a powerful tool for making rational design decisions and changes. An example of its implementation in the design, analysis, and testing of a precision machine tool support structure is given.

  8. A mechanics framework for a progressive failure methodology for laminated composites

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Allen, David H.; Lo, David C.

    1989-01-01

    A laminate strength and life prediction methodology has been postulated for laminated composites which accounts for the progressive development of microstructural damage up to structural failure. A damage-dependent constitutive model predicts, in an average sense, the stress redistribution that accompanies damage development in laminates. Each mode of microstructural damage is represented by a second-order tensor-valued internal state variable, which is a strain-like quantity. The mechanics framework, together with the global-local strategy for predicting laminate strength and life, is presented in the paper. The kinematic effects of damage are represented by effective engineering moduli in the global analysis, and the results of the global analysis provide the boundary conditions for the local ply-level stress analysis. Damage evolution laws are based on experimental results.

  9. A performance-oriented power transformer design methodology using multi-objective evolutionary optimization.

    PubMed

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2015-05-01

    Transformers are regarded as crucial components in power systems. Due to market globalization, power transformer manufacturers are facing an increasingly competitive environment that mandates the adoption of design strategies yielding better performance at lower costs. In this paper, a power transformer design methodology using multi-objective evolutionary optimization is proposed. Using this methodology, which is tailored toward target-performance-oriented design, quick rough estimates of transformer design specifics may be inferred. Testing of the suggested approach revealed significant qualitative and quantitative agreement with measured design and performance values. Details of the proposed methodology as well as sample design results are reported in the paper.
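
    A bare-bones flavor of the multi-objective evolutionary search is sketched below: mutate candidate designs and retain the nondominated set as an approximation of the Pareto front. The two design variables and the core-loss/cost surrogates are invented for illustration and have no connection to the paper's transformer model.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy two-objective transformer trade-off (illustrative physics only):
    # design variables are a normalized core section and winding fraction.
    def objectives(x):
        core, turns = x
        loss = 1.0 / (0.2 + core) + (turns - 0.5) ** 2  # core-loss surrogate
        cost = 1.0 + 3.0 * core + 0.5 * turns           # material-cost surrogate
        return np.array([loss, cost])

    def nondominated(F):
        return [i for i, fi in enumerate(F)
                if not any(np.all(fj <= fi) and np.any(fj < fi) for fj in F)]

    # Crude evolutionary loop: mutate a parent, keep the nondominated set.
    pop = rng.uniform(0, 1, size=(20, 2))
    for _ in range(50):
        parent = pop[rng.integers(len(pop))]
        child = np.clip(parent + rng.normal(0, 0.05, size=2), 0, 1)
        pop = np.vstack([pop, child])
        pop = pop[nondominated([objectives(x) for x in pop])]

    print(f"{len(pop)} designs approximate the Pareto front")
    ```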

  10. A performance-oriented power transformer design methodology using multi-objective evolutionary optimization

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2014-01-01

    Transformers are regarded as crucial components in power systems. Due to market globalization, power transformer manufacturers are facing an increasingly competitive environment that mandates the adoption of design strategies yielding better performance at lower costs. In this paper, a power transformer design methodology using multi-objective evolutionary optimization is proposed. Using this methodology, which is tailored toward target-performance-oriented design, quick rough estimates of transformer design specifics may be inferred. Testing of the suggested approach revealed significant qualitative and quantitative agreement with measured design and performance values. Details of the proposed methodology as well as sample design results are reported in the paper. PMID:26257939

  11. The Geothermal Progress Monitor: Design and Implementation

    SciTech Connect

    Entingh, D.J.; Lopez, A.F.; Neham, E.A.

    1981-02-01

    The Geothermal Progress Monitor (GPM) is an information system that links the various elements of the public and private sectors of the geothermal industry. The monitoring effort emphasizes the identification and analysis of indicators of what the main participants in geothermal energy utilization (field developers, energy users, and government agencies) are doing to foster the discovery, confirmation, and use of this resource. The major indicators considered both important and measurable are leasing activities, drilling efforts, feasibility studies, construction plans and progress, costs of installations, levels of investment, environmental study and regulatory activities, legislative status and changes, and government monetary investments in projects and activities. The GPM is unique in that it is a network, a process, a project staff, and a product. As a process, the GPM identifies, acquires, stores, tabulates, analyzes, and reports on the information obtained through its network structure. The GPM project staff maintains the other aspects of the GPM and, in particular, produces pertinent analyses and responds to queries by providing information or directing requestors to the appropriate sources. Finally, the GPM is a periodic report which summarizes activities, status, and trends in the geothermal industry.

  12. Design methodology of the strength properties of medical knitted meshes

    NASA Astrophysics Data System (ADS)

    Mikołajczyk, Z.; Walkowska, A.

    2016-07-01

    One of the most important utility properties of medical knitted meshes intended for hernia and urological treatment is their bidirectional strength along the courses and wales. The value of this parameter expected by manufacturers and surgeons is estimated at 100 N per 5 cm of sample width. Most frequently, these meshes are produced on the basis of single- or double-guide stitches. They are made of polypropylene and polyester monofilament yarns with diameters in the range of 0.6 to 1.2 mm, characterized by high medical purity. The aim of the study was to develop a design methodology for mesh strength based on the geometrical construction of the stitch and the strength of the yarn. In the ProCAD warpknit 5 software environment, the stretching process of the meshes was simulated and the accompanying changes in their geometry were analyzed. Simulations were made for four selected representative stitches. The real parameters of the mesh loop geometry were measured both on a purpose-built measuring stand and on a tensile testing machine. A model of the mechanical stretching of warp-knitted meshes along the courses and wales was developed. The thesis was advanced that the force that breaks a loop of warp-knitted fabric is the lowest of the breaking forces of the yarns linking the loop or forming its straight sections. This thesis was associated with the "weakest link" theory of strength. Experimental verification of the model was carried out for the basic structure of the single-guide mesh. It was shown that the real relative strength of the mesh related to one course is equal to the strength of yarn breakage in a loop, while the strength along the wales is close to the breaking strength of a single yarn. In relation to the specific construction of the medical mesh, based on knowledge of the loop structure density, the a-jour mesh geometry, and the yarn strength, it is possible, with high
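
    The weakest-link argument above reduces to taking the minimum breaking force over a loop's constituent yarn segments. The sketch below uses entirely hypothetical forces and loop density, purely to show the arithmetic against the 100 N per 5 cm target.

    ```python
    # Breaking forces of the yarn segments forming one loop (hypothetical, N).
    loop_segments_N = {
        "link yarn A": 11.8,
        "link yarn B": 12.4,
        "straight section": 10.9,
    }

    loop_strength = min(loop_segments_N.values())  # "weakest link" estimate
    loops_per_5cm = 12                             # loop density across width (hypothetical)
    mesh_strength = loop_strength * loops_per_5cm

    print(f"loop strength = {loop_strength} N; "
          f"mesh strength = {mesh_strength:.0f} N per 5 cm "
          f"({'meets' if mesh_strength >= 100 else 'fails'} the 100 N target)")
    ```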

  13. Automating the design process - Progress, problems, prospects, potential.

    NASA Technical Reports Server (NTRS)

    Heldenfels, R. R.

    1973-01-01

    The design process for large aerospace vehicles is discussed, with particular emphasis on structural design. Problems with current procedures are identified. Then, the contributions possible from automating the design process (defined as the best combination of men and computers) are considered. Progress toward automated design in the aerospace and other communities is reviewed, including NASA studies of the potential development of Integrated Programs for Aerospace-Vehicle Design (IPAD). The need for and suggested directions of future research on the design process, both technical and social, are discussed. Although much progress has been made to exploit the computer in design, it is concluded that technology is available to begin using the computer to speed communications and management as well as calculations in the design process and thus build man-computer teams that can design better, faster and cheaper.

  14. A Formal Semantics for the SRI Hierarchical Program Design Methodology

    NASA Technical Reports Server (NTRS)

    Boyer, R. S.; Moore, J. S.

    1983-01-01

    A formal statement of what it means to use (a subset of) the methodology is presented. It is formally defined what it means to say that some specified module exists and that another module is correctly implemented on top of it. No attention is given to motivation, either of the methodology or of its formal development. Concentration is entirely upon mathematical succinctness and precision. A discussion is presented of how to use certain INTERLISP programs which implement the formal definitions. Among these is a program which generates Floyd-like verification conditions sufficient to imply the correctness of a module implementation.

  15. Project Icarus: Progress Report on Technical Developments and Design Considerations

    NASA Astrophysics Data System (ADS)

    Obousy, R. K.; Tziolas, A. C.; Long, K. F.; Galea, P.; Crowl, A.; Crawford, I. A.; Swinney, R.; Hein, A.; Osborne, R.; Reiss, P.

    Project Icarus is a theoretical design study of an interstellar spacecraft that is the successor to the 1970s Project Daedalus. This paper summarises some of the technical progress that has occurred since its launch in September 2009 and discusses each of the twenty research modules that define the project, encompassing all the major spacecraft systems. A number of options are currently available for the design configuration and mission profile, and these are discussed prior to entering Phase IV of the design study, which begins the process of down-selecting design options. This paper represents a progress report on Project Icarus and is a submission of the Project Icarus Study Group.

  16. Progress Toward Improved Compact Stellarator Designs

    NASA Astrophysics Data System (ADS)

    Neilson, G. H.; Brown, T.; Gates, D.; Ku, L. P.; Lazerson, S.; Pomphrey, N.; Reiman, A.; Zarnstorff, M.; Bromberg, L.; Boozer, A.; Harris, J.

    2010-11-01

    Stellarators offer robust physics solutions for MFE challenges (steady-state operation, disruption elimination, and high-density operation) but require design improvements to overcome technical risks in the construction and maintenance of future large-scale stellarators. Using the ARIES-CS design (aspect ratio 4.56) as a starting point, compact stellarator designs with improved maintenance characteristics have been developed. By making the outboard legs of the main magnetic field coils nearly straight and parallel, a sector maintenance scheme compatible with high availability becomes possible. Approaches that can allow the main coil requirements to be relaxed in this way are: 1) increase the aspect ratio at the expense of compactness, 2) add local removable coils in the maintenance ports for plasma shaping, and 3) use passive conducting tiles made of bulk high-temperature superconducting material to help shape the magnetic field. Such tiles would be arranged on a shaped, segmented internal support structure behind the shield.

  17. Behavioral Methodology for Designing and Evaluating Applied Programs for Women.

    ERIC Educational Resources Information Center

    Thurston, Linda P.

    To be maximally effective in solving problems, researchers must place their methodological and theoretical models of science within social and political contexts. They must become aware of biases and assumptions and move toward a more valid perception of social realities. Psychologists must view women in the situational context within which…

  18. Peer Review of a Formal Verification/Design Proof Methodology

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The role of formal verification techniques in system validation was examined. The value and the state of the art of performance proving for fault-tolerant computers were assessed. The investigation, development, and evaluation of performance-proving tools were reviewed. The technical issues related to proof methodologies are examined and summarized.

  19. Integrated Controls-Structures Design Methodology for Flexible Spacecraft

    NASA Technical Reports Server (NTRS)

    Maghami, P. G.; Joshi, S. M.; Price, D. B.

    1995-01-01

    This paper proposes an approach for the design of flexible spacecraft, wherein the structural design and the control system design are performed simultaneously. The integrated design problem is posed as an optimization problem in which both the structural parameters and the control system parameters constitute the design variables, which are used to optimize a common objective function, thereby resulting in an optimal overall design. The approach is demonstrated by application to the integrated design of a geostationary platform, and to a ground-based flexible structure experiment. The numerical results obtained indicate that the integrated design approach generally yields spacecraft designs that are substantially superior to the conventional approach, wherein the structural design and control design are performed sequentially.

  20. Methodology for designing accelerated aging tests for predicting life of photovoltaic arrays

    NASA Technical Reports Server (NTRS)

    Gaines, G. B.; Thomas, R. E.; Derringer, G. C.; Kistler, C. W.; Bigg, D. M.; Carmichael, D. C.

    1977-01-01

    A methodology for designing aging tests in which life prediction was paramount was developed. The methodology builds upon experience with regard to aging behavior in those material classes which are expected to be utilized as encapsulant elements, viz., glasses and polymers, and upon experience with the design of aging tests. The experiences were reviewed, and results are discussed in detail.

  1. Methodology of Computer-Aided Design of Variable Guide Vanes of Aircraft Engines

    ERIC Educational Resources Information Center

    Falaleev, Sergei V.; Melentjev, Vladimir S.; Gvozdev, Alexander S.

    2016-01-01

    The paper presents a methodology which helps to avoid a great amount of costly experimental research. This methodology includes thermo-gas dynamic design of an engine and its mounts, the profiling of compressor flow path and cascade design of guide vanes. Employing a method elaborated by Howell, we provide a theoretical solution to the task of…

  2. Progress in aircraft design since 1903

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Significant developments in aviation history are documented to show the advancements in aircraft design which have taken place since 1903. Each aircraft is identified according to the manufacturer, powerplant, dimensions, normal weight, and typical performance. A narrative summary of the major accomplishments of the aircraft is provided. Photographs of each aircraft are included.

  3. Educational Design Research: Signs of Progress

    ERIC Educational Resources Information Center

    Reeves, Thomas C.

    2015-01-01

    This special issue of the "Australasian Journal of Educational Technology" includes an introductory article by the guest editors and six papers that illustrate the potential of educational design research (EDR) to address important problems in higher education. In this final paper, reflections on the papers are made. Then the rationale…

  4. Hyperbolic tangential function-based progressive addition lens design.

    PubMed

    Qiu, Gufeng; Cui, Xudong

    2015-12-10

    The diopter distribution is key to the successful design of a progressive addition lens. A hyperbolic tangential function is introduced to describe the desired diopter distribution on the lens. Simulation and fabrication show that the astigmatism over the whole surface is very close to the addition, exhibiting performance superior to that of the currently used high-order polynomials and cosine functions. Our investigations found that once the diopter distribution design is reasonable, both the direct and indirect methods of constructing a progressive addition lens give consistent results. With this function we are able to effectively control the design of the critical areas: the position and sizes of the far-view and near-view zones, as well as the channel of the lens. This study would provide an efficient way to customize different progressive lenses not only for presbyopia, but also for anti-fatigue and office progressive usages. PMID:26836863
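
    The role of the hyperbolic tangent profile can be seen in a few lines: it ramps the addition power smoothly from the far-view zone to the near-view zone along the lens corridor. The addition, corridor position, and width below are illustrative stand-ins, not the paper's design values.

    ```python
    import numpy as np

    def addition_power(y, add=2.0, y0=0.0, width=6.0):
        """Power ramp (diopters) along the corridor coordinate y (mm).

        Far zone (y >> y0) tends to 0 D; near zone (y << y0) tends to `add`.
        """
        return 0.5 * add * (1.0 + np.tanh((y0 - y) / width))

    for y in (-15, -5, 0, 5, 15):  # mm below/above the fitting point
        print(f"y={y:+3d} mm  power={addition_power(y):.2f} D")
    ```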

  5. Game Methodology for Design Methods and Tools Selection

    ERIC Educational Resources Information Center

    Ahmad, Rafiq; Lahonde, Nathalie; Omhover, Jean-françois

    2014-01-01

    Design process optimisation and intelligence are the key words of today's scientific community. A proliferation of methods has made design a convoluted area. Designers are usually afraid of selecting one method/tool over another and even expert designers may not necessarily know which method is the best to use in which circumstances. This…

  6. Progress in material design for biomedical applications.

    PubMed

    Tibbitt, Mark W; Rodell, Christopher B; Burdick, Jason A; Anseth, Kristi S

    2015-11-24

    Biomaterials that interface with biological systems are used to deliver drugs safely and efficiently; to prevent, detect, and treat disease; to assist the body as it heals; and to engineer functional tissues outside of the body for organ replacement. The field has evolved beyond selecting materials that were originally designed for other applications with a primary focus on properties that enabled restoration of function and mitigation of acute pathology. Biomaterials are now designed rationally with controlled structure and dynamic functionality to integrate with biological complexity and perform tailored, high-level functions in the body. The transition has been from permissive to promoting biomaterials that are no longer bioinert but bioactive. This perspective surveys recent developments in the field of polymeric and soft biomaterials with a specific emphasis on advances in nano- to macroscale control, static to dynamic functionality, and biocomplex materials.

  7. Progress in material design for biomedical applications

    PubMed Central

    Tibbitt, Mark W.; Rodell, Christopher B.; Burdick, Jason A.; Anseth, Kristi S.

    2015-01-01

    Biomaterials that interface with biological systems are used to deliver drugs safely and efficiently; to prevent, detect, and treat disease; to assist the body as it heals; and to engineer functional tissues outside of the body for organ replacement. The field has evolved beyond selecting materials that were originally designed for other applications with a primary focus on properties that enabled restoration of function and mitigation of acute pathology. Biomaterials are now designed rationally with controlled structure and dynamic functionality to integrate with biological complexity and perform tailored, high-level functions in the body. The transition has been from permissive to promoting biomaterials that are no longer bioinert but bioactive. This perspective surveys recent developments in the field of polymeric and soft biomaterials with a specific emphasis on advances in nano- to macroscale control, static to dynamic functionality, and biocomplex materials. PMID:26598696

  8. Progress in the Next Linear Collider Design

    NASA Astrophysics Data System (ADS)

    Raubenheimer, T. O.

    2001-07-01

    An electron/positron linear collider with a center-of-mass energy between 0.5 and 1 TeV would be an important complement to the physics program of the LHC. The Next Linear Collider (NLC) is being designed by a US collaboration (FNAL, LBNL, LLNL, and SLAC) which is working closely with the Japanese collaboration that is designing the Japanese Linear Collider (JLC). The NLC main linacs are based on normal conducting 11 GHz rf. This paper will discuss the technical difficulties encountered as well as the many changes that have been made to the NLC design over the last year. These changes include improvements to the X-band rf system as well as modifications to the injector and the beam delivery system. They are based on new conceptual solutions as well as results from the R&D programs which have exceeded initial specifications. The net effect has been to reduce the length of the collider from about 32 km to 25 km and to reduce the number of klystrons and modulators by a factor of two. Together these lead to significant cost savings.

  9. Analysis and design methodology for VLSI computing networks. Final report

    SciTech Connect

    Lev-Ari, H.

    1984-08-01

    Several methods for modeling and analysis of parallel algorithms and architectures have been proposed in recent years. These include recursion-type methods, like recursion equations, z-transform descriptions and do-loops in high-level programming languages, and precedence-graph-type methods like data-flow graphs (marked graphs) and related Petri-net-derived models. Most efforts have recently been directed towards developing methodologies for structured parallel algorithms and architectures and, in particular, for systolic-array-like systems. Some important properties of parallel algorithms have been identified in the process of this research effort. These include executability (the absence of deadlocks), pipelinability, regularity of structure, locality of interconnections, and dimensionality. The research has also demonstrated the feasibility of multirate systolic arrays with different rates of data propagation along different directions in the array. This final report presents a new methodology for modeling and analysis of parallel algorithms and architectures. This methodology provides a unified conceptual framework, which is called a modular computing network, that clearly displays the key properties of parallel systems.

  10. New Mexico Tech Satellite Design and Progress

    NASA Astrophysics Data System (ADS)

    Landavazo, M.; Cooper, B.; Jorgensen, A. M.; Bernson, C.; Chesebrough, S.; Dang, C.; Guillette, D.; Hall, T.; Huynh, A.; Jackson, R.; Klepper, J.; MacGillivray, J.; Park, D.; Ravindran, V.; Stanton, W.; Yelton, C.; Zagrai, A. N.

    2012-12-01

    New Mexico Tech Satellite (NMTSat) is a low-budget, 3U CubeSat for correlating state-of-health information from the spacecraft with space weather in low Earth orbit (LEO). NMTSat is funded by the NASA/EPSCoR program and is built almost entirely by NMT students at the New Mexico Institute of Mining and Technology. The scientific payload of NMTSat will consist of five instruments built in-house including: a magnetometer, a Langmuir plasma probe, a dosimeter, a state-of-the-art structural health monitor and an electrical health monitor. NMTSat utilizes passive attitude control by means of a magnet and hysteresis rods and carries out attitude determination from a combination of solar panel current and magnetometer readings. NMTSat will also be built around the Space Plug-and-Play Avionics I2C interface (SPA-1) to the greatest extent practical. In this presentation we will give an overview of the NMTSat design and design-tradeoffs and provide a status report on the work of completing NMTSat.

  11. Developing a Methodology for Designing Systems of Instruction.

    ERIC Educational Resources Information Center

    Carpenter, Polly

    This report presents a description of a process for instructional system design, identification of the steps in the design process, and determination of their sequence and interrelationships. As currently envisioned, several interrelated steps must be taken, five of which provide the inputs to the final design process. There are analysis of…

  12. Ethics of Engagement: User-Centered Design and Rhetorical Methodology.

    ERIC Educational Resources Information Center

    Salvo, Michael J.

    2001-01-01

    Explores the shift from observation of users to participation with users, describing and investigating three examples of user-centered design practice in order to consider the new ethical demands being made of technical communicators. Explores Pelle Ehn's participatory design method, Roger Whitehouse's design of tactile signage for blind users,…

  13. A methodology for designing aircraft to low sonic boom constraints

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.; Needleman, Kathy E.

    1991-01-01

    A method for designing conceptual supersonic cruise aircraft to meet low sonic boom requirements is outlined and described. The aircraft design is guided through a systematic evolution from initial three view drawing to a final numerical model description, while the designer using the method controls the integration of low sonic boom, high supersonic aerodynamic efficiency, adequate low speed handling, and reasonable structure and materials technologies. Some experience in preliminary aircraft design and in the use of various analytical and numerical codes is required for integrating the volume and lift requirements throughout the design process.

  14. A Multiscale Progressive Failure Modeling Methodology for Composites that Includes Fiber Strength Stochastics

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Bednarcyk, Brett A.; Arnold, Steven M.; Hutchins, John W.

    2014-01-01

    A multiscale modeling methodology was developed for continuous fiber composites that incorporates a statistical distribution of fiber strengths into coupled multiscale micromechanics/finite element (FE) analyses. A modified two-parameter Weibull cumulative distribution function, which accounts for the effect of fiber length on the probability of failure, was used to characterize the statistical distribution of fiber strengths. A parametric study using the NASA Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) was performed to assess the effect of variable fiber strengths on local composite failure within a repeating unit cell (RUC) and subsequent global failure. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a unidirectional SCS-6/TIMETAL 21S metal matrix composite tensile dogbone specimen at 650 °C. Multiscale progressive failure analyses were performed to quantify the effect of spatially varying fiber strengths on the RUC-averaged and global stress-strain responses and failure. The ultimate composite strengths and the distribution of failure locations (predominantly within the gage section) reasonably matched the experimentally observed failure behavior. The predicted composite failure behavior suggests that the use of macroscale models that exploit global geometric symmetries is inappropriate for cases where the actual distribution of local fiber strengths displays no such symmetries. This issue has not received much attention in the literature. Moreover, the model discretization at a specific length scale can have a profound effect on the computational costs associated with multiscale simulations.
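
    The length dependence built into the modified Weibull model can be sketched by inverting its cumulative distribution function, P_f(s) = 1 - exp[-(L/L0)(s/s0)^m], to sample fiber strengths. The modulus, scale, and gauge length below are invented placeholders, not the calibrated SCS-6 parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    m, s0, L0 = 10.0, 3500.0, 25.0  # Weibull modulus, scale (MPa), gauge length (mm)

    def sample_strengths(n, L):
        """Invert P_f(s) = 1 - exp(-(L/L0)*(s/s0)**m) for fibers of length L."""
        u = rng.uniform(size=n)
        return s0 * ((L0 / L) * (-np.log1p(-u))) ** (1.0 / m)

    for L in (5.0, 25.0, 100.0):  # longer fibers are statistically weaker
        s = sample_strengths(100_000, L)
        print(f"L={L:5.1f} mm  mean strength = {s.mean():7.1f} MPa")
    ```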

  15. Methodological developments in US state-level Genuine Progress Indicators: toward GPI 2.0

    USGS Publications Warehouse

    Bagstad, Kenneth J.; Berik, Günseli; Gaddis, Erica J. Brown

    2014-01-01

    The Genuine Progress Indicator (GPI) has emerged as an important monetary measure of economic well-being. Unlike mainstream economic indicators, primarily Gross Domestic Product (GDP), the GPI accounts for both the benefits and costs of economic production across diverse economic, social, and environmental domains in a more comprehensive manner. Recently, the GPI has gained traction in subnational policy in the United States, with GPI studies being conducted in a number of states and with their formal adoption by several state governments. As the GPI is applied in different locations, new methods are developed, different data sources are available, and new issues of policy relevance are addressed using its component indicators. This has led to a divergence in methods, reducing comparability between studies and yielding results that are of varying methodological sophistication. In this study, we review the “state of the art” in recent US state-level GPI studies, focusing on those from Hawaii, Maryland, Ohio, Utah, and Vermont. Through adoption of a consistent approach, these and future GPI studies could utilize a framework that supports more uniform, comparable, and accurate measurements of progress. We also identify longer-term issues, particularly related to treatment of nonrenewable resource depletion, government spending, income inequality, and ecosystem services. As these issues are successfully addressed and disseminated, a “GPI 2.0” will emerge that better measures economic well-being and has greater accuracy and policy relevance than past GPI measurements. As the GPI expands further into mainstream policy analysis, a more formal process by which methods could be updated, standardized, and applied is needed.

  16. Propulsion integration of hypersonic air-breathing vehicles utilizing a top-down design methodology

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, Brad Kenneth

    In recent years, a focus of aerospace engineering design has been the development of advanced design methodologies and frameworks to account for increasingly complex and integrated vehicles. Techniques such as parametric modeling, global vehicle analyses, and interdisciplinary data sharing have been employed in an attempt to improve the design process. The purpose of this study is to introduce a new approach to integrated vehicle design known as the top-down design methodology. In the top-down design methodology, the main idea is to relate design changes on the vehicle system and sub-system level to a set of over-arching performance and customer requirements. Rather than focusing on the performance of an individual system, the system is analyzed in terms of the net effect it has on the overall vehicle and other vehicle systems. This detailed level of analysis can only be accomplished through the use of high fidelity computational tools such as Computational Fluid Dynamics (CFD) or Finite Element Analysis (FEA). The utility of the top-down design methodology is investigated through its application to the conceptual and preliminary design of a long-range hypersonic air-breathing vehicle for a hypothetical next generation hypersonic vehicle (NHRV) program. System-level design is demonstrated through the development of the nozzle section of the propulsion system. From this demonstration of the methodology, conclusions are made about the benefits, drawbacks, and cost of using the methodology.

  17. Improved FTA Methodology and Application to Subsea Pipeline Reliability Design

    PubMed Central

    Lin, Jing; Yuan, Yongbo; Zhang, Mingyuan

    2014-01-01

    An innovative logic tree, the Failure Expansion Tree (FET), is proposed in this paper as an improvement on traditional Fault Tree Analysis (FTA). It embodies a different thinking approach to risk factor identification and reliability risk assessment. By providing a more comprehensive and objective methodology, it substantially reduces the rather subjective node discovery of FTA and greatly simplifies the mathematical calculations required for quantitative analysis. Applied to the Useful Life phase of a subsea pipeline engineering project, the approach yields a more structured analysis by constructing the tree according to the laws of physics and geometry. The resulting improvements are summarized in a comparison table. PMID:24667681
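
    For context, the quantitative calculation in conventional FTA that the FET aims to simplify reduces to propagating basic-event probabilities through AND/OR gates. A minimal sketch, assuming independent basic events; the tree shape, event names, and probabilities are hypothetical, not taken from the pipeline case study:

    ```python
    # Minimal sketch of the conventional quantitative FTA calculation that the
    # proposed FET simplifies: basic-event probabilities propagated through
    # AND/OR gates, assuming independent events.
    def and_gate(probs):
        p = 1.0
        for q in probs:
            p *= q                     # all inputs must occur
        return p

    def or_gate(probs):
        p_none = 1.0
        for q in probs:
            p_none *= 1.0 - q          # complement: no input occurs
        return 1.0 - p_none

    corrosion, weld_defect = 0.02, 0.01      # P(basic event) per year
    impact, overpressure = 0.005, 0.003

    wall_loss = or_gate([corrosion, weld_defect])            # intermediate event
    leak = or_gate([impact, and_gate([wall_loss, overpressure])])
    print(f"P(top event: leak) = {leak:.5f} per year")
    ```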

  18. Core melt progression and consequence analysis methodology development in support of the Savannah River Reactor PSA

    SciTech Connect

    O'Kula, K.R.; Sharp, D.A.; Amos, C.N.; Wagner, K.C.; Bradley, D.R.

    1992-01-01

    A three-level Probabilistic Safety Assessment (PSA) of production reactor operation has been underway since 1985 at the US Department of Energy's Savannah River Site (SRS). The goals of this analysis are to: Analyze existing margins of safety provided by the heavy-water reactor (HWR) design challenged by postulated severe accidents; Compare measures of risk to the general public and onsite workers to guideline values, as well as to those posed by commercial reactor operation; and Develop the methodology and database necessary to prioritize improvements to engineering safety systems and components, operator training, and engineering projects that contribute significantly to improving plant safety. PSA technical staff from the Westinghouse Savannah River Company (WSRC) and Science Applications International Corporation (SAIC) have performed the assessment despite two obstacles: A variable baseline plant configuration and power level; and a lack of technically applicable code methodology to model the SRS reactor conditions. This paper discusses the detailed effort necessary to modify the requisite codes before accident analysis insights for the risk assessment were obtained.

  19. Innovative design methodology for implementing heterogeneous multiprocessor architectures in VLSI

    SciTech Connect

    Tientien Li

    1983-01-01

    Considering the design cost of today's VLSI systems, advanced VLSI technology may not be cost-effective for implementing complex computer systems. In the paper, an innovative design approach which can drastically reduce the cost of implementing heterogeneous multiprocessor architectures in VLSI is presented. The author introduces high-level architectural design tools for assisting the design of multiprocessor systems with distributed memory modules and communication networks, and presents a logic/firmware synthesis scheme for automatically implementing multitasking structures and system service functions for multiprocessor architectures. Furthermore, the importance of the firmware synthesis aspect of VLSI system design is emphasized. Most logic of complex VLSI systems can be implemented very easily in firmware using the design approach introduced here.

  20. A methodology and a tool for the computer aided design with constraints of electrical devices

    SciTech Connect

    Wurtz, F.; Bigeon, J.; Poirson, C.

    1996-05-01

    A methodology for the computer aided constrained design of electrical devices is presented and validated through the design of a slotless permanent structure. It is based on the use of the analytical design equations of the device. Symbolic calculation is widely used to generate an analysis program and a sensitivity computation program. These programs are linked with an optimization algorithm that can take constraints into account. The methodology is tested with an experimental software package named PASCOSMA.
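
    The analysis-plus-constrained-optimization loop described above can be sketched briefly. Here scipy's SLSQP stands in for the optimization algorithm and a toy analytical model for the device equations; both are assumptions, since PASCOSMA generates its analysis and sensitivity programs symbolically from the actual design equations:

    ```python
    # Sketch of an analysis + constrained-optimization loop, with scipy's
    # SLSQP standing in for the optimizer and a toy analytical model for the
    # device equations (both assumptions).
    import math
    from scipy.optimize import minimize

    def mass(x):                       # objective: active mass, toy model [kg]
        length, radius = x
        return 7800.0 * math.pi * (radius + 0.02) ** 2 * length

    def torque(x):                     # analytical design equation, toy model [N*m]
        length, radius = x
        return 4.2e5 * radius**2 * length

    result = minimize(
        mass, x0=[0.2, 0.05], method="SLSQP",
        bounds=[(0.05, 0.5), (0.01, 0.15)],                     # length, radius [m]
        constraints=[{"type": "ineq", "fun": lambda x: torque(x) - 50.0}],
    )
    print(result.x, f"mass = {mass(result.x):.2f} kg")
    ```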

  1. Optimal Color Design of Psychological Counseling Room by Design of Experiments and Response Surface Methodology

    PubMed Central

    Chen, Hua; Ye, Chenyu

    2014-01-01

    Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients’ perception into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients’ impression. A total of 75 patients participated, including 42 for Experiment 1 and 33 for Experiment 2. In Experiment 1, 27 representative color samples were designed, and the color sample (L = 75, a = 0, b = -60) was the most preferred one. In Experiment 2, this color sample was set as the ‘central point’, and three color attributes were optimized to maximize the patients’ satisfaction. The experimental results show that the proposed method can find the optimal solution for the color design of a counseling room. PMID:24594683
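
    A sketch of the response-surface step described above: fit a second-order model of the satisfaction score over the three CIELAB attributes around the Experiment 1 optimum, then search the fitted surface for the maximizer. The ratings below are synthetic placeholders for the questionnaire data:

    ```python
    # Response-surface sketch: quadratic fit of satisfaction over (L, a, b),
    # then a grid search of the fitted surface. Ratings are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform([65, -10, -70], [85, 10, -50], size=(33, 3))   # L, a, b samples
    y = (8 - 0.01 * (X[:, 0] - 75) ** 2 - 0.02 * X[:, 1] ** 2
           - 0.01 * (X[:, 2] + 60) ** 2 + rng.normal(0, 0.1, 33))  # synthetic scores

    def quad_features(X):
        L, a, b = X.T
        return np.column_stack([np.ones(len(X)), L, a, b,
                                L * a, L * b, a * b, L**2, a**2, b**2])

    beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)    # least squares

    grid = np.stack(np.meshgrid(np.linspace(65, 85, 41), np.linspace(-10, 10, 41),
                                np.linspace(-70, -50, 41), indexing="ij"),
                    axis=-1).reshape(-1, 3)
    best = grid[np.argmax(quad_features(grid) @ beta)]
    print(f"Predicted optimum: L = {best[0]:.1f}, a = {best[1]:.1f}, b = {best[2]:.1f}")
    ```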

  2. Probabilistic Design Methodology and its Application to the Design of an Umbilical Retract Mechanism

    NASA Astrophysics Data System (ADS)

    Onyebueke, Landon; Ameye, Olusesan

    2002-10-01

    A lot has been learned from past experience with structural and machine element failures. The understanding of failure modes and the application of an appropriate design analysis method can lead to improved structural and machine element safety as well as serviceability. To apply Probabilistic Design Methodology (PDM), all uncertainties are modeled as random variables with selected distribution types, means, and standard deviations. It is quite difficult to achieve a robust design without considering the randomness of the design parameters, which is the case in the Deterministic Design Approach. The US Navy has a fleet of submarine-launched ballistic missiles. An umbilical plug joins the missile to the submarine in order to provide electrical and cooling water connections. As the missile leaves the submarine, an umbilical retract mechanism retracts the umbilical plug clear of the advancing missile after disengagement during launch and restrains the plug in the retracted position. The design of the current retract mechanism in use was based on the deterministic approach, which puts emphasis on factor of safety. A new umbilical retract mechanism that is simpler in design, lighter in weight, more reliable, easier to adjust, and more cost effective has become desirable since this will increase the performance and efficiency of the system. This paper reports on a recent project performed at Tennessee State University for the US Navy that involved the application of PDM to the design of an umbilical retract mechanism. This paper demonstrates how the use of PDM led to the minimization of weight and cost, and the maximization of reliability and performance.
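
    The core of the PDM argument above can be illustrated with a small Monte Carlo sketch: modeling demand and capacity as random variables yields a reliability estimate rather than a single factor of safety. All distribution parameters below are hypothetical, not the retract-mechanism values:

    ```python
    # Monte Carlo sketch of the PDM idea: treat demand and capacity as random
    # variables and estimate reliability directly, instead of a deterministic
    # factor of safety. Distribution parameters are hypothetical.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000
    capacity = rng.normal(5_000, 400, n)   # retract force the mechanism provides [N]
    demand = rng.normal(3_800, 550, n)     # force required during launch [N]

    reliability = np.count_nonzero(capacity > demand) / n
    print(f"Estimated reliability: {reliability:.5f}")
    print(f"Deterministic factor of safety on means: {5_000 / 3_800:.2f}")
    ```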

  3. A prototype computerized synthesis methodology for generic space access vehicle (SAV) conceptual design

    NASA Astrophysics Data System (ADS)

    Huang, Xiao

    2006-04-01

    Today's and especially tomorrow's competitive launch vehicle design environment requires the development of a dedicated generic Space Access Vehicle (SAV) design methodology. A total of 115 industrial, research, and academic aircraft, helicopter, missile, and launch vehicle design synthesis methodologies have been evaluated. As the survey indicates, each synthesis methodology tends to focus on a specific flight vehicle configuration, thus precluding the key capability to systematically compare flight vehicle design alternatives. The aim of the research investigation is to provide decision-making bodies and the practicing engineer a design process and tool box for robust modeling and simulation of flight vehicles where the ultimate performance characteristics may hinge on numerical subtleties. This will enable the designer of a SAV for the first time to consistently compare different classes of SAV configurations on an impartial basis. This dissertation presents the development steps required towards a generic (configuration independent) hands-on flight vehicle conceptual design synthesis methodology. This process is developed such that it can be applied to any flight vehicle class if desired. In the present context, the methodology has been put into operation for the conceptual design of a tourist Space Access Vehicle. The case study illustrates elements of the design methodology & algorithm for the class of Horizontal Takeoff and Horizontal Landing (HTHL) SAVs. The HTHL SAV design application clearly outlines how the conceptual design process can be centrally organized, executed and documented with focus on design transparency, physical understanding and the capability to reproduce results. This approach offers the project lead and creative design team a management process and tool which iteratively refines the individual design logic chosen, leading to mature design methods and algorithms. As illustrated, the HTHL SAV hands-on design methodology offers growth

  4. Participatory Pattern Workshops: A Methodology for Open Learning Design Inquiry

    ERIC Educational Resources Information Center

    Mor, Yishay; Warburton, Steven; Winters, Niall

    2012-01-01

    In order to promote pedagogically informed use of technology, educators need to develop an active, inquisitive, design-oriented mindset. Design Patterns have been demonstrated as powerful mediators of theory-praxis conversations yet widespread adoption by the practitioner community remains a challenge. Over several years, the authors and their…

  5. Participant Observation, Anthropology Methodology and Design Anthropology Research Inquiry

    ERIC Educational Resources Information Center

    Gunn, Wendy; Løgstrup, Louise B.

    2014-01-01

    Within the design studio, and across multiple field sites, the authors compare involvement of research tools and materials during collaborative processes of designing. Their aim is to trace temporal dimensions (shifts/ movements) of where and when learning takes place along different sites of practice. They do so by combining participant…

  6. Approach, Design and Procedure: Their Role in Methodology.

    ERIC Educational Resources Information Center

    Richards, Jack C.; Rodgers, Ted

    Three interrelated pedagogical elements--approach, design, and procedure--are basic in a discussion of language teaching. Approach defines those foundational assumptions, beliefs, and theories about the nature of language and language learning. Design specifies the relationships of theories to both the form and use of instructional materials.…

  7. LWR design decision methodology: Phase II. Final report

    SciTech Connect

    1981-01-01

    Techniques were identified to augment existing design process at the component and system level in order to optimize cost and safety between alternative system designs. The method was demonstrated using the Surry Low Pressure Injection System (LPIS). Three possible backfit options were analyzed for the Surry LPIS, assessing the safety level of each option and estimating the acquisition and installation costs for each. (DLC)

  8. Information System Design Methodology Based on PERT/CPM Networking and Optimization Techniques.

    ERIC Educational Resources Information Center

    Bose, Anindya

    The dissertation attempts to demonstrate that the program evaluation and review technique (PERT)/Critical Path Method (CPM) or some modified version thereof can be developed into an information system design methodology. The methodology utilizes PERT/CPM which isolates the basic functional units of a system and sets them in a dynamic time/cost…
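
    The CPM computation at the heart of such a methodology is compact: a forward pass over the precedence network gives each unit's earliest finish, and the longest chain is the critical path. A minimal sketch, with hypothetical activities standing in for the functional units of an information system:

    ```python
    # Forward pass of the Critical Path Method: each task's earliest finish is
    # the longest predecessor chain plus its own duration. Activities are
    # hypothetical stand-ins for the functional units of an information system.
    durations = {"analysis": 5, "db_design": 7, "ui_design": 4,
                 "coding": 10, "testing": 6}
    preds = {"analysis": [], "db_design": ["analysis"], "ui_design": ["analysis"],
             "coding": ["db_design", "ui_design"], "testing": ["coding"]}

    earliest_finish = {}
    def finish(task):                        # memoized recursive forward pass
        if task not in earliest_finish:
            start = max((finish(p) for p in preds[task]), default=0)
            earliest_finish[task] = start + durations[task]
        return earliest_finish[task]

    duration = max(finish(t) for t in durations)
    print(f"Project duration: {duration}")   # 28: analysis -> db_design -> coding -> testing
    print(earliest_finish)
    ```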

  9. Structural Design Methodology Based on Concepts of Uncertainty

    NASA Technical Reports Server (NTRS)

    Lin, K. Y.; Du, Jiaji; Rusk, David

    2000-01-01

    In this report, an approach to damage-tolerant aircraft structural design is proposed based on the concept of an equivalent "Level of Safety" that incorporates past service experience in the design of new structures. The discrete "Level of Safety" for a single inspection event is defined as the complement of the probability that a single flaw size larger than the critical flaw size for residual strength of the structure exists, and that the flaw will not be detected. The cumulative "Level of Safety" for the entire structure is the product of the discrete "Level of Safety" values for each flaw of each damage type present at each location in the structure. Based on the definition of "Level of Safety", a design procedure was identified and demonstrated on a composite sandwich panel for various damage types, with results showing the sensitivity of the structural sizing parameters to the relative safety of the design. The "Level of Safety" approach has broad potential application to damage-tolerant aircraft structural design with uncertainty.
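
    Restating the two definitions above in symbols (the notation here is chosen for clarity; the report's own notation may differ):

    ```latex
    % Discrete Level of Safety for a single inspection event: the complement
    % of the joint probability that a flaw exceeding the critical size a_c
    % exists and goes undetected.
    \[
    \mathrm{LOS} \;=\; 1 - P(a > a_c)\,\bigl(1 - P_{\mathrm{det}}\bigr)
    \]
    % Cumulative Level of Safety: product over each flaw j of each damage
    % type k at each location l in the structure.
    \[
    \mathrm{LOS}_{\mathrm{cum}} \;=\; \prod_{j}\prod_{k}\prod_{l} \mathrm{LOS}_{jkl}
    \]
    ```

    Here a is the flaw size, a_c the critical flaw size for residual strength, and P_det the probability that the flaw is detected.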

  10. A kind of optimizing design method of progressive addition lenses

    NASA Astrophysics Data System (ADS)

    Tang, Yunhai; Qian, Lin; Wu, Quanying; Yu, Jingchi; Chen, Hao; Wang, Yuanyuan

    2010-10-01

    Progressive addition lenses are a kind of ophthalmic lens with a freeform surface. The surface curvature of a progressive addition lens varies gradually from a minimum value in the upper, distance-viewing area to a maximum value in the lower, near-viewing area. An optimizing design method for progressive addition lenses is proposed that improves optical quality by modifying the vector heights of the initially designed lens surface. The relationship among mean power, cylinder power, and the vector heights of the surface is deduced, and an optimizing factor is obtained. The vector heights of the initially designed surface are used to calculate the plots of mean power and cylinder power based on the principles of differential geometry. The mean power plot is changed by adjusting the optimizing factor. Alternatively, a new mean power plot can be derived by shifting the mean power of one selected region to another and then interpolating and smoothing. A partial differential equation of elliptic type is formulated from the changed mean power and solved by an iterative method, yielding the optimized vector heights of the surface. Compared with the original lens, the region near the nasal side of the distance-vision portion in which the astigmatism is less than 0.5 D has become broader, and the clear regions of the distance-vision and near-vision portions are wider.
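
    The differential-geometry step mentioned above, computing mean power and cylinder plots from the surface height data, can be sketched numerically. The paraxial forms H = (n-1)(z_xx + z_yy)/2 and C = (n-1)*sqrt((z_xx - z_yy)^2 + 4*z_xy^2) are used here in place of the exact curvature formulas, and the spherical sag grid is a placeholder for a real PAL surface:

    ```python
    # Numerical sketch: mean power and cylinder maps from a surface height
    # (sag) grid, using paraxial approximations of the surface curvatures.
    import numpy as np

    n_index, h = 1.53, 0.5e-3                      # refractive index, grid step [m]
    x = np.arange(-0.03, 0.03, h)                  # 60 mm aperture
    X, Y = np.meshgrid(x, x)
    Z = (X**2 + Y**2) / (2 * 0.12)                 # placeholder: 120 mm radius sphere

    z_y, z_x = np.gradient(Z, h, h)                # first derivatives
    z_yy, z_yx = np.gradient(z_y, h, h)            # second derivatives
    z_xy, z_xx = np.gradient(z_x, h, h)

    mean_power = (n_index - 1) * (z_xx + z_yy) / 2                        # diopters
    cylinder = (n_index - 1) * np.sqrt((z_xx - z_yy) ** 2 + 4 * z_xy**2)
    print(f"mean power = {mean_power.mean():.2f} D, median cylinder = "
          f"{np.median(cylinder):.3f} D")
    ```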

  11. Design, construction, and characterization methodologies for synthetic microbial consortia.

    PubMed

    Bernstein, Hans C; Carlson, Ross P

    2014-01-01

    Engineered microbial consortia are of growing interest to a range of scientists including bioprocess engineers, systems biologists, and microbiologists because of their ability to simultaneously optimize multiple tasks, to test fundamental systems science, and to understand the microbial ecology of environments like chronic wounds. Metabolic engineering, synthetic biology, and microbial ecology provide a sound scientific basis for designing, building, and analyzing consortium-based microbial platforms. This chapter outlines strategies and protocols useful for (1) in silico network design, (2) experimental strain construction, (3) consortia culturing including biofilm growth methods, and (4) physiological characterization of consortia. The laboratory and computational methods given here may be adapted for synthesis and characterization of other engineered consortia designs.

  12. New methodology for shaft design based on life expectancy

    NASA Technical Reports Server (NTRS)

    Loewenthal, S. H.

    1986-01-01

    The design of power transmission shafting for reliability has not historically received a great deal of attention. However, weight-sensitive aerospace and vehicle applications, and those where the penalties of shaft failure are great, require greater confidence in shaft design than earlier methods provided. This report summarizes a fatigue strength-based design method for sizing shafts under variable amplitude loading histories for limited or nonlimited service life. Moreover, application factors such as press-fitted collars, shaft size, residual stresses from shot peening or plating, and corrosive environments can be readily accommodated within the framework of the analysis. Examples are given which illustrate the use of the method, pointing out the large life penalties due to occasional cyclic overloads.
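
    A minimal sketch of the kind of variable-amplitude life bookkeeping such a method rests on, using Miner's linear damage rule over a Basquin-type S-N curve; the S-N constants and the load spectrum are hypothetical, not values from the report:

    ```python
    # Miner's linear damage rule over a Basquin-type S-N curve (hypothetical
    # constants): damage accumulates as the sum of applied/allowable cycles.
    S_ULT, B = 900.0, -0.085                 # S = S_ULT * N**B  [MPa]

    def cycles_to_failure(stress_mpa):
        return (stress_mpa / S_ULT) ** (1.0 / B)

    spectrum = [(300.0, 1e5), (350.0, 2e4), (450.0, 1e3)]   # (amplitude, cycles/life)
    damage = sum(n / cycles_to_failure(s) for s, n in spectrum)
    print(f"Miner damage per design life: {damage:.2f} (failure predicted at 1.0)")
    ```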

  13. Structural design methodologies for ceramic-based material systems

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Chulya, Abhisak; Gyekenyesi, John P.

    1991-01-01

    One of the primary pacing items for realizing the full potential of ceramic-based structural components is the development of new design methods and protocols. The focus here is on low temperature, fast-fracture analysis of monolithic, whisker-toughened, laminated, and woven ceramic composites. A number of design models and criteria are highlighted. Public domain computer algorithms, which aid engineers in predicting the fast-fracture reliability of structural components, are mentioned. Emphasis is not placed on evaluating the models, but instead is focused on the issues relevant to the current state of the art.
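
    The fast-fracture reliability predictions referred to above are typically built on the two-parameter Weibull strength model. A minimal sketch with hypothetical parameters; the public-domain algorithms alluded to implement far more complete treatments, including multiaxial stress states and size effects:

    ```python
    # Two-parameter Weibull sketch of fast-fracture failure probability.
    import math

    def failure_probability(stress, sigma_0, m, volume_ratio=1.0):
        # P_f = 1 - exp(-(V/V0) * (stress/sigma_0)^m): lower Weibull modulus m
        # (wider strength scatter) and larger stressed volume raise P_f.
        return 1.0 - math.exp(-volume_ratio * (stress / sigma_0) ** m)

    for s in (250.0, 300.0, 350.0):          # applied stress [MPa]
        print(s, f"P_f = {failure_probability(s, sigma_0=400.0, m=10):.4f}")
    ```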

  15. 77 FR 66471 - Methodology for Designation of Frontier and Remote Areas

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-05

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF HEALTH AND HUMAN SERVICES Health Resources and Services Administration Methodology for Designation of Frontier and Remote Areas AGENCY: Health Resources and Services Administration, HHS. ACTION: Request for...

  16. New Methods in Design Education: The Systemic Methodology and the Use of Sketch in the Conceptual Design Stage

    ERIC Educational Resources Information Center

    Westermeyer, Juan Carlos Briede; Ortuno, Bernabe Hernandis

    2011-01-01

    This study describes the application of a new-product concurrent design methodology in the context of industrial design education. The sketch has often been utilized as a tool of creative expression, especially in the conceptual design stage, in an intuitive way and somewhat removed from the context of the real needs that the…

  17. Designing institutions and incentives in hospitals: an organization economics methodology.

    PubMed

    Eid, Florence

    2004-01-01

    Recent seminal developments in organization economics, namely the decision rights approach, offer an opportunity to shed new light on an old question, the design of effective institutions. Drawing on conclusions about how and why firm organizational boundaries change, the decision rights approach is used in this article as an analytical lens to develop a new method for assessing institutional and incentive design in restructured hospitals. The article explains the decision rights approach and shows how the Decision Rights Framework developed from it is a way of mapping incentive structures to allow a comparative assessment of institutional design, an understudied area, as most work on hospitals has focused on assessing equity versus efficiency tradeoffs. The new method is illustrated drawing on one example from a case study of an innovative self-corporatized hospital in Lebanon that was at the vanguard of hospital restructuring legislation, adopted for system-wide reforms. A country with a strong private sector tradition, Lebanon was fertile territory for analyzing how high-powered incentive schemes emerge from a public sector setting, in a manner similar to the evolution of a firm in reaction to market forces. Among the findings revealed by the approach is that key to "good" design is the identification of requisite incentives and the matching up of incentives with goals through decision rights allocations. The appropriate organizational form is then a logical result. PMID:15839525

  19. A Methodology for the Design of Learning Environments

    ERIC Educational Resources Information Center

    Page, Tom; Thorsteinsson, Gisli

    2009-01-01

    This article presents and discusses some theoretical starting points and design considerations for addressing emotional and aesthetic aspects of virtual learning environments (VLEs) for support of ubiquitous teaching, studying and learning. In this article, we note that a VLE should be viewed as an interactive and sensation-arousing…

  20. Design Based Research Methodology for Teaching with Technology in English

    ERIC Educational Resources Information Center

    Jetnikoff, Anita

    2015-01-01

    Design based research (DBR) is an appropriate method for small scale educational research projects involving collaboration between teachers, students and researchers. It is particularly useful in collaborative projects where an intervention is implemented and evaluated in a grounded context. The intervention can be technological, or a new program…

  1. Kids in the city study: research design and methodology

    PubMed Central

    2011-01-01

    Background Physical activity is essential for optimal physical and psychological health but substantial declines in children's activity levels have occurred in New Zealand and internationally. Children's independent mobility (i.e., outdoor play and traveling to destinations unsupervised), an integral component of physical activity in childhood, has also declined radically in recent decades. Safety-conscious parenting practices, car reliance and auto-centric urban design have converged to produce children living increasingly sedentary lives. This research investigates how urban neighborhood environments can enable or restrict children's independent mobility, thereby influencing physical activity accumulation and participation in daily life. Methods/Design The study is located in six Auckland, New Zealand neighborhoods, diverse in terms of urban design attributes, particularly residential density. Participants comprise 160 children aged 9-11 years and their parents/caregivers. Objective measures (global positioning systems, accelerometers, geographical information systems, observational audits) assessed children's independent mobility and physical activity, neighborhood infrastructure, and streetscape attributes. Parent and child neighborhood perceptions and experiences were assessed using qualitative research methods. Discussion This study is one of the first internationally to examine the association of specific urban design attributes with child independent mobility. Using appropriate, best-practice objective measures, this study provides robust epidemiological information regarding the relationships between the built environment and health outcomes for this population. PMID:21781341

  2. Serration Design Methodology for Wind Turbine Noise Reduction

    NASA Astrophysics Data System (ADS)

    Mathew, J.; Singh, A.; Madsen, J.; Arce León, C.

    2016-09-01

    Trailing edge serrations are today an established method to reduce the aeroacoustic noise from wind turbine blades. In this paper, a brief introduction to the aerodynamic and acoustic design procedure used at LM Wind Power is given. Early field tests on serrations retrofitted to turbine blades gave a preliminary indication of their noise reduction potential. However, a multitude of challenges stand between any proof of concept and a viable commercial product. LM undertook a methodical test and validation procedure to understand the impact of design parameters on serration performance, and to quantify the uncertainties associated with the proposed designs. Aerodynamic and acoustic validation tests were carried out in a number of wind tunnel facilities. Models were written to predict the aerodynamic, acoustic and structural performance of the serrations. LM serration designs have evolved over time to address constraints imposed by aero performance, structural reliability, manufacturing and installation. The latest LM serration offering was tested in the field on three different wind turbines. A consistent noise reduction in excess of 1.5 dB was achieved in the field for all three turbines.

  3. Situated Research Design and Methodological Choices in Formative Program Evaluation

    ERIC Educational Resources Information Center

    Supovitz, Jonathan

    2013-01-01

    Design-based implementation research offers the opportunity to rethink the relationships between intervention, research, and situation to better attune research and evaluation to the program development process. Using a heuristic called the intervention development curve, I describe the rough trajectory that programs typically follow as they…

  4. A wing design methodology for low-boom low-drag supersonic business jet

    NASA Astrophysics Data System (ADS)

    Le, Daniel B.

    2009-12-01

    The arguably most critical hindrance to the successful development of a commercial supersonic aircraft is the impact of the sonic boom signature. The sonic boom signature of a supersonic aircraft is predicted using sonic boom theory, which formulates a relationship between the complex three-dimensional geometry of the aircraft and the pressure distribution, and decomposes the geometry in terms of simple geometrical components. The supersonic aircraft design process is typically based on boom minimization theory. This theory provides a theoretical equivalent area distribution which should be matched by the conceptual design in order to achieve the pre-determined sonic boom signature. The difference between the target equivalent area distribution and the actual equivalent area distribution is referred to here as the gap distribution. The primary intent of this dissertation is to provide the designer with a systematic and structured approach to designing the aircraft wings with limited changes to the baseline concept while achieving critical design goals. The design process can easily become overwhelming, and the effectiveness of individual design decisions may be difficult to evaluate. The wing design is therefore decoupled into two separate processes, one focused on the planform design and the other on the camber design. Moreover, this design methodology supplements the designer by allowing trade studies to be conducted between important design parameters and objectives. The wing planform design methodology incorporates a continuous gradient-based optimization scheme to supplement the design process. This is not meant to substitute for the vast amount of knowledge and design decisions that are needed for a successful design. Instead, the numerical optimization helps the designer to refine creative concepts. Lastly, this dissertation integrates a risk mitigation scheme throughout the wing design process. The design methodology implements minimal design changes to the wing geometry while achieving the target design goal
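
    In symbols, the gap distribution described above is simply (notation chosen here):

    ```latex
    % Gap distribution: the shortfall between the boom-minimization target
    % and the equivalent area actually realized by the configuration,
    % expressed along the longitudinal station x.
    \[
    g(x) \;=\; A_e^{\mathrm{target}}(x) \;-\; A_e^{\mathrm{actual}}(x)
    \]
    ```

    The planform and camber designs are iterated until g(x) is driven toward zero along the vehicle length.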

  5. A general methodology and applications for conduction-like flow-channel design.

    SciTech Connect

    Cummings, Eric B.; Fiechtner, Gregory J.

    2004-02-01

    A novel design methodology is developed for creating conduction devices in which fields are piecewise uniform. This methodology allows the normally analytically intractable problem of Lagrangian transport to be solved using algebraic and trigonometric equations. Low-dispersion turns, manifolds, and expansions are developed. In this methodology, regions of piece-wise constant specific permeability (permeability per unit width) border each other with straight, generally tilted interfaces. The fields within each region are made uniform by satisfying a simple compatibility relation between the tilt angle and ratio of specific permeability of adjacent regions. This methodology has particular promise in the rational design of quasi-planar devices, in which the specific permeability is proportional to the depth of the channel. For such devices, the methodology can be implemented by connecting channel facets having two or more depths, fabricated, e.g., using a simple two-etch process.
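
    The abstract does not reproduce the compatibility relation itself. By analogy with the classical refraction of field lines at an interface between media of different permeability, a tangent-law form is plausible (an assumption on our part, not the paper's stated result):

    ```latex
    % Plausible tangent-law compatibility relation at a tilted interface
    % between regions of specific permeability k_1 and k_2 (assumed form,
    % by analogy with field-line refraction; not quoted from the paper).
    \[
    \frac{\tan\theta_1}{\tan\theta_2} \;=\; \frac{k_1}{k_2}
    \]
    ```

    Here theta_i is the angle between the field in region i and the interface normal, and k_i is the specific permeability of that region.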

  6. Designing a Methodology for Future Air Travel Scenarios

    NASA Technical Reports Server (NTRS)

    Wuebbles, Donald J.; Baughcum, Steven L.; Gerstle, John H.; Edmonds, Jae; Kinnison, Douglas E.; Krull, Nick; Metwally, Munir; Mortlock, Alan; Prather, Michael J.

    1992-01-01

    The methodology, procedures, and recommendations for the development of the future HSCT and subsonic fleet scenarios used for this evaluation are discussed.

  7. A Cybernetic Design Methodology for 'Intelligent' Online Learning Support

    NASA Astrophysics Data System (ADS)

    Quinton, Stephen R.

    The World Wide Web (WWW) provides learners and knowledge workers convenient access to vast stores of information, so much that present methods for refinement of a query or search result are inadequate - there is far too much potentially useful material. The problem often encountered is that users usually do not recognise what may be useful until they have progressed some way through the discovery, learning, and knowledge acquisition process. Additional support is needed to structure and identify potentially relevant information, and to provide constructive feedback. In short, support for learning is needed. The learning envisioned here is not simply the capacity to recall facts or to recognise objects. The focus is on learning that results in the construction of knowledge. Although most online learning platforms are efficient at delivering information, most do not provide tools that support learning as envisaged in this chapter. It is conceivable that Web-based learning environments can incorporate software systems that assist learners to form new associations between concepts and synthesise information to create new knowledge. This chapter details the rationale and theory behind a research study that aims to evolve Web-based learning environments into 'intelligent thinking' systems that respond to natural language human input. Rather than functioning simply as a means of delivering information, it is argued that online learning solutions will one day interact directly with students to support their conceptual thinking and cognitive development.

  8. One Controller at a Time (1-CAT): A mimo design methodology

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Lucas, J. C.

    1987-01-01

    The One Controller at a Time (1-CAT) methodology for designing digital controllers for Large Space Structures (LSS's) is introduced and illustrated. The flexible mode problem is first discussed. Next, desirable features of a LSS control system design methodology are delineated. The 1-CAT approach is presented, along with an analytical technique for carrying out the 1-CAT process. Next, 1-CAT is used to design digital controllers for the proposed Space Based Laser (SBL). Finally, the SBL design is evaluated for dynamical performance, noise rejection, and robustness.

  9. Application of an integrated flight/propulsion control design methodology to a STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Mattern, Duane L.

    1991-01-01

    The application of an emerging Integrated Flight/Propulsion Control design methodology to a STOVL aircraft in transition flight is reported. The methodology steps consist of: (1) design of a centralized feedback controller to provide command tracking and stability and performance robustness considering the fully integrated airframe/propulsion model as one high-order system; (2) partition of the centralized controller into a decentralized, hierarchical form compatible with implementation requirements; and (3) design of command shaping prefilters from pilot control effectors to commanded variables to provide the overall desired response to pilot inputs. Intermediate design results using this methodology are presented, the complete point control design with the propulsion system operating schedule and limit protection logic included is evaluated for sample pilot control inputs, and the response is compared with that of an 'ideal response model' derived from Level I handling qualities requirements.

  10. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  11. Software Design Methodology Migration for a Distributed Ground System

    NASA Technical Reports Server (NTRS)

    Ritter, George; McNair, Ann R. (Technical Monitor)

    2002-01-01

    The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has been developed and has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes. The new software processes still emphasize requirements capture, software configuration management, design documentation, and making sure the products that have been developed are accountable to initial requirements. This paper will give an overview of how the software processes have evolved, highlighting the positives as well as the negatives. In addition, we will mention the COTS tools that have been integrated into the processes and how they have provided value to the project.

  12. Partnerships for the Design, Conduct, and Analysis of Effectiveness, and Implementation Research: Experiences of the Prevention Science and Methodology Group

    PubMed Central

    Brown, C. Hendricks; Kellam, Sheppard G.; Kaupert, Sheila; Muthén, Bengt O.; Wang, Wei; Muthén, Linda K.; Chamberlain, Patricia; PoVey, Craig L.; Cady, Rick; Valente, Thomas W.; Ogihara, Mitsunori; Prado, Guillermo J.; Pantin, Hilda M.; Gallo, Carlos G.; Szapocznik, José; Czaja, Sara J.; McManus, John W.

    2012-01-01

    The progress that prevention research has made has come through strategic partnerships with communities and institutions that host this research, as well as professional and practice networks that facilitate the diffusion of knowledge about prevention. We discuss partnership issues related to the design, analysis, and implementation of prevention research and especially how rigorous designs, including random assignment, get resolved through a partnership between community stakeholders, institutions, and researchers. These partnerships shape not only study design but also the data that can be collected and how results and new methods are disseminated. We also examine a second type of partnership to improve the implementation of effective prevention programs into practice. We draw on social networks to study partnership formation and function. The experience of the Prevention Science and Methodology Group, which itself is a networked partnership between scientists and methodologists, is highlighted. PMID:22160786

  14. Weibull-Based Design Methodology for Rotating Aircraft Engine Structures

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin; Hendricks, Robert C.; Soditus, Sherry

    2002-01-01

    The NASA Energy Efficient Engine (E³ Engine) is used as the basis of a Weibull-based life and reliability analysis. Each component's life, and thus the engine's life, is defined by high-cycle fatigue (HCF) or low-cycle fatigue (LCF). Knowing the cumulative life distribution of each of the components making up the engine, as represented by a Weibull slope, is a prerequisite to predicting the life and reliability of the entire engine. As the engine Weibull slope increases, the predicted lives decrease. The predicted engine lives L₅ (95% probability of survival) of approximately 17,000 and 32,000 hr do correlate with current engine maintenance practices without and with refurbishment, respectively. The individual high pressure turbine (HPT) blade lives necessary to obtain a blade system life L₀.₁ (99.9% probability of survival) of 9000 hr for Weibull slopes of 3, 6, and 9 are 47,391, 20,652, and 15,658 hr, respectively. For a design life of the HPT disks having probable points of failure equal to or greater than 36,000 hr at a probability of survival of 99.9%, the predicted disk system life L₀.₁ can vary from 9,408 to 24,911 hr.
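
    Rolling component Weibulls up into an engine-level life can be sketched directly: for a series system, survival probabilities multiply, so the system life at a given reliability follows by inverting the combined survival function. The slopes and characteristic lives below are hypothetical, not the E³ values:

    ```python
    # Series-system Weibull roll-up: the system survives only if every
    # component survives, so survival probabilities multiply; the system
    # life at a target reliability is found by inverting the survival curve.
    import math

    components = [(3.0, 60_000.0), (6.0, 45_000.0), (9.0, 40_000.0)]  # (slope, eta [hr])

    def system_survival(t):
        return math.exp(-sum((t / eta) ** beta for beta, eta in components))

    def system_life(reliability, lo=1.0, hi=200_000.0):
        for _ in range(100):                 # bisection: survival is monotone in t
            mid = 0.5 * (lo + hi)
            if system_survival(mid) > reliability:
                lo = mid
            else:
                hi = mid
        return mid

    print(f"System L0.1 (99.9% survival): {system_life(0.999):,.0f} hr")
    ```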

  15. Design-Based Research: Is This a Suitable Methodology for Short-Term Projects?

    ERIC Educational Resources Information Center

    Pool, Jessica; Laubscher, Dorothy

    2016-01-01

    This article reports on a design-based methodology of a thesis in which a fully face-to-face contact module was converted into a blended learning course. The purpose of the article is to report on how design-based phases, in the form of micro-, meso- and macro-cycles were applied to improve practice and to generate design principles. Design-based…

  16. Advanced piloted aircraft flight control system design methodology. Volume 2: The FCX flight control design expert system

    NASA Technical Reports Server (NTRS)

    Myers, Thomas T.; Mcruer, Duane T.

    1988-01-01

    The development of a comprehensive and eclectic methodology for conceptual and preliminary design of flight control systems is presented and illustrated. The methodology is focused on the design stages starting with the layout of system requirements and ending when some viable competing system architectures (feedback control structures) are defined. The approach is centered on the human pilot and the aircraft as both the sources of, and the keys to the solution of, many flight control problems. The methodology relies heavily on computational procedures which are highly interactive with the design engineer. To maximize effectiveness, these techniques, as selected and modified to be used together in the methodology, form a cadre of computational tools specifically tailored for integrated flight control system preliminary design purposes. The FCX expert system as presently developed is only a limited prototype capable of supporting basic lateral-directional FCS design activities related to the design example used. FCX presently supports design of only one FCS architecture (yaw damper plus roll damper) and the rules are largely focused on Class IV (highly maneuverable) aircraft. Despite this limited scope, the major elements which appear necessary for application of knowledge-based software concepts to flight control design were assembled and thus FCX represents a prototype which can be tested, critiqued and evolved in an ongoing process of development.

  17. Analysis and Design of Fuselage Structures Including Residual Strength Prediction Methodology

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.

    1998-01-01

    The goal of this research project is to develop and assess methodologies for the design and analysis of fuselage structures accounting for residual strength. Two primary objectives are included in this research activity: development of a structural analysis methodology for predicting the residual strength of fuselage shell-type structures; and the development of accurate, efficient analysis, design, and optimization tools for fuselage shell structures. Assessment of these tools for robustness, efficiency, and usability in a fuselage shell design environment will be integrated with these two primary research objectives.

  18. A User-Centered Methodological Framework for the Design of Hypermedia-based CALL Systems.

    ERIC Educational Resources Information Center

    Shin, Jae-eun; Wastell, David G.

    2001-01-01

    Discusses research aimed at improving the educational quality of hypermedia-based computer assisted language learning systems. Focuses on a methodological framework that draws on recent developments in the field of human-computer interaction regarding interactive system design and a general constructivist approach to the design of computer-based…

  19. A Step-by-Step Design Methodology for a Base Case Vanadium Redox-Flow Battery

    ERIC Educational Resources Information Center

    Moore, Mark; Counce, Robert M.; Watson, Jack S.; Zawodzinski, Thomas A.; Kamath, Haresh

    2012-01-01

    The purpose of this work is to develop an evolutionary procedure to be used by Chemical Engineering students for the base-case design of a Vanadium Redox-Flow Battery. The design methodology is based on the work of Douglas (1985) and provides a profitability analysis at each decision level so that more profitable alternatives and directions can be…

  20. Designing Trend-Monitoring Sounds for Helicopters: Methodological Issues and an Application

    ERIC Educational Resources Information Center

    Edworthy, Judy; Hellier, Elizabeth; Aldrich, Kirsteen; Loxley, Sarah

    2004-01-01

    This article explores methodological issues in sonification and sound design arising from the design of helicopter monitoring sounds. Six monitoring sounds (each with 5 levels) were tested for similarity and meaning with 3 different techniques: hierarchical cluster analysis, linkage analysis, and multidimensional scaling. In Experiment 1,…

  1. Three-dimensional viscous design methodology for advanced technology aircraft supersonic inlet systems

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.

    1983-01-01

    A broad program to develop advanced, reliable, and user oriented three-dimensional viscous design techniques for supersonic inlet systems, and encourage their transfer into the general user community is discussed. Features of the program include: (1) develop effective methods of computing three-dimensional flows within a zonal modeling methodology; (2) ensure reasonable agreement between said analysis and selective sets of benchmark validation data; (3) develop user orientation into said analysis; and (4) explore and develop advanced numerical methodology.

  2. Application of an Integrated Methodology for Propulsion and Airframe Control Design to a STOVL Aircraft

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Mattern, Duane

    1994-01-01

    An advanced methodology for integrated flight propulsion control (IFPC) design for future aircraft, which will use propulsion system generated forces and moments for enhanced maneuver capabilities, is briefly described. This methodology has the potential to address in a systematic manner the coupling between the airframe and the propulsion subsystems typical of such enhanced maneuverability aircraft. Application of the methodology to a short take-off vertical landing (STOVL) aircraft in the landing approach to hover transition flight phase is presented with brief description of the various steps in the IFPC design methodology. The details of the individual steps have been described in previous publications and the objective of this paper is to focus on how the components of the control system designed at each step integrate into the overall IFPC system. The full nonlinear IFPC system was evaluated extensively in nonreal-time simulations as well as piloted simulations. Results from the nonreal-time evaluations are presented in this paper. Lessons learned from this application study are summarized in terms of areas of potential improvements in the STOVL IFPC design as well as identification of technology development areas to enhance the applicability of the proposed design methodology.

  3. Advanced piloted aircraft flight control system design methodology. Volume 1: Knowledge base

    NASA Technical Reports Server (NTRS)

    Mcruer, Duane T.; Myers, Thomas T.

    1988-01-01

    The development of a comprehensive and eclectic methodology for conceptual and preliminary design of flight control systems is presented and illustrated. The methodology is focused on the design stages starting with the layout of system requirements and ending when some viable competing system architectures (feedback control structures) are defined. The approach is centered on the human pilot and the aircraft as both the sources of, and the keys to the solution of, many flight control problems. The methodology relies heavily on computational procedures which are highly interactive with the design engineer. To maximize effectiveness, these techniques, as selected and modified to be used together in the methodology, form a cadre of computational tools specifically tailored for integrated flight control system preliminary design purposes. While theory and associated computational means are an important aspect of the design methodology, the lore, knowledge, and experience elements which guide and govern applications are critical features. This material is presented as summary tables, outlines, recipes, empirical data, lists, etc., which encapsulate a great deal of expert knowledge. Much of this is presented in topical knowledge summaries which are attached as Supplements. The composite of the supplements and the main body elements constitutes a first cut at a Mark 1 Knowledge Base for manned-aircraft flight control.

  4. Monitoring Progress in Child Poverty Reduction: Methodological Insights and Illustration to the Case Study of Bangladesh

    ERIC Educational Resources Information Center

    Roche, Jose Manuel

    2013-01-01

    Important steps have been taken at international summits to set up goals and targets to improve the wellbeing of children worldwide. Now the world also has more and better data to monitor progress. This paper presents a new approach to monitoring progress in child poverty reduction based on the Alkire and Foster adjusted headcount ratio and an…

  5. Integrated Controls-Structures Design Methodology: Redesign of an Evolutionary Test Structure

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Joshi, Suresh M.

    1997-01-01

    An optimization-based integrated controls-structures design methodology for a class of flexible space structures is described, and the phase-0 Controls-Structures-Integration evolutionary model, a laboratory testbed at NASA Langley, is redesigned using this integrated design methodology. The integrated controls-structures design is posed as a nonlinear programming problem to minimize the control effort required to maintain a specified line-of-sight pointing performance, under persistent white noise disturbance. Static and dynamic dissipative control strategies are employed for feedback control, and parameters of these controllers are considered as the control design variables. Sizes of strut elements in various sections of the CEM are used as the structural design variables. Design guides for the struts are developed and employed in the integrated design process, to ensure that the redesigned structure can be effectively fabricated. The superiority of the integrated design methodology over the conventional design approach is demonstrated analytically by observing a significant reduction in the average control power needed to maintain specified pointing performance with the integrated design approach.

  6. Co-design of RAD and ETHICS methodologies: a combination of information system development methods

    NASA Astrophysics Data System (ADS)

    Nasehi, Arezo; Shahriyari, Salman

    2011-12-01

    Co-design is a new trend in the social world which tries to capture different ideas in order to use the most appropriate features for a system. In this paper, the co-design of two information system methodologies is considered: rapid application development (RAD) and the effective technical and human implementation of computer-based systems (ETHICS). We consider the characteristics of these methodologies to assess the possibility of co-designing or combining them for developing an information system. To this end, four different aspects are analyzed: social or technical approach, user participation and user involvement, job satisfaction, and overcoming resistance to change. Finally, a case study using a quantitative method is analyzed in order to examine the possibility of co-design using these factors. The paper concludes that RAD and ETHICS are appropriate for co-design and offers some suggestions for the co-design.

  7. Design and tolerance analysis of a low bending loss hole-assisted fiber using statistical design methodology.

    PubMed

    Van Erps, Jürgen; Debaes, Christof; Nasilowski, Tomasz; Watté, Jan; Wojcik, Jan; Thienpont, Hugo

    2008-03-31

    We present the design of a low bending loss hole-assisted fiber for a 180°-bend fiber socket application, including a tolerance analysis for manufacturability. To this aim, we make use of statistical design methodology, combined with a fully vectorial mode solver. Two resulting designs are presented and their performance in terms of bending loss, coupling loss to Corning SMF-28 standard telecom fiber, and cut-off wavelength is calculated.
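
    A sketch of what the statistical-design and tolerance step can look like in code: a factorial sweep over the structural parameters with Monte Carlo perturbations for the manufacturing tolerance at each design point. The evaluate_bend_loss surrogate, the parameter names, and the tolerance band are all hypothetical stand-ins for the paper's fully vectorial mode solver and actual design variables:

    ```python
    # Factorial design-space sweep with per-point Monte Carlo tolerance
    # analysis; evaluate_bend_loss is a hypothetical surrogate for a mode
    # solver, and all parameters are placeholders.
    import itertools
    import random

    def evaluate_bend_loss(hole_diameter_um, hole_pitch_um):
        # Placeholder surrogate; real values would come from the mode solver.
        return 0.5 * hole_pitch_um / hole_diameter_um

    random.seed(1)
    for d, p in itertools.product([3.0, 4.0, 5.0], [6.0, 7.0, 8.0]):
        losses = [evaluate_bend_loss(d + random.uniform(-0.2, 0.2),   # +/- 0.2 um
                                     p + random.uniform(-0.2, 0.2))   # tolerance
                  for _ in range(200)]
        print(f"d = {d} um, pitch = {p} um -> worst-case loss {max(losses):.3f} dB")
    ```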

  8. Efficient preliminary floating offshore wind turbine design and testing methodologies and application to a concrete spar design

    PubMed Central

    Matha, Denis; Sandner, Frank; Molins, Climent; Campos, Alexis; Cheng, Po Wen

    2015-01-01

    The current key challenge in the floating offshore wind turbine industry and research is on designing economic floating systems that can compete with fixed-bottom offshore turbines in terms of levelized cost of energy. The preliminary platform design, as well as early experimental design assessments, are critical elements in the overall design process. In this contribution, a brief review of current floating offshore wind turbine platform pre-design and scaled testing methodologies is provided, with a focus on their ability to accommodate the coupled dynamic behaviour of floating offshore wind systems. The exemplary design and testing methodology for a monolithic concrete spar platform as performed within the European KIC AFOSP project is presented. Results from the experimental tests compared to numerical simulations are presented and analysed and show very good agreement for relevant basic dynamic platform properties. Extreme and fatigue loads and cost analysis of the AFOSP system confirm the viability of the presented design process. In summary, the exemplary application of the reduced design and testing methodology for AFOSP confirms that it represents a viable procedure during pre-design of floating offshore wind turbine platforms. PMID:25583870

  9. Efficient preliminary floating offshore wind turbine design and testing methodologies and application to a concrete spar design.

    PubMed

    Matha, Denis; Sandner, Frank; Molins, Climent; Campos, Alexis; Cheng, Po Wen

    2015-02-28

    The current key challenge in the floating offshore wind turbine industry and research is on designing economic floating systems that can compete with fixed-bottom offshore turbines in terms of levelized cost of energy. The preliminary platform design, as well as early experimental design assessments, are critical elements in the overall design process. In this contribution, a brief review of current floating offshore wind turbine platform pre-design and scaled testing methodologies is provided, with a focus on their ability to accommodate the coupled dynamic behaviour of floating offshore wind systems. The exemplary design and testing methodology for a monolithic concrete spar platform as performed within the European KIC AFOSP project is presented. Results from the experimental tests compared to numerical simulations are presented and analysed and show very good agreement for relevant basic dynamic platform properties. Extreme and fatigue loads and cost analysis of the AFOSP system confirm the viability of the presented design process. In summary, the exemplary application of the reduced design and testing methodology for AFOSP confirms that it represents a viable procedure during pre-design of floating offshore wind turbine platforms.

  11. Progressive addition lens design by optimizing NURBS surface

    NASA Astrophysics Data System (ADS)

    Liu, Yen-Liang; Hsu, Wei-Yao; Cheng, Yuan-Chieh; Su, Guo-Dung

    2011-10-01

    Progressive addition lenses (PALs) are used to compensate for presbyopia, which is caused by the loss of accommodation in aging eyes. Such eyes need different optical powers, provided by eyeglasses, when viewing objects at different distances: a smaller optical power is required in the distance zone and a larger one in the near zone. A progressive addition lens provides these different power requirements in a single lens. This paper introduces the whole process of PAL production, from design and fabrication to measurement. The PAL is designed by optimizing a NURBS surface, and the parameters of the merit function are adjusted to design lenses with different specifications. The simulation results confirm that the power distributes as expected and that cylinder is controlled at an acceptable level. In addition, sample lenses have been fabricated and measured. We applied precision machining to produce the molds for plastic injection, and the samples were then produced by injecting polycarbonate into the molds. Finally, an Ultra Accuracy 3D Profilometer was used to measure the sample PALs. Practical examination shows that our designs are achievable and feasible for practical use.
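
    The merit-function optimization described above can be sketched in outline: local surface power and cylinder follow from the principal curvatures of the sagged surface, and the merit function penalizes deviation from a target power together with residual cylinder. The code below is a minimal illustration using finite differences on a sampled surface; the NURBS parameterization, the optimizer, and the refractive index value are omitted or assumed, and the thin-surface formulas are only an approximation.

    ```python
    import numpy as np

    def surface_power_cylinder(z, dx, n_index=1.53):
        """Local mean power and cylinder (diopters) of a lens surface z(x, y) in mm,
        from its principal curvatures via finite differences (thin-surface approx.)."""
        zy, zx = np.gradient(z, dx)
        zyy, _ = np.gradient(zy, dx)
        zxy, zxx = np.gradient(zx, dx)
        denom = 1 + zx**2 + zy**2
        H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy) / (2 * denom**1.5)
        K = (zxx * zyy - zxy**2) / denom**2
        power = (n_index - 1) * H * 1e3                          # mean power, D
        cyl = (n_index - 1) * 2 * np.sqrt(np.maximum(H**2 - K, 0.0)) * 1e3  # |P1 - P2|, D
        return power, cyl

    def merit(z, dx, target_power, w_cyl=1.0):
        """Toy merit function: squared power error plus weighted cylinder penalty."""
        power, cyl = surface_power_cylinder(z, dx)
        return np.sum((power - target_power)**2) + w_cyl * np.sum(cyl**2)

    # Sanity check on a paraxial sphere of radius 100 mm: ~5.3 D power, ~0 D cylinder
    x = np.linspace(-20, 20, 201)
    X, Y = np.meshgrid(x, x)
    Z = (X**2 + Y**2) / (2 * 100.0)
    p, c = surface_power_cylinder(Z, x[1] - x[0])
    print(round(p[100, 100], 2), round(c[100, 100], 4), round(merit(Z, x[1] - x[0], 5.3), 2))
    ```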

  12. Development of an Expert Judgement Elicitation and Calibration Methodology for Risk Analysis in Conceptual Vehicle Design

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Keating, Charles; Conway, Bruce; Chytka, Trina

    2004-01-01

    A comprehensive expert-judgment elicitation methodology to quantify input parameter uncertainty and analysis tool uncertainty in a conceptual launch vehicle design analysis has been developed. The ten-phase methodology seeks to obtain expert judgment opinion for quantifying uncertainties as a probability distribution so that multidisciplinary risk analysis studies can be performed. The calibration and aggregation techniques presented as part of the methodology are aimed at improving individual expert estimates, and provide an approach to aggregate multiple expert judgments into a single probability distribution. The purpose of this report is to document the methodology development and its validation through application to a reference aerospace vehicle. A detailed summary of the application exercise, including calibration and aggregation results is presented. A discussion of possible future steps in this research area is given.
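
    One concrete ingredient of such a methodology is the aggregation of several experts' estimates into a single probability distribution. The sketch below implements a simple calibration-weighted linear opinion pool over triangular expert distributions; the weights, the three-point estimates, and the example quantity are invented, and the report's ten-phase procedure is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Each expert gives (low, mode, high) for one uncertain input parameter
    experts = [(1.0, 2.0, 4.0), (0.5, 1.5, 3.0), (1.5, 2.5, 5.0)]
    # Calibration weights, e.g. from seed questions (assumed; must sum to 1)
    weights = np.array([0.5, 0.2, 0.3])

    # Linear opinion pool: pick an expert by weight, then sample their triangular
    n = 100_000
    idx = rng.choice(len(experts), size=n, p=weights)
    params = np.array(experts)[idx]
    samples = rng.triangular(params[:, 0], params[:, 1], params[:, 2])

    print(f"aggregated mean = {samples.mean():.2f}, 95% interval = "
          f"[{np.percentile(samples, 2.5):.2f}, {np.percentile(samples, 97.5):.2f}]")
    ```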

  13. Compact DEMO, SlimCS: design progress and issues

    NASA Astrophysics Data System (ADS)

    Tobita, K.; Nishio, S.; Enoeda, M.; Kawashima, H.; Kurita, G.; Tanigawa, H.; Nakamura, H.; Honda, M.; Saito, A.; Sato, S.; Hayashi, T.; Asakura, N.; Sakurai, S.; Nishitani, T.; Ozeki, T.; Ando, M.; Ezato, K.; Hamamatsu, K.; Hirose, T.; Hoshino, T.; Ide, S.; Inoue, T.; Isono, T.; Liu, C.; Kakudate, S.; Kawamura, Y.; Mori, S.; Nakamichi, M.; Nishi, H.; Nozawa, T.; Ochiai, K.; Ogiwara, H.; Oyama, N.; Sakamoto, K.; Sakamoto, Y.; Seki, Y.; Shibama, Y.; Shimizu, K.; Suzuki, S.; Takahashi, K.; Tanigawa, H.; Tsuru, D.; Yamanishi, T.; Yoshida, T.

    2009-07-01

    The design progress in a compact low aspect ratio (low A) DEMO reactor, 'SlimCS', and its design issues are reported. The design study focused mainly on the torus configuration, including the blanket, divertor, materials and maintenance scheme. For continuity with the Japanese ITER-TBM, the blanket is based on a water-cooled solid breeder blanket. For vertical stability of the elongated plasma and high beta access, the blanket is segmented into replaceable and permanent blankets, and a sector-wide conducting shell is arranged in between these blankets. A numerical calculation indicates that fuel self-sufficiency can be satisfied when the blanket interior is ideally fabricated. The allowable heat load to the divertor plate should be 8 MW m⁻² or lower, which can be a critical constraint in determining the handling power of DEMO.

  14. Failure: A Source of Progress in Maintenance and Design

    NASA Astrophysics Data System (ADS)

    Chaïb, R.; Taleb, M.; Benidir, M.; Verzea, I.; Bellaouar, A.

    This approach allows failure to be used as a source of progress in maintenance and design: to detect the most critical components in a piece of equipment, to determine the priority order of the maintenance actions to be carried out, to direct the exploitation procedure towards the most penalizing links in the equipment, and even to define the changes and recommendations necessary for future improvement. One can thus assess the pathological behaviour of the material and increase its availability, even extend its lifespan and improve its future design. In this context, and in the light of these points, failures are important in managing the maintenance function. Indeed, it has become important to understand the phenomena of failure and degradation of equipment in order to establish an appropriate maintenance policy for the rational use of mechanical components, and to move towards the practice of proactive maintenance [1] and of maintenance considered at the design stage [2].

  15. Space station definitions, design, and development. Task 5: Multiple arm telerobot coordination and control: Manipulator design methodology

    NASA Technical Reports Server (NTRS)

    Stoughton, R. M.

    1990-01-01

    A proposed methodology applicable to the design of manipulator systems is described. The current design process is especially weak in the preliminary design phase, since there is no accepted measure to be used in trading off the different options available for the various subsystems. The design process described uses Cartesian End-Effector Impedance as a measure of performance for the system. Having this measure of performance, it is shown how it may be used to determine the trade-offs necessary to the preliminary design phase. The design process involves three main parts: (1) determination of desired system performance in terms of End-Effector Impedance; (2) trade-off of design options to achieve this desired performance; and (3) verification of system performance through laboratory testing. The design process is developed using numerous examples and experiments to demonstrate the feasibility of this approach to manipulator design.
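
    A worked fragment of the impedance measure: for a serial manipulator, joint-space stiffness maps to Cartesian end-effector stiffness through the Jacobian as Kx = J^-T Kq J^-1. The sketch below evaluates this for a planar two-link arm with assumed link lengths and joint stiffnesses, showing how posture changes the stiffness seen at the tool; it is an illustration of the general relation, not the report's procedure.

    ```python
    import numpy as np

    def jacobian_2link(q1, q2, l1=0.5, l2=0.4):
        """Geometric Jacobian of a planar two-link arm (link lengths in meters)."""
        s1, c1 = np.sin(q1), np.cos(q1)
        s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
        return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                         [ l1 * c1 + l2 * c12,  l2 * c12]])

    # Joint-space stiffness (N*m/rad), e.g. from actuator/transmission selection (assumed)
    Kq = np.diag([300.0, 150.0])
    J = jacobian_2link(np.deg2rad(30), np.deg2rad(60))

    # Cartesian end-effector stiffness: Kx = J^-T Kq J^-1
    Kx = np.linalg.inv(J).T @ Kq @ np.linalg.inv(J)
    print(np.round(Kx, 1))  # off-diagonal terms show posture-dependent coupling
    ```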

  16. The Progression of Prospective Primary Teachers' Conceptions of the Methodology of Teaching

    ERIC Educational Resources Information Center

    Rivero, Ana; Azcarate, Pilar; Porlan, Rafael; del Pozo, Rosa Martin; Harres, Joao

    2011-01-01

    This article describes the evolution of prospective primary teachers' conceptions of the methodology of teaching. Three categories were analyzed: the concept of activity, the organization of activities, and the concept of teaching resources. The study was conducted with five teams of prospective teachers, who were participating in teacher…

  17. A cost-effective methodology for the design of massively-parallel VLSI functional units

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Sriram, G.; Desouza, J.

    1993-01-01

    In this paper we propose a generalized methodology for the design of cost-effective massively-parallel VLSI Functional Units. This methodology is based on a technique of generating and reducing a massive bit-array on the mask-programmable PAcube VLSI array. This methodology unifies (maintains identical data flow and control) the execution of complex arithmetic functions on PAcube arrays. It is highly regular, expandable and uniform with respect to problem-size and wordlength, thereby reducing the communication complexity. The memory-functional unit interface is regular and expandable. Using this technique, functional units of dedicated processors can be mask-programmed on the naked PAcube arrays, reducing the turn-around time. The production cost of such dedicated processors can be drastically reduced since the naked PAcube arrays can be mass-produced. Analysis of the performance of functional units designed by our method yields promising results.

  18. Cyber-Informed Engineering: The Need for a New Risk Informed and Design Methodology

    SciTech Connect

    Price, Joseph Daniel; Anderson, Robert Stephen

    2015-06-01

    Current engineering and risk management methodologies do not contain the foundational assumptions required to address the intelligent adversary's capabilities in malevolent cyber attacks. Current methodologies focus on equipment failures or human error as initiating events for a hazard, while cyber attacks use the functionality of a trusted system to perform operations outside of the intended design and without the operator's knowledge. These threats can bypass or manipulate traditionally engineered safety barriers and present false information, invalidating the fundamental basis of a safety analysis. Cyber threats must be fundamentally analyzed from a completely new perspective, one in which neither equipment nor human operation can be fully trusted. A new risk analysis and design methodology needs to be developed to address this rapidly evolving threat landscape.

  19. BEAM STOP DESIGN METHODOLOGY AND DESCRIPTION OF A NEW SNS BEAM STOP

    SciTech Connect

    Polsky, Yarom; Plum, Michael A; Geoghegan, Patrick J; Jacobs, Lorelei L; Lu, Wei; McTeer, Stephen Mark

    2010-01-01

    The design of accelerator components such as magnets, accelerator cavities and beam instruments tends to be a fairly standardized and collective effort within the particle accelerator community, with well established performance, reliability and, in some cases, even budgetary criteria. Beam stop design, by contrast, has historically been comparatively subjective, with much more general goals. This lack of rigor has led to a variety of facility implementations with limited standardization and minimal consensus on approach to development within the particle accelerator community. At the Spallation Neutron Source (SNS), for example, there are four high power beam stops in use, three of which have significantly different design solutions. This paper describes the design of a new off-momentum beam stop for the SNS. The technical description of the system is complemented by a discussion of design methodology. This paper presents an overview of the new SNS HEBT off-momentum beam stop and outlines a methodology for beam stop system design. The new beam stop consists of aluminium and steel blocks cooled by a closed-loop forced-air system and is expected to be commissioned this summer. The design methodology outlined in the paper represents a basic description of the process, data, analyses and critical decisions involved in the development of a beam stop system.

  20. A Novel Multiscale Physics Based Progressive Failure Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Collier, Craig S.; Yarrington, Phillip W.

    2008-01-01

    A variable fidelity, multiscale, physics based finite element procedure for predicting progressive damage and failure of laminated continuous fiber reinforced composites is introduced. At every integration point in a finite element model, progressive damage is accounted for at the lamina level using thermodynamically based Schapery theory. Separate failure criteria are applied at either the global scale or the microscale in two different finite element models. A micromechanics model, the Generalized Method of Cells, is used to evaluate failure criteria at the microscale. The stress-strain behavior and observed failure mechanisms are compared with experimental results for both models.
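
    The essence of a progressive-damage loop can be shown in a few lines: at each integration point, stress is computed from a damaged secant stiffness, a failure criterion is checked, and stiffness is degraded when the criterion is met. The sketch below is a one-dimensional stand-in with invented lamina properties; Schapery theory and the Generalized Method of Cells are far richer than this caricature.

    ```python
    import numpy as np

    E0, sigma_max, d_rate = 70e3, 600.0, 0.15  # modulus and strength in MPa (illustrative)

    def progressive_response(strains):
        """Secant-stiffness degradation once a max-stress criterion is met; stands in
        for the Schapery-theory/micromechanics machinery at one integration point."""
        d, out = 0.0, []
        for eps in strains:
            sigma = (1 - d) * E0 * eps
            while sigma > sigma_max and d < 0.99:  # failure criterion triggered
                d = min(d + d_rate, 0.99)          # accumulate damage
                sigma = (1 - d) * E0 * eps         # recompute with degraded stiffness
            out.append(sigma)
        return np.array(out)

    eps = np.linspace(0, 0.02, 50)
    stress = progressive_response(eps)
    print(f"peak stress {stress.max():.0f} MPa at strain {eps[stress.argmax()]:.4f}")
    ```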

  1. Gaining Methodological Insight through the Use of Single Subject Designs in Hearing-Impaired Classrooms.

    ERIC Educational Resources Information Center

    Luetke-Stahlman, Barbara

    1986-01-01

    Teachers should familiarize themselves with the practical advantages of single-subject methodology in attempting to meet the various learning needs of hearing-impaired students. Two examples are provided to illustrate how these designs can act as tools for deciding which of several teaching methods are most beneficial to particular students.…

  2. A Methodological Framework for Instructional Design Model Development: Critical Dimensions and Synthesized Procedures

    ERIC Educational Resources Information Center

    Lee, Jihyun; Jang, Seonyoung

    2014-01-01

    Instructional design (ID) models have been developed to promote understandings of ID reality and guide ID performance. As the number and diversity of ID practices grows, implicit doubts regarding the reliability, validity, and usefulness of ID models suggest the need for methodological guidance that would help to generate ID models that are…

  3. Methodology for the Preliminary Design of High Performance Schools in Hot and Humid Climates

    ERIC Educational Resources Information Center

    Im, Piljae

    2009-01-01

    A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge easily to evaluate accurately energy efficient measures for K-5 schools, which would contribute to the…

  4. IDR: A Participatory Methodology for Interdisciplinary Design in Technology Enhanced Learning

    ERIC Educational Resources Information Center

    Winters, Niall; Mor, Yishay

    2008-01-01

    One of the important themes that emerged from the CAL'07 conference was the failure of technology to bring about the expected disruptive effect to learning and teaching. We identify one of the causes as an inherent weakness in prevalent development methodologies. While the problem of designing technology for learning is irreducibly…

  5. Fundamentals of clinical outcomes assessment for spinal disorders: study designs, methodologies, and analyses.

    PubMed

    Vavken, Patrick; Ganal-Antonio, Anne Kathleen B; Shen, Francis H; Chapman, Jens R; Samartzis, Dino

    2015-04-01

    Study Design: A broad narrative review. Objective: Management of spinal disorders is continuously evolving, with new technologies being constantly developed. Regardless, assessment of patient outcomes is key to understanding the safety and efficacy of various therapeutic interventions. As such, evidence-based spine care is an essential component of the spine specialist's armamentarium for critically analyzing the reported literature and executing studies aimed at improving patient care and changing clinical practice. The following article, part one of a two-part series, is meant to bring attention to the pros and cons of various study designs, their methodological issues, and statistical considerations. Methods: An extensive review of the peer-reviewed literature was performed, irrespective of language of publication, addressing study designs and their methodologies as well as statistical concepts. Results: Numerous articles and concepts addressing study designs and their methodological considerations as well as statistical analytical concepts have been reported. Their applications in the context of spine-related conditions and disorders were noted. Conclusion: Understanding the fundamental principles of study designs and their methodological considerations as well as statistical analyses can further advance and improve future spine-related research.

  6. The Case in Case-Based Design of Educational Software: A Methodological Interrogation

    ERIC Educational Resources Information Center

    Khan, S.

    2008-01-01

    This research assessed the value of case study methodology in the design of an educational computer simulation. Three sources of knowledge were compared to assess the value of case study: practitioner and programmer knowledge, disciplinary knowledge, and knowledge obtained from a case study of teacher practice. A retrospective analysis revealed…

  7. QFD: a methodological tool for integration of ergonomics at the design stage.

    PubMed

    Marsot, Jacques

    2005-03-01

    As a marked increase in the number of musculoskeletal disorders was noted in many industrialized countries and more specifically in companies that require the use of hand tools, the French National Research and Safety Institute launched in 1999 a research program on the topic of integrating ergonomics into hand tool design. After a brief review of the problems of integrating ergonomics at the design stage, the paper shows how the "Quality Function Deployment" method has been applied to the design of a boning knife and it highlights the difficulties encountered. Then, it demonstrates how this method can be a methodological tool geared to greater ergonomics consideration in product design. PMID:15694072
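
    The core QFD calculation is small enough to show directly: a relationship matrix links weighted user requirements to design characteristics, and the weighted column sums rank the characteristics by technical importance. The sketch below uses invented requirements, characteristics, and ratings for a hand-tool example in the spirit of the study; it is not the paper's actual house of quality.

    ```python
    import numpy as np

    # User requirements with importance weights from ergonomics surveys (assumed)
    requirements = {"low grip force": 5, "neutral wrist posture": 4, "low vibration": 3}
    # Design characteristics of the tool (illustrative)
    characteristics = ["blade sharpness", "handle diameter", "handle angle"]

    # Relationship matrix: 9 = strong, 3 = medium, 1 = weak, 0 = none
    R = np.array([[9, 3, 1],
                  [0, 3, 9],
                  [3, 1, 0]])

    weights = np.array(list(requirements.values()))
    scores = weights @ R  # technical importance of each design characteristic
    for name, s in sorted(zip(characteristics, scores), key=lambda t: -t[1]):
        print(f"{name}: {s}")
    ```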

  8. A low-power photovoltaic system with energy storage for radio communications: description and design methodology

    SciTech Connect

    Chapman, C.P.; Chapman, P.D.

    1982-01-01

    A low power photovoltaic system was constructed with approximately 500 amp hours of battery energy storage to provide power to an emergency amateur radio communications center. The system can power the communications center for about 72 hours of continuous no-sun operation. Complete construction details and a design methodology algorithm are given with abundant engineering data and adequate theory to allow similar systems to be constructed, scaled up or down, with minimum design effort.
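
    The storage sizing quoted above can be checked with one line of arithmetic: required capacity equals average load times autonomy hours divided by bus voltage and usable depth of discharge. The figures below (bus voltage, average load, depth of discharge) are assumptions for illustration, not values taken from the report.

    ```python
    # Back-of-envelope check of the ~500 Ah storage for ~72 h of no-sun operation
    load_w = 65.0       # average station load, W (assumed)
    autonomy_h = 72.0   # required no-sun autonomy, h (from the record)
    bus_v = 12.0        # battery bus voltage, V (assumed)
    usable_dod = 0.8    # usable depth of discharge (assumed)

    required_ah = load_w * autonomy_h / (bus_v * usable_dod)
    print(f"required storage ~ {required_ah:.0f} Ah")  # ~490 Ah, same order as quoted
    ```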

  9. Low-power photovoltaic system with energy storage for radio communications. Description and design methodology

    SciTech Connect

    Chapman, C.P.; Chapman, P.D.; Lewison, A.H.

    1982-01-15

    A low-power photovoltaic system was constructed with approximately 500 amp-hours of battery energy storage to provide power to an emergency amateur radio communications center. The system can power the communications center for about 72 hours of continuous no-sun operation. Complete construction details and a design methodology algorithm are given with abundant engineering data and adequate theory to allow similar systems to be constructed, scaled up or down, with minimum design effort.

  10. A low-power photovoltaic system with energy storage for radio communications: Description and design methodology

    NASA Technical Reports Server (NTRS)

    Chapman, C. P.; Chapman, P. D.; Lewison, A. H.

    1982-01-01

    A low power photovoltaic system was constructed with approximately 500 amp hours of battery energy storage to provide power to an emergency amateur radio communications center. The system can power the communications center for about 72 hours of continuous no-sun operation. Complete construction details and a design methodology algorithm are given with abundant engineering data and adequate theory to allow similar systems to be constructed, scaled up or down, with minimum design effort.

  11. Design Methodology for Multi-Element High-Lift Systems on Subsonic Civil Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Pepper, R. S.; vanDam, C. P.

    1996-01-01

    The choice of a high-lift system is crucial in the preliminary design process of a subsonic civil transport aircraft. Its purpose is to increase the allowable aircraft weight or decrease the aircraft's wing area for a given takeoff and landing performance. However, the implementation of a high-lift system into a design must be done carefully, for it can improve the aerodynamic performance of an aircraft but may also drastically increase the aircraft empty weight. If designed properly, a high-lift system can improve the cost effectiveness of an aircraft by increasing the payload weight for a given takeoff and landing performance. This is why the design methodology for a high-lift system should incorporate aerodynamic performance, weight, and cost. The airframe industry has experienced rapid technological growth in recent years which has led to significant advances in high-lift systems. For this reason many existing design methodologies have become obsolete since they are based on outdated low Reynolds number wind-tunnel data and can no longer accurately predict the aerodynamic characteristics or weight of current multi-element wings. Therefore, a new design methodology has been created that reflects current aerodynamic, weight, and cost data and provides enough flexibility to allow incorporation of new data when it becomes available.

  12. Role of ligand-based drug design methodologies toward the discovery of new anti-Alzheimer agents: future perspectives in Fragment-Based Ligand Design.

    PubMed

    Speck-Planche, A; Luan, F; Cordeiro, M N D S

    2012-01-01

    Alzheimer's disease (AD), a degenerative disease affecting the brain, is the single most common source of dementia in adults. The cause and the progression of AD still remain a mystery among medical experts. As a result, a cure has not yet been discovered, even after decades of research dating back to 1906, when the disease was first identified. Despite the efforts of the scientific community, several of the biological receptors associated with AD have not been sufficiently studied to date, limiting in turn the design of new and more potent anti-AD agents. Thus, the search for new drug candidates as inhibitors of different targets associated with AD constitutes an essential part of the discovery of new and more efficient anti-AD therapies. The present work is focused on the role of the Ligand-Based Drug Design (LBDD) methodologies which have been applied to the elucidation of new molecular entities with high inhibitory activity against targets related to AD. Particular emphasis is also given to the current state of fragment-based ligand approaches as alternatives to the Fragment-Based Drug Discovery (FBDD) methodologies. Finally, several guidelines are offered to show how the use of fragment-based descriptors can be decisive for the design of multi-target inhibitors of proteins associated with AD. PMID:22376033

  13. Three-dimensional viscous design methodology for advanced technology aircraft supersonic inlet systems

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.

    1984-01-01

    A broad program to develop advanced, reliable, and user oriented three-dimensional viscous design techniques for supersonic inlet systems, and encourage their transfer into the general user community is discussed. Features of the program include: (1) develop effective methods of computing three-dimensional flows within a zonal modeling methodology; (2) ensure reasonable agreement between said analysis and selective sets of benchmark validation data; (3) develop user orientation into said analysis; and (4) explore and develop advanced numerical methodology. Previously announced in STAR as N84-13190

  14. Novel thermal management system design methodology for power lithium-ion battery

    NASA Astrophysics Data System (ADS)

    Nieto, Nerea; Díaz, Luis; Gastelurrutia, Jon; Blanco, Francisco; Ramos, Juan Carlos; Rivas, Alejandro

    2014-12-01

    Battery packs composed of large-format lithium-ion cells are increasingly being adopted in hybrid and pure electric vehicles in order to use energy more efficiently and for better environmental performance. Safety and cycle life are two of the main concerns regarding this technology, which are closely related to the cells' operating behavior and temperature asymmetries in the system. Therefore, the temperature of the cells in battery packs needs to be controlled by thermal management systems (TMSs). In the present paper an improved design methodology for developing TMSs is proposed. This methodology involves the development of different mathematical models for heat generation, transmission, and dissipation and their coupling and integration into the battery pack product design methodology in order to improve overall safety and performance. The methodology is validated by comparing simulation results with laboratory measurements on a single module of the battery pack designed at IK4-IKERLAN for a traction application. The maximum difference between model predictions and experimental temperature data is 2 °C. The models developed have shown potential for use in battery thermal management studies for EV/HEV applications since they allow for scalability with accuracy and reasonable simulation time.
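
    A single-cell lumped-capacitance balance is the simplest version of the heat generation/transmission/dissipation coupling described above. The sketch below integrates m·c·dT/dt = Q_gen − h·A·(T − T_amb) with invented cell and cooling parameters; the paper's validated models are of course far more detailed.

    ```python
    # Lumped-capacitance model of one cell: m*c*dT/dt = Q_gen - h*A*(T - T_amb)
    m, c = 0.9, 1000.0        # cell mass (kg), specific heat (J/kg/K) - assumed
    h, A = 25.0, 0.02         # convection coeff (W/m^2/K), cooled area (m^2) - assumed
    T_amb, Q_gen = 25.0, 6.0  # ambient (deg C), ohmic + entropic heat (W) - assumed

    dt, T = 1.0, 25.0
    for _ in range(7200):     # two hours of operation, explicit Euler steps
        T += dt * (Q_gen - h * A * (T - T_amb)) / (m * c)

    print(f"steady-state rise ~ {Q_gen / (h * A):.1f} K, T after 2 h = {T:.1f} C")
    ```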

  15. Aero-Mechanical Design Methodology for Subsonic Civil Transport High-Lift Systems

    NASA Technical Reports Server (NTRS)

    vanDam, C. P.; Shaw, S. G.; VanderKam, J. C.; Brodeur, R. R.; Rudolph, P. K. C.; Kinney, D.

    2000-01-01

    In today's highly competitive and economically driven commercial aviation market, the trend is to make aircraft systems simpler and to shorten their design cycle which reduces recurring, non-recurring and operating costs. One such system is the high-lift system. A methodology has been developed which merges aerodynamic data with kinematic analysis of the trailing-edge flap mechanism with minimum mechanism definition required. This methodology provides quick and accurate aerodynamic performance prediction for a given flap deployment mechanism early on in the high-lift system preliminary design stage. Sample analysis results for four different deployment mechanisms are presented as well as descriptions of the aerodynamic and mechanism data required for evaluation. Extensions to interactive design capabilities are also discussed.

  16. Design Methodology of a Dual-Halbach Array Linear Actuator with Thermal-Electromagnetic Coupling.

    PubMed

    Eckert, Paulo Roberto; Flores Filho, Aly Ferreira; Perondi, Eduardo; Ferri, Jeferson; Goltz, Evandro

    2016-03-11

    This paper proposes a design methodology for linear actuators, considering thermal and electromagnetic coupling with geometrical and temperature constraints, that maximizes force density and minimizes force ripple. The method allows defining an actuator for given specifications in a step-by-step way so that requirements are met and the temperature within the device is maintained under or equal to its maximum allowed for continuous operation. According to the proposed method, the electromagnetic and thermal models are built with quasi-static parametric finite element models. The methodology was successfully applied to the design of a linear cylindrical actuator with a dual quasi-Halbach array of permanent magnets and a moving-coil. The actuator can produce an axial force of 120 N and a stroke of 80 mm. The paper also presents a comparative analysis between results obtained considering only an electromagnetic model and the thermal-electromagnetic coupled model. This comparison shows that the final designs for both cases differ significantly, especially regarding its active volume and its electrical and magnetic loading. Although in this paper the methodology was employed to design a specific actuator, its structure can be used to design a wide range of linear devices if the parametric models are adjusted for each particular actuator.
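
    The thermal-electromagnetic coupling at the heart of the method can be caricatured as follows: sweep the coil current upward, stop when a simple thermal-resistance model predicts the winding limit is reached, and report the Lorentz force at that current. All constants below are illustrative placeholders, not the paper's finite element models or values.

    ```python
    import numpy as np

    # Illustrative constants for a tubular moving-coil actuator (assumed values)
    B_g = 0.8                    # air-gap flux density from the Halbach array, T
    L_wire = 60.0                # active conductor length in the gap, m
    R_th = 2.0                   # coil-to-ambient thermal resistance, K/W
    T_amb, T_max = 40.0, 130.0   # ambient and insulation limit, deg C
    R_coil = 4.0                 # coil resistance, ohm

    # Largest continuous current meeting the thermal constraint, and force F = B*I*L
    I_max, F = 0.0, 0.0
    for I in np.linspace(0.1, 20, 400):
        T_coil = T_amb + R_th * R_coil * I**2   # copper loss drives the temperature
        if T_coil > T_max:
            break
        I_max, F = I, B_g * I * L_wire

    print(f"I_max ~ {I_max:.2f} A, continuous force ~ {F:.0f} N")
    ```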

  18. Design Methodology of a Dual-Halbach Array Linear Actuator with Thermal-Electromagnetic Coupling

    PubMed Central

    Eckert, Paulo Roberto; Flores Filho, Aly Ferreira; Perondi, Eduardo; Ferri, Jeferson; Goltz, Evandro

    2016-01-01

    This paper proposes a design methodology for linear actuators, considering thermal and electromagnetic coupling with geometrical and temperature constraints, that maximizes force density and minimizes force ripple. The method allows defining an actuator for given specifications in a step-by-step way so that requirements are met and the temperature within the device is maintained under or equal to its maximum allowed for continuous operation. According to the proposed method, the electromagnetic and thermal models are built with quasi-static parametric finite element models. The methodology was successfully applied to the design of a linear cylindrical actuator with a dual quasi-Halbach array of permanent magnets and a moving-coil. The actuator can produce an axial force of 120 N and a stroke of 80 mm. The paper also presents a comparative analysis between results obtained considering only an electromagnetic model and the thermal-electromagnetic coupled model. This comparison shows that the final designs for both cases differ significantly, especially regarding its active volume and its electrical and magnetic loading. Although in this paper the methodology was employed to design a specific actuator, its structure can be used to design a wide range of linear devices if the parametric models are adjusted for each particular actuator. PMID:26978370

  19. The Progression of Prospective Primary Teachers' Conceptions of the Methodology of Teaching

    NASA Astrophysics Data System (ADS)

    Rivero, Ana; Azcárate, Pilar; Porlán, Rafael; Martín Del Pozo, Rosa; Harres, Joao

    2011-11-01

    This article describes the evolution of prospective primary teachers' conceptions of the methodology of teaching. Three categories were analyzed: the concept of activity, the organization of activities, and the concept of teaching resources. The study was conducted with five teams of prospective teachers, who were participating in teacher education courses of a constructivist orientation. The results showed very different itineraries in the processes of change, and the presence of two major obstacles—the belief that teaching is the direct cause of learning, and epistemological absolutism. The study allows us to deduce some implications for initial teacher education.

  20. A Robust Design Methodology for Optimal Microscale Secondary Flow Control in Compact Inlet Diffusers

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Keller, Dennis J.

    2001-01-01

    It is the purpose of this study to develop an economical Robust design methodology for microscale secondary flow control in compact inlet diffusers. To illustrate the potential of economical Robust Design methodology, two different mission strategies were considered for the subject inlet, namely Maximum Performance and Maximum HCF Life Expectancy. The Maximum Performance mission maximized total pressure recovery while the Maximum HCF Life Expectancy mission minimized the mean of the first five Fourier harmonic amplitudes, i.e., 'collectively' reduced all the harmonic 1/2 amplitudes of engine face distortion. Each of the mission strategies was subject to a low engine face distortion constraint, i.e., DC60<0.10, which is a level acceptable for commercial engines. For each of these mission strategies, an 'Optimal Robust' (open loop control) and an 'Optimal Adaptive' (closed loop control) installation was designed over a twenty degree angle-of-incidence range. The Optimal Robust installation used economical Robust Design methodology to arrive at a single design which operated over the entire angle-of-incidence range (open loop control). The Optimal Adaptive installation optimized all the design parameters at each angle-of-incidence. Thus, the Optimal Adaptive installation would require a closed loop control system to sense a proper signal for each effector and modify that effector device, whether mechanical or fluidic, for optimal inlet performance. In general, the performance differences between the Optimal Adaptive and Optimal Robust installation designs were found to be marginal. This suggests, however, that Optimal Robust open loop installation designs can be very competitive with Optimal Adaptive closed loop designs. Secondary flow control in inlets is inherently robust, provided it is optimally designed. Therefore, the new methodology presented in this paper, combined array 'Lower Order' approach to Robust DOE, offers the aerodynamicist a very viable and
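
    The distortion metric used by the Maximum HCF Life Expectancy mission, the mean of the first few Fourier harmonic amplitudes of engine face distortion, is straightforward to compute from ring pressure data. The sketch below does so with the FFT on synthetic total-pressure samples; the probe count and pressure pattern are invented for illustration.

    ```python
    import numpy as np

    # Total-pressure samples around one ring of a 40-probe engine-face rake (synthetic)
    theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
    pt = 1.0 - 0.04 * np.cos(theta) - 0.015 * np.cos(2 * theta - 0.6)

    # Amplitudes of the first five circumferential harmonics via the DFT
    coeffs = np.fft.rfft(pt) / len(pt)
    amps = 2 * np.abs(coeffs[1:6])   # recovers the 0.04 and 0.015 amplitudes above
    print("harmonics 1-5:", np.round(amps, 4), "| mean:", round(float(amps.mean()), 4))
    ```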

  1. A methodology for robust structural design with application to active aeroelastic wings

    NASA Astrophysics Data System (ADS)

    Zink, Paul Scott

    A new design process for Active Aeroelastic Wing (AAW) technology was developed, in which control surface gear ratios and structural design variables were treated together in the same optimization problem, acting towards the same objective of weight minimization. This is in contrast to traditional AAW design processes that treat design of the gear ratios and design of the structure as separate optimization problems, each with their own different objectives and constraints, executed in an iterative fashion. The demonstration of the new AAW design process, implemented in an efficient modal-based structural analysis and optimization code, on a lightweight fighter resulted in a 15% reduction in wing box skin weight over a more traditional AAW design process. In addition, the new process was far more streamlined than the traditional approach in that it was performed in one continuous run and did not require the exchange of data between modules. The new AAW design process was then used in the development of a methodology for the design of AAW structures that are robust to uncertainty in maneuver loads which arise from the use of linear aerodynamics. Maneuver load uncertainty was modeled probabilistically and based on typical differences between rigid loads as predicted by nonlinear and linear aerodynamic theory. These models were used to augment the linear aerodynamic loads that had been used in the AAW design process. Characteristics of the robust design methodology included: use of a criticality criterion based on a strain energy formulation to determine what loads were most critical to the structure, Latin Hypercube Sampling for the propagation of uncertainty to the criterion function, and redesign of the structure, using the new AAW design process, to the most critical loads identified. The demonstration of the methodology resulted in a wing box skin structure that was 11% heavier than an AAW structure designed only with linear aerodynamics. However, it was
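
    Of the ingredients listed, Latin Hypercube Sampling is easy to make concrete: stratify each uncertain dimension, take one sample per stratum, and shuffle the strata across dimensions. The sketch below propagates two assumed ±10% load factors through a stand-in criticality function; the strain-energy criterion and the wing-box model themselves are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def latin_hypercube(n_samples, n_dims, rng):
        """Stratified uniform samples in [0, 1)^d: one sample per stratum per dim."""
        strata = np.tile(np.arange(n_samples), (n_dims, 1))
        return (rng.permuted(strata, axis=1).T + rng.random((n_samples, n_dims))) / n_samples

    # Two load-uncertainty factors, uniform +/-10% about nominal (assumed)
    factors = 0.9 + 0.2 * latin_hypercube(200, 2, rng)

    def criticality(f):
        """Stand-in for the strain-energy criticality criterion on the structure."""
        return 1.0 * f[:, 0] ** 2 + 0.6 * f[:, 1] ** 2

    worst = factors[np.argmax(criticality(factors))]
    print(f"most critical load factors: {worst.round(3)}")
    ```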

  2. Methodology to design a municipal solid waste pre-collection system. A case study

    SciTech Connect

    Gallardo, A. Carlos, M. Peris, M. Colomer, F.J.

    2015-02-15

    Highlights: • MSW recovery starts at homes; therefore it is important to make it easy for people. • Additionally, to optimize MSW collection, the pre-collection stage must be planned beforehand. • A methodology to organize pre-collection considering several factors is presented. • The methodology has been verified by applying it to a mid-sized Spanish town. - Abstract: Municipal solid waste (MSW) management is an important task that local governments as well as private companies must address to protect human health and the environment and to preserve natural resources. To design an adequate MSW management plan, the first step is to define the waste generation and composition patterns of the town. As these patterns depend on several socio-economic factors, it is advisable to characterize those factors first. Moreover, the waste generation and composition patterns may vary across the town and over time. Generally, the data are not homogeneous across the city, as neither the number of inhabitants nor the economic activity is constant. Therefore, if all the information is shown in thematic maps, the final waste management decisions can be made more efficiently. The main aim of this paper is to present a structured methodology that allows local authorities or private companies that deal with MSW to design their own MSW management plan depending on the available data. According to these data, this paper proposes two ways of action: a direct way when detailed data are available and an indirect way when data are lacking and it is necessary to rely on bibliographic data. In any case, the amount of information needed is considerable. This paper combines the planning methodology with Geographic Information Systems to present the final results in thematic maps that make them easier to interpret. The proposed methodology is a useful preliminary tool for organizing MSW collection routes, including selective collection. To verify the methodology it has

  3. A methodology for the validated design space exploration of fuel cell powered unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Moffitt, Blake Almy

    Unmanned Aerial Vehicles (UAVs) are the most dynamic growth sector of the aerospace industry today. The need to provide persistent intelligence, surveillance, and reconnaissance for military operations is driving the planned acquisition of over 5,000 UAVs over the next five years. The most pressing need is for quiet, small UAVs with endurance beyond what is capable with advanced batteries or small internal combustion propulsion systems. Fuel cell systems demonstrate high efficiency, high specific energy, low noise, low temperature operation, modularity, and rapid refuelability making them a promising enabler of the small, quiet, and persistent UAVs that military planners are seeking. Despite the perceived benefits, the actual near-term performance of fuel cell powered UAVs is unknown. Until the auto industry began spending billions of dollars in research, fuel cell systems were too heavy for useful flight applications. However, the last decade has seen rapid development with fuel cell gravimetric and volumetric power density nearly doubling every 2--3 years. As a result, a few design studies and demonstrator aircraft have appeared, but overall the design methodology and vehicles are still in their infancy. The design of fuel cell aircraft poses many challenges. Fuel cells differ fundamentally from combustion based propulsion in how they generate power and interact with other aircraft subsystems. As a result, traditional multidisciplinary analysis (MDA) codes are inappropriate. Building new MDAs is difficult since fuel cells are rapidly changing in design, and various competitive architectures exist for balance of plant, hydrogen storage, and all electric aircraft subsystems. In addition, fuel cell design and performance data is closely protected which makes validation difficult and uncertainty significant. Finally, low specific power and high volumes compared to traditional combustion based propulsion result in more highly constrained design spaces that are

  4. Methodology for the Design of Streamline-Traced External-Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    2014-01-01

    A design methodology based on streamline-tracing is discussed for the design of external-compression, supersonic inlets for flight below Mach 2.0. The methodology establishes a supersonic compression surface and capture cross-section by tracing streamlines through an axisymmetric Busemann flowfield. The compression system of shock and Mach waves is altered through modifications to the leading edge and shoulder of the compression surface. An external terminal shock is established to create subsonic flow which is diffused in the subsonic diffuser. The design methodology was implemented into the SUPIN inlet design tool. SUPIN uses specified design factors to design the inlets and computes the inlet performance, which includes the flow rates, total pressure recovery, and wave drag. A design study was conducted using SUPIN and the Wind-US computational fluid dynamics code to design and analyze the properties of two streamline-traced, external-compression (STEX) supersonic inlets for Mach 1.6 freestream conditions. The STEX inlets were compared to axisymmetric pitot, two-dimensional, and axisymmetric spike inlets. The STEX inlets had slightly lower total pressure recovery and higher levels of total pressure distortion than the axisymmetric spike inlet. The cowl wave drag coefficients of the STEX inlets were 20% of those for the axisymmetric spike inlet. The STEX inlets had external sound pressures that were 37% of those of the axisymmetric spike inlet, which may result in lower adverse sonic boom characteristics. The flexibility of the shape of the capture cross-section may result in benefits for the integration of STEX inlets with aircraft.
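
    Streamline tracing itself reduces to integrating dr/dx = u_r/u_x through the chosen axisymmetric flowfield. The sketch below marches that ODE with forward Euler through a placeholder velocity field; a real implementation would first obtain the Busemann flowfield, for instance by solving the Taylor-Maccoll equations, which is not reproduced here.

    ```python
    import numpy as np

    def velocity(x, r):
        """Placeholder axisymmetric flowfield (u_x, u_r); a Busemann solution
        would supply these velocity components in an actual design tool."""
        return 1.0, -0.15 * r / (1.0 + x)

    def trace_streamline(x0, r0, x_end, dx=1e-3):
        """March a streamline with forward Euler: dr/dx = u_r / u_x."""
        xs, rs = [x0], [r0]
        while xs[-1] < x_end:
            ux, ur = velocity(xs[-1], rs[-1])
            rs.append(rs[-1] + dx * ur / ux)
            xs.append(xs[-1] + dx)
        return np.array(xs), np.array(rs)

    # Trace one streamline bounding the capture cross-section, point by point
    xs, rs = trace_streamline(0.0, 1.0, 2.0)
    print(f"streamline drops from r = 1.000 to r = {rs[-1]:.3f}")
    ```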

  5. A knowledge management methodology for the integrated assessment of WWTP configurations during conceptual design.

    PubMed

    Garrido-Baserba, M; Reif, R; Rodriguez-Roda, I; Poch, M

    2012-01-01

    The complexity involved in wastewater management projects is growing as the 21st century sets new challenges leading towards more integrated plant design. In this context, the growing number of innovative technologies, stricter legislation and the development of new methodological approaches make it difficult to design appropriate flow schemes for new wastewater projects. Thus, new tools are needed for wastewater treatment plant (WWTP) conceptual design that use integrated assessment methods in order to include different types of objectives at the same time, i.e., environmental, economic, technical, and legal. Previous experience used the decision support system (DSS) methodology to handle specific issues related to wastewater management, for example, the design of treatment facilities for small communities. However, tools that address the whole treatment process independently of plant size and that are capable of integrating knowledge from many different areas, including both conventional and innovative technologies, have not been available. Therefore, the aim of this paper is to present and describe an innovative knowledge-based methodology that handles the conceptual design of WWTP process flow-diagrams (PFDs), satisfying a vast number of different criteria. This global approach is based on a hierarchy of decisions that uses the information contained in knowledge bases (KBs) with the aim of automating the generation of suitable WWTP configurations for a specific scenario. Expert interviews, legislation, specialized literature and engineering experience have been integrated within the different KBs, which indeed constitute one of the main highlights of this work. Therefore, the methodology is presented as a valuable tool which provides a customized PFD for each specific case, taking into account process unit interactions and the user-specified requirements and objectives. PMID:22678214
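
    The hierarchy-of-decisions-over-knowledge-bases idea can be miniaturized to a rule table: each rule maps scenario attributes to compatible process units, and the surviving candidates seed the flow diagram. The toy rules, attributes, and unit names below are invented for illustration and bear no relation to the actual KBs described in the paper.

    ```python
    # Toy decision hierarchy in the spirit of a knowledge-based WWTP design tool
    scenario = {"population": 8000, "land_available": "high",
                "effluent_limit": "standard", "budget": "low"}

    rules = [
        (lambda s: s["population"] < 10000 and s["land_available"] == "high"
         and s["budget"] == "low", "constructed wetland"),
        (lambda s: s["effluent_limit"] == "strict", "membrane bioreactor"),
        (lambda s: True, "conventional activated sludge"),  # default fallback
    ]

    # Every rule whose condition holds contributes a candidate process unit
    candidates = [unit for cond, unit in rules if cond(scenario)]
    print("compatible flow-diagram cores:", candidates)
    ```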

  7. Impact of User-Centered Design Methodology on the Design of Information Systems.

    ERIC Educational Resources Information Center

    Sugar, William A.

    1995-01-01

    Examines the implications of incorporating user-centered design within information systems design practices. Highlights include a definition of user-centered design based on human-computer interface; questions asked about users, including outcome, process, and task variables; and three criteria for when to use this approach in information systems…

  8. Robust model matching design methodology for a stochastic synthetic gene network.

    PubMed

    Chen, Bor-Sen; Chang, Chia-Hung; Wang, Yu-Chao; Wu, Chih-Hung; Lee, Hsiao-Ching

    2011-03-01

    Synthetic biology has shown its potential and promising applications in the last decade. However, many synthetic gene networks cannot work properly and maintain their desired behaviors due to intrinsic parameter variations and extrinsic disturbances. In this study, the intrinsic parameter uncertainties and external disturbances are modeled in a non-linear stochastic gene network to mimic the real environment in the host cell. Then a non-linear stochastic robust matching design methodology is introduced to withstand the intrinsic parameter fluctuations and to attenuate the extrinsic disturbances in order to achieve a desired reference matching purpose. To avoid solving the Hamilton-Jacobi inequality (HJI) in the non-linear stochastic robust matching design, a global linearization technique is used to simplify the design procedure by solving a set of linear matrix inequalities (LMIs). As a result, the proposed matching design methodology for the robust synthetic gene network can be carried out efficiently with the help of the LMI toolbox in MATLAB. Finally, two in silico design examples of the robust synthetic gene network are given to illustrate the design procedure and to confirm the robust model matching performance in achieving the desired behavior in spite of stochastic parameter fluctuations and environmental disturbances in the host cell. PMID:21215760
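
    The computational step the abstract refers to, replacing the HJI with a set of LMIs, rests on semidefinite feasibility problems of the kind below. The sketch checks a toy Lyapunov-type LMI (find P ≻ 0 with AᵀP + PA ≺ 0) using the cvxpy modeling package rather than MATLAB's LMI toolbox; the system matrix is invented and the paper's full set of matching LMIs is not reproduced.

    ```python
    import cvxpy as cp
    import numpy as np

    # Toy LMI feasibility: find symmetric P > 0 with A^T P + P A < 0
    A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # illustrative stable system matrix
    P = cp.Variable((2, 2), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(2),
                   A.T @ P + P @ A << -eps * np.eye(2)]

    prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
    prob.solve()
    print(prob.status, np.round(P.value, 3))
    ```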

  9. Prognostics and health management design for rotary machinery systems—Reviews, methodology and applications

    NASA Astrophysics Data System (ADS)

    Lee, Jay; Wu, Fangji; Zhao, Wenyu; Ghaffari, Masoud; Liao, Linxia; Siegel, David

    2014-01-01

    Much research has been conducted in prognostics and health management (PHM), an emerging field in mechanical engineering that is gaining interest from both academia and industry. Most of these efforts have been in the area of machinery PHM, resulting in the development of many algorithms for this particular application. The majority of these algorithms concentrate on applications involving common rotary machinery components, such as bearings and gears. Knowledge of this prior work is a necessity for any future research efforts to be conducted; however, there has not been a comprehensive overview that details previous and ongoing efforts in PHM. In addition, a systematic method for developing and deploying a PHM system has yet to be established. Such a method would enable rapid customization and integration of PHM systems for diverse applications. To address these gaps, this paper provides a comprehensive review of the PHM field, followed by an introduction of a systematic PHM design methodology, the 5S methodology, for converting data into prognostics information. This methodology includes procedures for identifying critical components, as well as tools for selecting the most appropriate algorithms for specific applications. Visualization tools are presented for displaying prognostics information in an appropriate fashion for quick and accurate decision making. Industrial case studies are included in this paper to show how this methodology can help in the design of an effective PHM system.

  10. Progress towards an Optimization Methodology for Combustion-Driven Portable Thermoelectric Power Generation Systems

    SciTech Connect

    Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.; Chase, Jordan R.; Fleurial, Jean-Pierre; Hendricks, Terry J.

    2012-03-13

    Enormous military and commercial interest exists in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. The design and development of a portable TE power system using a JP-8 combustor as a high-temperature heat source is explored; optimal process flows depend on efficient heat generation, transfer, and recovery within the system. Design optimization of the system required considering the combustion system efficiency and TE conversion efficiency simultaneously. The combustor performance and TE sub-system performance were coupled directly through exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, subsequent hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation of this system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed thermoelectric converter thermal/mechanical modeling. To this end, this work reports on design integration of system-level process flow simulations using the commercial software CHEMCAD™ with in-house thermoelectric converter and module optimization, and heat exchanger analyses using COMSOL™ software. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high-performance microtechnology combustion reactor based on recent advances at the Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation established a basis for optimal selection of fuel and air flow rates, thermoelectric module design and operating conditions, and microtechnology heat-exchanger design criteria. This paper will discuss this
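
    The quoted subsystem efficiency target can be sanity-checked against the standard ideal thermoelectric conversion formula, η = (1 − Tc/Th)·(√(1+ZT) − 1)/(√(1+ZT) + Tc/Th). The operating temperatures and ZT values below are assumptions chosen only to bracket the >10% claim, not figures from the paper.

    ```python
    import numpy as np

    def te_efficiency(T_h, T_c, ZT):
        """Ideal thermoelectric conversion efficiency for average figure of merit ZT."""
        carnot = 1 - T_c / T_h
        m = np.sqrt(1 + ZT)
        return carnot * (m - 1) / (m + T_c / T_h)

    # Assumed segmented-element operating window: 800 K hot side, 350 K cold side
    for ZT in (0.8, 1.2, 1.6):
        print(f"ZT = {ZT}: efficiency = {te_efficiency(800, 350, ZT):.1%}")
    ```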

  11. Applying Item Response Theory Methods to Design a Learning Progression-Based Science Assessment

    ERIC Educational Resources Information Center

    Chen, Jing

    2012-01-01

    Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1)…

  12. Design methodology for multi-pumped discrete Raman amplifiers: case-study employing photonic crystal fibers.

    PubMed

    Castellani, C E S; Cani, S P N; Segatto, M E; Pontes, M J; Romero, M A

    2009-08-01

    This paper proposes a new design methodology for discrete multi-pumped Raman amplifiers. In a multi-objective optimization scenario, the whole solution space is first inspected by a CW analytical formulation. Then, the most promising solutions are fully investigated by a rigorous numerical treatment, and the Raman amplification performance is thus determined by the combination of analytical and numerical approaches. As an application of our methodology we designed a photonic crystal fiber Raman amplifier configuration which provides low ripple, high gain, a clear eye opening and a low power penalty. The amplifier configuration also fully compensates the dispersion introduced by a 70-km single-mode fiber in a 10 Gbit/s system. We have successfully obtained a configuration with 8.5 dB average gain over the C-band and 0.71 dB ripple, with almost zero eye-penalty, using only two pump lasers with relatively low pump power.
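
    The analytical CW screening stage can be approximated with the undepleted-pump on-off gain formula, summing g_R·P·L_eff/A_eff over the pumps. The sketch below uses a triangular approximation of the silica Raman gain profile and invented PCF and pump parameters; the rigorous numerical stage (pump depletion, pump-pump interactions, noise) is deliberately omitted.

    ```python
    import numpy as np

    def raman_on_off_gain_dB(pumps_W, pump_freqs_THz, sig_freq_THz,
                             L_km=2.0, A_eff_um2=12.0, alpha_dB_km=0.9):
        """Undepleted-pump CW estimate of on-off Raman gain for a set of pumps,
        using a triangular approximation of the silica Raman gain profile."""
        g_peak = 6e-14 / (A_eff_um2 * 1e-12)            # 1/(W*m), peak near 13.2 THz
        alpha = alpha_dB_km * np.log(10) / 10 / 1e3     # fiber attenuation, 1/m
        L_eff = (1 - np.exp(-alpha * L_km * 1e3)) / alpha
        gain_np = 0.0
        for P, f in zip(pumps_W, pump_freqs_THz):
            shift = f - sig_freq_THz
            g = g_peak * max(0.0, 1 - abs(shift - 13.2) / 13.2)  # triangular profile
            gain_np += g * P * L_eff
        return 10 * np.log10(np.exp(gain_np))

    # Two assumed pumps near 1455/1465 nm and a C-band signal near 1550 nm
    print(f"{raman_on_off_gain_dB([0.15, 0.10], [206.0, 204.5], 193.4):.1f} dB on-off gain")
    ```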

  13. Designing and implementing INTREPID, an intensive program in translational research methodologies for new investigators.

    PubMed

    Plottel, Claudia S; Aphinyanaphongs, Yindalon; Shao, Yongzhao; Micoli, Keith J; Fang, Yixin; Goldberg, Judith D; Galeano, Claudia R; Stangel, Jessica H; Chavis-Keeling, Deborah; Hochman, Judith S; Cronstein, Bruce N; Pillinger, Michael H

    2014-12-01

    Senior housestaff and junior faculty are often expected to perform clinical research, yet may not always have the requisite knowledge and skills to do so successfully. Formal degree programs provide such knowledge, but require a significant commitment of time and money. Short-term training programs (days to weeks) provide alternative ways to accrue essential information and acquire fundamental methodological skills. Unfortunately, published information about short-term programs is sparse. To encourage discussion and exchange of ideas regarding such programs, we here share our experience developing and implementing INtensive Training in Research Statistics, Ethics, and Protocol Informatics and Design (INTREPID), a 24-day immersion training program in clinical research methodologies. Designing, planning, and offering INTREPID was feasible, and required significant faculty commitment, support personnel and infrastructure, as well as committed trainees.

  14. Spintronic logic design methodology based on spin Hall effect-driven magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Kang, Wang; Wang, Zhaohao; Zhang, Youguang; Klein, Jacques-Olivier; Lv, Weifeng; Zhao, Weisheng

    2016-02-01

    Conventional complementary metal-oxide-semiconductor (CMOS) technology is now approaching its physical scaling limits to enable Moore’s law to continue. Spintronic devices, as one of the potential alternatives, show great promise to replace CMOS technology for next-generation low-power integrated circuits in nanoscale technology nodes. Until now, spintronic memory has been successfully commercialized. However, spintronic logic still faces many critical challenges (e.g. direct cascading capability and small operation gain) before it can be practically applied. In this paper, we propose a standard complementary spintronic logic (CSL) design methodology to form a CMOS-like logic design paradigm. Using the spin Hall effect (SHE)-driven magnetic tunnel junction (MTJ) device as an example, we demonstrate CSL implementation, functionality and performance. This logic family provides a unified design methodology for spintronic logic circuits and partly solves the challenges of direct cascading capability and small operation gain in the previously proposed spintronic logic designs. By solving a modified Landau-Lifshitz-Gilbert equation, the magnetization dynamics in the free layer of the MTJ is theoretically described and a compact electrical model is developed. With this electrical model, numerical simulations have been performed to evaluate the functionality and performance of the proposed CSL design. Simulation results demonstrate that the proposed CSL design paradigm is rather promising for low-power logic computing.
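
    For reference, the magnetization dynamics solved in such compact models is typically written (up to sign and prefactor conventions, which vary between papers) as a Landau-Lifshitz-Gilbert equation augmented with a spin Hall torque term; the LaTeX form below is a standard textbook sketch, not necessarily the exact variant used by the authors:

        \frac{d\mathbf{m}}{dt} = -\gamma\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}}
          + \alpha\,\mathbf{m}\times\frac{d\mathbf{m}}{dt}
          - \gamma\,\frac{\hbar\,\theta_{\mathrm{SH}}\,J}{2\,e\,\mu_0 M_{\mathrm{s}}\,t_{\mathrm{FL}}}\,
            \mathbf{m}\times\left(\mathbf{m}\times\boldsymbol{\sigma}\right)

    Here m is the unit free-layer magnetization, H_eff the effective field, gamma the gyromagnetic ratio, alpha the Gilbert damping, theta_SH the spin Hall angle, J the charge current density in the heavy-metal strip, M_s and t_FL the free layer's saturation magnetization and thickness, and sigma the unit spin-polarization direction.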

  15. Methodology for CFD Design Analysis of National Launch System Nozzle Manifold

    NASA Technical Reports Server (NTRS)

    Haire, Scot L.

    1993-01-01

    The current design environment dictates that high technology CFD (Computational Fluid Dynamics) analysis produce quality results in a timely manner if it is to be integrated into the design process. The design methodology outlined describes the CFD analysis of an NLS (National Launch System) nozzle film cooling manifold. The objective of the analysis was to obtain a qualitative estimate for the flow distribution within the manifold. A complex, 3D, multiple zone, structured grid was generated from a 3D CAD file of the geometry. An Euler solution was computed with a fully implicit compressible flow solver. Post-processing consisted of full 3D color graphics and mass-averaged performance. The result was a qualitative CFD solution that provided the design team with relevant information concerning the flow distribution in and performance characteristics of the film cooling manifold within an effective time frame. Also, this design methodology was the foundation for a quick-turnaround CFD analysis of the next iteration in the manifold design.

  16. The Atomic Intrinsic Integration Approach: A Structured Methodology for the Design of Games for the Conceptual Understanding of Physics

    ERIC Educational Resources Information Center

    Echeverria, Alejandro; Barrios, Enrique; Nussbaum, Miguel; Amestica, Matias; Leclerc, Sandra

    2012-01-01

    Computer simulations combined with games have been successfully used to teach conceptual physics. However, there is no clear methodology for guiding the design of these types of games. To remedy this, we propose a structured methodology for the design of conceptual physics games that explicitly integrates the principles of the intrinsic…

  17. Using CFD as Rocket Injector Design Tool: Recent Progress at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Tucker, Kevin; West, Jeff; Williams, Robert; Lin, Jeff; Rocker, Marvin; Canabal, Francisco; Robles, Bryan; Garcia, Robert; Chenoweth, James

    2003-01-01

    The choice of tools used for injector design is in a transitional phase between exclusive reliance on the empirically based correlations and extensive use of computational fluid dynamics (CFD). The Next Generation Launch Technology (NGLT) Program goals emphasizing lower costs and increased reliability have produced a need to enable CFD as an injector design tool in a shorter time frame. This is the primary objective of the Staged Combustor Injector Technology Task currently under way at Marshall Space Flight Center (MSFC). The documentation of this effort begins with a very brief status of current injector design tools. MSFC's vision for use of CFD as a tool for combustion devices design is stated and discussed with emphasis on the injector. The concept of the Simulation Readiness Level (SRL), comprising solution fidelity, robustness and accuracy, is introduced and discussed. This quantitative measurement is used to establish the gap between the current state of demonstrated capability and that necessary for regular use in the design process. MSFC's view of the validation process is presented and issues associated with obtaining the necessary data are noted and discussed. Three current experimental efforts aimed at generating validation data are presented. The importance of uncertainty analysis to understand the data quality is also demonstrated. First, a brief status of current injector design tools is provided as context for the current effort. Next, the MSFC vision for using CFD as an injector design tool is stated. A generic CFD-based injector design methodology is also outlined and briefly discussed. Three areas where MSFC is using injector CFD analyses for program support will be discussed. These include the Integrated Powerhead Development (IPD) engine, which uses hydrogen and oxygen propellants in a full flow staged combustion (FFSC) cycle, and the TR-107 and RS84 engines, both of which use RP-1 and oxygen in an ORSC cycle. Finally, an attempt is made to

  18. Joint application of AI techniques, PRA and disturbance analysis methodology to problems in the maintenance and design of nuclear power plants

    SciTech Connect

    Okrent, D.

    1989-03-01

    This final report summarizes the accomplishments of a two-year research project entitled "Joint Application of Artificial Intelligence Techniques, Probabilistic Risk Analysis, and Disturbance Analysis Methodology to Problems in the Maintenance and Design of Nuclear Power Plants." The objective of this project is to develop and apply appropriate combinations of techniques from artificial intelligence (AI), reliability and risk analysis, and disturbance analysis to well-defined programmatic problems of nuclear power plants. Reactor operations issues were added to those of design and maintenance as the project progressed.

  19. Joint application of AI techniques, PRA and disturbance analysis methodology to problems in the maintenance and design of nuclear power plants. Final report

    SciTech Connect

    Okrent, D.

    1989-03-01

    This final report summarizes the accomplishments of a two-year research project entitled "Joint Application of Artificial Intelligence Techniques, Probabilistic Risk Analysis, and Disturbance Analysis Methodology to Problems in the Maintenance and Design of Nuclear Power Plants." The objective of this project is to develop and apply appropriate combinations of techniques from artificial intelligence (AI), reliability and risk analysis, and disturbance analysis to well-defined programmatic problems of nuclear power plants. Reactor operations issues were added to those of design and maintenance as the project progressed.

  20. Progression in Learning about "The Nature of Science": Issues of Conceptualisation and Methodology.

    ERIC Educational Resources Information Center

    Leach, John; And Others

    Recently, it was proposed that a curricular aim of science education should be to engender an understanding of the nature of the scientific enterprise among students, as well as a knowledge of the technical contents of science. Seven diagnostic instruments were designed and administered to students (between the ages of 9 and 16) in an effort to…

  1. Progress Towards an Optimization Methodology for Combustion-Driven Portable Thermoelectric Power Generation Systems

    NASA Astrophysics Data System (ADS)

    Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.; Chase, Jordan R.; Fleurial, Jean-Pierre; Hendricks, Terry J.

    2012-06-01

    There is enormous military and commercial interest in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. In the design and development of this portable TE power system using a JP-8 combustor as a high-temperature heat source, optimal process flows depend on efficient heat generation, transfer, and recovery within the system. The combustor performance and TE subsystem performance were coupled directly through combustor exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, subsequent hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation and design optimization of this TE power system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed TE converter thermal/mechanical modeling. To this end, this paper reports integration of system-level process flow simulations using CHEMCAD™ commercial software with in-house TE converter and module optimization, and heat exchanger analyses using COMSOL™ software. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high-performance microtechnology combustion reactor based on recent advances at Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation approach lead directly to system efficiency-power maps defining potentially available optimal system operating conditions and regimes. Further, it is shown that, for a given fuel flow rate, there exists a combination of recuperative effectiveness and hot-side heat exchanger effectiveness that provides a higher specific power output from
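
    For orientation, the ceiling on TE subsystem performance is often estimated with the textbook expression for the maximum conversion efficiency of a couple operating between hot-side and cold-side temperatures Th and Tc with average figure of merit ZT. The sketch below evaluates that expression for assumed temperatures; it stands in for, and is far simpler than, the coupled CHEMCAD/COMSOL model described above:

        import math

        def te_efficiency(t_hot_k, t_cold_k, zt_avg):
            # Textbook maximum TE conversion efficiency for average figure of merit ZT:
            # eta = (1 - Tc/Th) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th)
            carnot = 1.0 - t_cold_k / t_hot_k
            root = math.sqrt(1.0 + zt_avg)
            return carnot * (root - 1.0) / (root + t_cold_k / t_hot_k)

        # Assumed hot/cold-side temperatures for a combustion-driven system (kelvin).
        t_hot, t_cold = 900.0, 350.0
        for zt in (1.0, 1.5, 2.0):
            print(f"ZT = {zt:.1f}: efficiency = {te_efficiency(t_hot, t_cold, zt):.1%}")

    With these assumed temperatures, an average ZT somewhat above 1 already clears the 10% subsystem figure quoted above, which is consistent with the role of segmented, high-temperature TE elements.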

  2. Biomarker-Guided Adaptive Trial Designs in Phase II and Phase III: A Methodological Review

    PubMed Central

    Antoniou, Miranta; Jorgensen, Andrea L; Kolamunnage-Dona, Ruwanthi

    2016-01-01

    Background Personalized medicine is a growing area of research which aims to tailor the treatment given to a patient according to one or more personal characteristics. These characteristics can be demographic such as age or gender, or biological such as a genetic or other biomarker. Prior to utilizing a patient’s biomarker information in clinical practice, robust testing in terms of analytical validity, clinical validity and clinical utility is necessary. A number of clinical trial designs have been proposed for testing a biomarker’s clinical utility, including Phase II and Phase III clinical trials which aim to test the effectiveness of a biomarker-guided approach to treatment; these designs can be broadly classified into adaptive and non-adaptive. While adaptive designs allow planned modifications based on accumulating information during a trial, non-adaptive designs are typically simpler but less flexible. Methods and Findings We have undertaken a comprehensive review of biomarker-guided adaptive trial designs proposed in the past decade. We have identified eight distinct biomarker-guided adaptive designs and nine variations from 107 studies. Substantial variability has been observed in terms of how trial designs are described and particularly in the terminology used by different authors. We have graphically displayed the current biomarker-guided adaptive trial designs and summarised the characteristics of each design. Conclusions Our in-depth overview provides future researchers with clarity in definition, methodology and terminology for biomarker-guided adaptive trial designs. PMID:26910238

  3. Methodology to design a municipal solid waste generation and composition map: A case study

    SciTech Connect

    Gallardo, A.; Carlos, M.; Peris, M.; Colomer, F.J.

    2014-11-15

    Highlights: • To draw a waste generation and composition map of a town, many factors must be taken into account. • The proposed methodology offers two different ways of action depending on the available data, combined with geographical information systems. • The methodology has been applied to a Spanish city with success. • The methodology will be a useful tool to organize municipal solid waste management. - Abstract: Municipal solid waste (MSW) management is an important task that local governments as well as private companies must take into account to protect human health and the environment and to preserve natural resources. To design an adequate MSW management plan, the first step consists of defining the waste generation and composition patterns of the town. As these patterns depend on several socio-economic factors, it is advisable to organize them beforehand. Moreover, the waste generation and composition patterns may vary around the town and over time. Generally, the data are not homogeneous across the city, as neither the number of inhabitants nor the economic activity is constant. Therefore, if all the information is shown in thematic maps, the final waste management decisions can be made more efficiently. The main aim of this paper is to present a structured methodology that allows local authorities or private companies who deal with MSW to design their own MSW management plan depending on the available data. According to these data, this paper proposes two ways of action: a direct way when detailed data are available and an indirect way when there is a lack of data and it is necessary to rely on bibliographic data. In any case, the amount of information needed is considerable. This paper combines the planning methodology with Geographic Information Systems to present the final results in thematic maps that make them easier to interpret. The proposed methodology is a useful preliminary tool to organize the MSW collection routes including the
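
    At its core, the "direct way" of such a methodology reduces to computing a generation figure per zone and joining it to zone geometry in a GIS. A minimal sketch of that bookkeeping, with invented zone data unrelated to the paper's Spanish case study:

        # Per-zone MSW generation estimate: kg/day = inhabitants * per-capita rate.
        # Zone names, populations, and rates are invented for illustration.
        zones = {
            "historic_center": {"population": 12_000, "kg_per_capita_day": 1.3},
            "residential_north": {"population": 45_000, "kg_per_capita_day": 1.1},
            "industrial_east": {"population": 3_000, "kg_per_capita_day": 2.4},
        }

        for name, z in zones.items():
            generation = z["population"] * z["kg_per_capita_day"]   # kg/day
            z["generation_kg_day"] = generation
            print(f"{name}: {generation:,.0f} kg/day")

        # In a GIS workflow these totals would be joined to zone polygons
        # (e.g., via a shapefile attribute table) to render the thematic map.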

  4. Application of an integrated flight/propulsion control design methodology to a STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Mattern, Duane L.

    1991-01-01

    Results are presented from the application of an emerging Integrated Flight/Propulsion Control (IFPC) design methodology to a Short Take Off and Vertical Landing (STOVL) aircraft in transition flight. The steps in the methodology consist of designing command shaping prefilters to provide the overall desired response to pilot command inputs. A previously designed centralized controller is first validated for the integrated airframe/engine plant used. This integrated plant is derived from a different model of the engine subsystem than the one used for the centralized controller design. The centralized controller is then partitioned in a decentralized, hierarchical structure comprising airframe lateral and longitudinal subcontrollers and an engine subcontroller. Command shaping prefilters from the pilot control effector inputs are then designed and time histories of the closed loop IFPC system response to simulated pilot commands are compared to desired responses based on handling qualities requirements. Finally, the propulsion system safety and nonlinear limit protection logic is wrapped around the engine subcontroller and the response of the closed loop integrated system is evaluated for transients that encounter the propulsion surge margin limit.

  5. A methodology for the validated design space exploration of fuel cell powered unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Moffitt, Blake Almy

    Unmanned Aerial Vehicles (UAVs) are the most dynamic growth sector of the aerospace industry today. The need to provide persistent intelligence, surveillance, and reconnaissance for military operations is driving the planned acquisition of over 5,000 UAVs over the next five years. The most pressing need is for quiet, small UAVs with endurance beyond what is achievable with advanced batteries or small internal combustion propulsion systems. Fuel cell systems demonstrate high efficiency, high specific energy, low noise, low temperature operation, modularity, and rapid refuelability, making them a promising enabler of the small, quiet, and persistent UAVs that military planners are seeking. Despite the perceived benefits, the actual near-term performance of fuel cell powered UAVs is unknown. Until the auto industry began spending billions of dollars in research, fuel cell systems were too heavy for useful flight applications. However, the last decade has seen rapid development, with fuel cell gravimetric and volumetric power density nearly doubling every 2-3 years. As a result, a few design studies and demonstrator aircraft have appeared, but overall the design methodology and vehicles are still in their infancy. The design of fuel cell aircraft poses many challenges. Fuel cells differ fundamentally from combustion-based propulsion in how they generate power and interact with other aircraft subsystems. As a result, traditional multidisciplinary analysis (MDA) codes are inappropriate. Building new MDAs is difficult since fuel cells are rapidly changing in design, and various competitive architectures exist for balance of plant, hydrogen storage, and all-electric aircraft subsystems. In addition, fuel cell design and performance data are closely protected, which makes validation difficult and uncertainty significant. Finally, low specific power and high volumes compared to traditional combustion-based propulsion result in more highly constrained design spaces that are

  6. In Vitro Developmental Toxicology Screens: A Report on the Progress of the Methodology and Future Applications.

    PubMed

    Zhang, Cindy; Ball, Jonathan; Panzica-Kelly, Julie; Augustine-Rauch, Karen

    2016-04-18

    There has been increasing focus on generation and assessment of in vitro developmental toxicology models for assessing teratogenic liability of chemicals. The driver for this focus has been to find reliable in vitro assays that will reduce or replace the use of in vivo tests for assessing teratogenicity. Such efforts may be eventually applied in testing pharmaceutical agents where a developmental toxicology assay or battery of assays may be incorporated into regulatory testing to replace one of the two species currently used in teratogenic assessment. Such assays may be eventually applied in testing a broader spectrum of chemicals, supporting efforts aligned with Tox21 strategies and responding to REACH legislation. This review describes the developmental toxicology assays that are of focus in these assessments: rodent whole embryo culture, zebrafish embryo assays, and embryonic stem cell assays. Progress on assay development as well as future directions of how these assays are envisioned to be applied for broader safety testing of chemicals are discussed. Altogether, the developmental model systems described in this review provide rich biological systems that can be utilized in better understanding teratogenic mechanisms of action of chemotypes and are promising in providing proactive safety assessment related to developmental toxicity. Continual advancements in refining/optimizing these in vitro assays are anticipated to provide a robust data set to provide thoughtful assessment of how whole animal teratogenicity evaluations can be reduced/refined in the future. PMID:26766213

  8. Methodology for worker neutron exposure evaluation in the PDCF facility design.

    PubMed

    Scherpelz, R I; Traub, R J; Pryor, K H

    2004-01-01

    A project headed by Washington Group International is meant to design the Pit Disassembly and Conversion Facility (PDCF) to convert the plutonium pits from excessed nuclear weapons into plutonium oxide for ultimate disposition. Battelle staff are performing the shielding calculations that will determine appropriate shielding so that the facility workers will not exceed target exposure levels. The target exposure levels for workers in the facility are 5 mSv y(-1) for the whole body and 100 mSv y(-1) for the extremity, which presents a significant challenge to the designers of a facility that will process tons of radioactive material. The design effort depended on shielding calculations to determine appropriate thickness and composition for glove box walls, and concrete wall thicknesses for storage vaults. Pacific Northwest National Laboratory (PNNL) staff used ORIGEN-S and SOURCES to generate gamma and neutron source terms, and Monte Carlo (computer code for) neutron photon (transport) (MCNP-4C) to calculate the radiation transport in the facility. The shielding calculations were performed by a team of four scientists, so it was necessary to develop a consistent methodology. There was also a requirement for the study to be cost-effective, so efficient methods of evaluation were required. The calculations were subject to rigorous scrutiny by internal and external reviewers, so acceptability was a major feature of the methodology. Some of the issues addressed in the development of the methodology included selecting appropriate dose factors, developing a method for handling extremity doses, adopting an efficient method for evaluating effective dose equivalent in a non-uniform radiation field, modelling the reinforcing steel in concrete, and modularising the geometry descriptions for efficiency. The relative importance of the neutron dose equivalent compared with the gamma dose equivalent varied substantially depending on the specific shielding conditions and lessons

  9. METHODOLOGY FOR WORKER NEUTRON EXPOSURE EVALUATION IN THE PDCF FACILITY DESIGN

    SciTech Connect

    Scherpelz, Robert I.; Traub, Richard J.; Pryor, Kathryn H.

    2004-08-01

    A project headed by Washington Group International is meant to design the Pit Disassembly and Conversion Facility (PDCF) to convert the plutonium pits from excessed nuclear weapons into plutonium oxide for ultimate disposition. Battelle staff are performing the shielding calculations that will determine appropriate shielding so that the facility workers will not exceed target exposure levels. The target exposure levels for workers in the facility are 5 mSv y(-1) for the whole body and 100 mSv y(-1) for the extremity, which presents a significant challenge to the designers of a facility that will process tons of radioactive material. The design effort depended on shielding calculations to determine appropriate thickness and composition for glove box walls, and concrete wall thicknesses for storage vaults. Pacific Northwest National Laboratory (PNNL) staff used ORIGEN-S and SOURCES to generate gamma and neutron source terms, and Monte Carlo (computer code for) neutron photon (transport) (MCNP-4C) to calculate the radiation transport in the facility. The shielding calculations were performed by a team of four scientists, so it was necessary to develop a consistent methodology. There was also a requirement for the study to be cost-effective, so efficient methods of evaluation were required. The calculations were subject to rigorous scrutiny by internal and external reviewers, so acceptability was a major feature of the methodology. Some of the issues addressed in the development of the methodology included selecting appropriate dose factors, developing a method for handling extremity doses, adopting an efficient method for evaluating effective dose equivalent in a non-uniform radiation field, modeling the reinforcing steel in concrete, and modularizing the geometry descriptions for efficiency. The relative importance of the neutron dose equivalent compared with the gamma dose equivalent varied substantially depending on the specific shielding conditions and lessons were
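
    Although the production calculations used MCNP-4C, the role a shielding calculation plays can be conveyed with the elementary narrow-beam attenuation model, summing neutron and gamma dose-equivalent contributions behind a slab. The values below are invented and omit the buildup and spectral detail of the real analysis:

        import math

        def attenuated_dose(d0_msv_h, mu_per_cm, thickness_cm):
            # Narrow-beam exponential attenuation: D = D0 * exp(-mu * x).
            return d0_msv_h * math.exp(-mu_per_cm * thickness_cm)

        # Invented unshielded dose rates and effective attenuation coefficients
        # for a 30 cm concrete wall; gamma attenuates much faster than neutrons.
        gamma = attenuated_dose(d0_msv_h=2.0, mu_per_cm=0.50, thickness_cm=30)
        neutron = attenuated_dose(d0_msv_h=1.5, mu_per_cm=0.09, thickness_cm=30)

        total = gamma + neutron
        print(f"gamma {gamma:.2e}, neutron {neutron:.2e}, total {total:.2e} mSv/h")
        print(f"neutron fraction of total: {neutron / total:.1%}")

    Even this toy model reproduces the qualitative point in the abstract: behind thick concrete the neutron component can dominate the total dose equivalent, while behind thin shields the gamma component may.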

  11. Spacecraft Design-for-Demise implementation strategy & decision-making methodology for low earth orbit missions

    NASA Astrophysics Data System (ADS)

    Waswa, Peter M. B.; Elliot, Michael; Hoffman, Jeffrey A.

    2013-05-01

    Space missions designed to ablate completely upon an uncontrolled reentry into the Earth's atmosphere are likely to be simpler and cheaper than those designed to execute a controlled reentry, because the mission risk (unavailability) stemming from controlled-reentry subsystem failure(s) is essentially eliminated. NASA has not customarily implemented Design-for-Demise meticulously; it has rather approached Design-for-Demise in an ad hoc manner that fails to entrench it as a mission design driver. As a result, missions that aspire to be demisable face enormous demisability challenges at later formulation stages due to these perpetuated oversights. The investigators hence propose a strategy for consistent integration of Design-for-Demise practices in all phases of a space mission lifecycle. Secondly, an all-inclusive risk-informed decision-making methodology referred to as the Analytic Deliberative Process is proposed. This criterion facilitates the choice between a demisable uncontrolled reentry and a controlled reentry. The authors finally conceive and synthesize the Objectives Hierarchy, Attributes, and Quantitative Performance Measures of the Analytic Deliberative Process for a Design-for-Demise risk-informed decision-making process.
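
    Risk-informed frameworks of this kind typically roll the Quantitative Performance Measures up through the Objectives Hierarchy as a weighted score per alternative. A hedged sketch of that aggregation step, with invented weights and scores that are not taken from the paper:

        # Weighted-sum aggregation over an objectives hierarchy for two alternatives.
        # Attribute names, weights, and normalized scores are illustrative only.
        weights = {"mission_risk": 0.4, "cost": 0.3, "ground_casualty_risk": 0.3}

        alternatives = {
            "design_for_demise": {"mission_risk": 0.9, "cost": 0.8, "ground_casualty_risk": 0.7},
            "controlled_reentry": {"mission_risk": 0.6, "cost": 0.5, "ground_casualty_risk": 0.9},
        }

        for name, scores in alternatives.items():
            total = sum(weights[a] * scores[a] for a in weights)
            print(f"{name}: weighted score = {total:.2f}")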

  12. A methodology towards virtualisation-based high performance simulation platform supporting multidisciplinary design of complex products

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin

    2012-08-01

    Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise an HPS platform. This research is driven by issues concerning large-scale simulation resource deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. Then the article investigates and discusses key approaches in VSIM, including simulation resource modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction, and (3) achieve fault-tolerant simulation.

  13. Device Thrombogenicity Emulator (DTE) – Design Optimization Methodology for Cardiovascular Devices: A Study in Two Bileaflet MHV Designs

    PubMed Central

    Xenos, Michalis; Girdhar, Gaurav; Alemu, Yared; Jesty, Jolyon; Slepian, Marvin; Einav, Shmuel; Bluestein, Danny

    2010-01-01

    Patients who receive prosthetic heart valve (PHV) implants require mandatory anticoagulation medication after implantation due to the thrombogenic potential of the valve. Optimization of PHV designs may facilitate reduction of flow-induced thrombogenicity and reduce or eliminate the need for post-implant anticoagulants. We present a methodology entitled Device Thrombogenicity Emulator (DTE) for optimizing the thrombo-resistance performance of PHVs by combining numerical and experimental approaches. Two bileaflet mechanical heart valve (MHV) designs, St. Jude Medical (SJM) and ATS, were investigated by studying the effect of distinct flow phases on platelet activation. Transient turbulent and direct numerical simulations (DNS) were conducted, and stress loading histories experienced by the platelets were calculated along flow trajectories. The numerical simulations indicated distinct design-dependent differences between the two valves. The stress-loading waveforms extracted from the numerical simulations were programmed into a hemodynamic shearing device (HSD), emulating the flow conditions past the valves in distinct ‘hot spot’ flow regions that are implicated in MHV thrombogenicity. The resultant platelet activity was measured with a modified prothrombinase assay, and was found to be significantly higher in the SJM valve, mostly during the regurgitation phase. The experimental results were in excellent agreement with the calculated platelet activation potential. This establishes the utility of the DTE methodology for serving as a test bed for evaluating design modifications for achieving better thrombogenic performance for such devices. PMID:20483411
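
    A common reduced-order ingredient in such analyses is a stress-accumulation measure along each platelet trajectory, in its simplest linear form SA = sum(tau * dt). The sketch below computes it for a made-up stress waveform; the DTE work itself uses more elaborate activation models:

        import numpy as np

        def stress_accumulation(tau_pa, dt_s):
            # Linear stress accumulation SA = sum(tau * dt) along one trajectory.
            return float(np.sum(tau_pa * dt_s))

        # Invented scalar-stress waveform sampled along one trajectory (Pa), 1 ms steps,
        # with a spike standing in for a high-stress event such as valve closure.
        t = np.arange(0.0, 0.4, 0.001)
        tau = 5.0 + 45.0 * np.exp(-((t - 0.05) / 0.01) ** 2)
        sa = stress_accumulation(tau, dt_s=0.001)
        print(f"stress accumulation over {t[-1]:.3f} s window: {sa:.2f} Pa*s")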

  14. Low-Radiation Cellular Inductive Powering of Rodent Wireless Brain Interfaces: Methodology and Design Guide.

    PubMed

    Soltani, Nima; Aliroteh, Miaad S; Salam, M Tariqus; Perez Velazquez, Jose Luis; Genov, Roman

    2016-08-01

    This paper presents a general methodology of inductive power delivery in wireless chronic rodent electrophysiology applications. The focus is on such systems design considerations under the following key constraints: maximum power delivery under the allowable specific absorption rate (SAR), low cost and spatial scalability. The methodology includes inductive coil design considerations within a low-frequency ferrite-core-free power transfer link which includes a scalable coil-array power transmitter floor and a single-coil implanted or worn power receiver. A specific design example is presented that includes the concept of low-SAR cellular single-transmitter-coil powering through dynamic tracking of a magnet-less receiver spatial location. The transmitter coil instantaneous supply current is monitored using a small number of low-cost electronic components. A drop in its value indicates the proximity of the receiver due to the reflected impedance of the latter. Only the transmitter coil nearest to the receiver is activated. Operating at the low frequency of 1.5 MHz, the inductive powering floor delivers a maximum of 15.9 W below the IEEE C95 SAR limit, which is over three times greater than that in other recently reported designs. The power transfer efficiency of 39% and 13% at the nominal and maximum distances of 8 cm and 11 cm, respectively, is maintained. PMID:26960227
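
    The efficiency figures quoted are bounded by the standard two-coil link analysis, in which the best achievable link efficiency depends only on the coupling coefficient k and the coil quality factors: eta_max = k^2*Q1*Q2 / (1 + sqrt(1 + k^2*Q1*Q2))^2. A sketch with assumed coil parameters, not the paper's measured values:

        import math

        def max_link_efficiency(k, q1, q2):
            # Optimal-load efficiency of a resonant two-coil inductive link.
            x = k * k * q1 * q2
            return x / (1.0 + math.sqrt(1.0 + x)) ** 2

        # Assumed quality factors and coupling-vs-distance values for illustration.
        q_tx, q_rx = 180.0, 90.0
        for dist_cm, k in [(8, 0.020), (11, 0.012)]:
            eff = max_link_efficiency(k, q_tx, q_rx)
            print(f"{dist_cm} cm (k = {k}): max link efficiency = {eff:.0%}")

    End-to-end efficiency is lower once driver and rectifier losses are included, which is consistent with the 39% and 13% system figures reported above.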

  15. Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors.

    PubMed

    Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter

    2016-08-24

    Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least square approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. It achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability, and a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design.
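
    The decoupling step can be approximated, in its simplest form, by an ordinary linear least-squares map from the three field readings to the three force components; the paper's moving-least-squares scheme generalizes this by re-weighting the fit around each query point. A minimal linear sketch on synthetic data:

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic calibration set: applied tri-axis forces and coupled field readings.
        F_true = rng.uniform(-1.0, 1.0, size=(200, 3))            # forces (N)
        mixing = np.array([[0.9, 0.2, 0.1],                        # unknown sensor coupling
                           [0.1, 1.1, 0.2],
                           [0.0, 0.3, 0.8]])
        B = F_true @ mixing.T + 0.01 * rng.normal(size=(200, 3))   # field readings + noise

        # Least-squares estimate of the inverse map B -> F.
        A, *_ = np.linalg.lstsq(B, F_true, rcond=None)

        F_est = B @ A
        rmse = np.sqrt(np.mean((F_est - F_true) ** 2))
        print(f"calibration RMSE: {rmse:.4f} N")   # small residual -> decoupling works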

  18. Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors

    PubMed Central

    Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter

    2016-01-01

    Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least square approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. It achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability, and a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design. PMID:27563908

  19. Study design, methodology and statistical analyses in the clinical development of sparfloxacin.

    PubMed

    Genevois, E; Lelouer, V; Vercken, J B; Caillon, R

    1996-05-01

    Many publications in the past 10 years have emphasised the difficulties of evaluating anti-infective drugs and the need for well-designed clinical trials in this therapeutic field. The clinical development of sparfloxacin in Europe, involving more than 4000 patients in ten countries, provided the opportunity to implement a methodology for evaluation and statistical analyses which would take into account actual requirements and past insufficiencies. This methodology focused on a rigorous and accurate patient classification for evaluability, subgroups of particular interest, efficacy assessment based on automation (algorithm) and individual case review by expert panel committees. In addition, the statistical analyses did not use significance testing but rather confidence intervals to determine whether sparfloxacin was therapeutically equivalent to the reference comparator antibacterial agents. PMID:8737126
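
    Equivalence assessment by confidence interval rather than significance testing typically means computing a two-sided 95% CI for the difference in cure rates and checking it against a pre-specified margin. A generic sketch with invented figures, not trial data:

        import math

        def diff_proportions_ci(x1, n1, x2, n2, z=1.96):
            # Wald 95% CI for the difference of two independent proportions.
            p1, p2 = x1 / n1, x2 / n2
            se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
            d = p1 - p2
            return d - z * se, d + z * se

        # Invented cure counts: test drug vs reference comparator.
        lo, hi = diff_proportions_ci(x1=362, n1=400, x2=355, n2=400)
        print(f"difference in cure rates: 95% CI [{lo:+.3f}, {hi:+.3f}]")

        margin = 0.10   # pre-specified equivalence margin (assumed)
        print("therapeutic equivalence" if lo > -margin and hi < margin else "inconclusive")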

  1. Design methodology: edgeless 3D ASICs with complex in-pixel processing for pixel detectors

    SciTech Connect

    Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.; Mohseni, Hooman

    2015-08-28

    The design methodology for the development of 3D integrated edgeless pixel detectors with in-pixel processing using Electronic Design Automation (EDA) tools is presented. A large-area, three-tier 3D detector, with one sensor layer and two ASIC layers containing one analog and one digital tier, is built for x-ray photon time-of-arrival measurement and imaging. A full custom analog pixel is 65μm x 65μm. It is connected to a sensor pixel of the same size on one side, and on the other side it has approximately 40 connections to the digital pixel. A 32 x 32 edgeless array without any peripheral functional blocks constitutes a sub-chip. The sub-chip is an indivisible unit, which is further arranged in a 6 x 6 array to create the entire 1.248cm x 1.248cm ASIC. Each chip has 720 bump-bond I/O connections on the back of the digital tier to the ceramic PCB. All the analog tier power and biasing is conveyed through the digital tier from the PCB. The assembly has no peripheral functional blocks, and hence the active area extends to the edge of the detector. This was achieved by using a few flavors of almost identical analog pixels (minimal variation in layout) to allow for peripheral biasing blocks to be placed within pixels. The 1024 pixels within a digital sub-chip array have a variety of full custom, semi-custom and automated timing-driven functional blocks placed together. The methodology uses a modified mixed-mode on-top digital implementation flow to not only harness the tool efficiency for timing and floor-planning but also to maintain designer control over compact parasitically aware layout. The methodology uses the Cadence design platform; however, it is not limited to this tool.

  2. Methodology to design a municipal solid waste generation and composition map: a case study.

    PubMed

    Gallardo, A; Carlos, M; Peris, M; Colomer, F J

    2015-02-01

    Municipal solid waste (MSW) management is an important task that local governments as well as private companies must take into account to protect human health and the environment and to preserve natural resources. To design an adequate MSW management plan, the first step consists of defining the waste generation and composition patterns of the town. As these patterns depend on several socio-economic factors, it is advisable to organize them beforehand. Moreover, the waste generation and composition patterns may vary around the town and over time. Generally, the data are not homogeneous across the city, as neither the number of inhabitants nor the economic activity is constant. Therefore, if all the information is shown in thematic maps, the final waste management decisions can be made more efficiently. The main aim of this paper is to present a structured methodology that allows local authorities or private companies who deal with MSW to design their own MSW management plan depending on the available data. According to these data, this paper proposes two ways of action: a direct way when detailed data are available and an indirect way when there is a lack of data and it is necessary to rely on bibliographic data. In any case, the amount of information needed is considerable. This paper combines the planning methodology with Geographic Information Systems to present the final results in thematic maps that make them easier to interpret. The proposed methodology is a useful preliminary tool to organize MSW collection routes, including selective collection. To verify the methodology, it has been successfully applied to a Spanish town. PMID:25443095

  3. Methodology to design a municipal solid waste generation and composition map: a case study.

    PubMed

    Gallardo, A; Carlos, M; Peris, M; Colomer, F J

    2014-11-01

    Municipal solid waste (MSW) management is an important task that local governments as well as private companies must take into account to protect human health and the environment and to preserve natural resources. To design an adequate MSW management plan, the first step consists of defining the waste generation and composition patterns of the town. As these patterns depend on several socio-economic factors, it is advisable to organize them beforehand. Moreover, the waste generation and composition patterns may vary around the town and over time. Generally, the data are not homogeneous across the city, as neither the number of inhabitants nor the economic activity is constant. Therefore, if all the information is shown in thematic maps, the final waste management decisions can be made more efficiently. The main aim of this paper is to present a structured methodology that allows local authorities or private companies who deal with MSW to design their own MSW management plan depending on the available data. According to these data, this paper proposes two ways of action: a direct way when detailed data are available and an indirect way when there is a lack of data and it is necessary to rely on bibliographic data. In any case, the amount of information needed is considerable. This paper combines the planning methodology with Geographic Information Systems to present the final results in thematic maps that make them easier to interpret. The proposed methodology is a useful preliminary tool to organize MSW collection routes, including selective collection. To verify the methodology, it has been successfully applied to a Spanish town. PMID:25008298

  4. Multirate Flutter Suppression System Design for the Benchmark Active Controls Technology Wing. Part 2; Methodology Application Software Toolbox

    NASA Technical Reports Server (NTRS)

    Mason, Gregory S.; Berg, Martin C.; Mukhopadhyay, Vivek

    2002-01-01

    To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies were applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing. This report describes the user's manual and software toolbox developed at the University of Washington to design a multirate flutter suppression control law for the BACT wing.

  5. Preliminary Design and Evaluation of Portable Electronic Flight Progress Strips

    NASA Technical Reports Server (NTRS)

    Doble, Nathan A.; Hansman, R. John

    2002-01-01

    There has been growing interest in using electronic alternatives to the paper Flight Progress Strip (FPS) for air traffic control. However, most research has been centered on radar-based control environments, and has not considered the unique operational needs of the airport air traffic control tower. Based on an analysis of the human factors issues for control tower Decision Support Tool (DST) interfaces, a requirement has been identified for an interaction mechanism which replicates the advantages of the paper FPS (e.g., head-up operation, portability) but also enables input and output with DSTs. An approach has been developed which uses a Portable Electronic FPS that has attributes of both a paper strip and an electronic strip. The prototype flight strip system uses Personal Digital Assistants (PDAs) to replace individual paper strips in addition to a central management interface which is displayed on a desktop computer. Each PDA is connected to the management interface via a wireless local area network. The Portable Electronic FPSs replicate the core functionality of paper flight strips and have additional features which provide a heads-up interface to a DST. A departure DST is used as a motivating example. The central management interface is used for aircraft scheduling and sequencing and provides an overview of airport departure operations. This paper will present the design of the Portable Electronic FPS system as well as preliminary evaluation results.

  6. Utilization of research methodology in designing and developing an interdisciplinary course in ethics.

    PubMed

    Stone, Jennie Ann M; Haas, Barbara A; Harmer-Beem, Marji J; Baker, David L

    2004-02-01

    Development research methodology was utilized to design an interdisciplinary ethics course for students from seven disciplines: dental hygiene, nursing, nurse anesthesia, occupational therapy, physician assistant, physical therapy, and social work. Two research questions, 'What content areas should be considered for inclusion in an interdisciplinary course in Ethics?' and 'What design framework, format, or structure would best fit the content chosen?' guided the study. An interdisciplinary faculty design team conducted a comparative analysis of each of the seven discipline's codes of ethics to find common topics of interest. Further analysis then grouped these topics into eight categories of professional responsibility. The result was a fifteen-week course with validated content relevant to all disciplines.

  7. Inductive Powering of Subcutaneous Stimulators: Key Parameters and Their Impact on the Design Methodology

    PubMed Central

    Godfraind, Carmen; Debelle, Adrien; Lonys, Laurent; Acuña, Vicente; Doguet, Pascal; Nonclercq, Antoine

    2016-01-01

    Inductive powering of implantable medical devices involves numerous factors acting on the system efficiency and safety in adversarial ways. This paper sheds light on their role and identifies a procedure enabling the system design. The latter allows the problem to be decoupled into four principal steps: the frequency choice, the magnetic link optimization, the secondary circuit design, and finally the primary circuit design. The methodology has been tested on the powering system of a device requiring a power of 300 mW and implanted at a distance of 15 to 30 mm from the outside power source. It allowed the identification of the most critical parameters. A satisfying efficiency of 34% was reached at 21 mm, tending to validate the proposed design procedure. PMID:27478572

  8. Inductive Powering of Subcutaneous Stimulators: Key Parameters and Their Impact on the Design Methodology.

    PubMed

    Godfraind, Carmen; Debelle, Adrien; Lonys, Laurent; Acuña, Vicente; Doguet, Pascal; Nonclercq, Antoine

    2016-06-13

    Inductive powering of implantable medical devices involves numerous factors acting on the system efficiency and safety in adversarial ways. This paper sheds light on their role and identifies a procedure enabling the system design. The latter allows the problem to be decoupled into four principal steps: the frequency choice, the magnetic link optimization, the secondary circuit design, and finally the primary circuit design. The methodology has been tested on the powering system of a device requiring a power of 300 mW and implanted at a distance of 15 to 30 mm from the outside power source. It allowed the identification of the most critical parameters. A satisfying efficiency of 34% was reached at 21 mm, tending to validate the proposed design procedure. PMID:27478572

  9. Design Methodology: ASICs with complex in-pixel processing for Pixel Detectors

    SciTech Connect

    Fahim, Farah

    2014-10-31

    The development of Application Specific Integrated Circuits (ASIC) for pixel detectors with complex in-pixel processing using Computer Aided Design (CAD) tools that are, themselves, mainly developed for the design of conventional digital circuits requires a specialized approach. Mixed signal pixels often require parasitically aware detailed analog front-ends and extremely compact digital back-ends with more than 1000 transistors in small areas below 100μm x 100μm. These pixels are tiled to create large arrays, which have the same clock distribution and data readout speed constraints as in, for example, micro-processors. The methodology uses a modified mixed-mode on-top digital implementation flow to not only harness the tool efficiency for timing and floor-planning but also to maintain designer control over compact parasitically aware layout.

  10. A Digital Methodology for the Design Process of Aerospace Assemblies with Sustainable Composite Processes & Manufacture

    NASA Astrophysics Data System (ADS)

    McEwan, W.; Butterfield, J.

    2011-05-01

    The well established benefits of composite materials are driving a significant shift in design and manufacture strategies for original equipment manufacturers (OEMs). Thermoplastic composites have advantages over the traditional thermosetting materials with regards to sustainability and environmental impact, features which are becoming increasingly pertinent in the aerospace arena. However, when sustainability and environmental impact are considered as design drivers, integrated methods for part design and product development must be developed so that any benefits of sustainable composite material systems can be assessed during the design process. These methods must include mechanisms to account for process-induced part variation and techniques related to re-forming, recycling and decommissioning, which are in their infancy. It is proposed in this paper that predictive techniques related to material specification, part processing and product cost of thermoplastic composite components be integrated within a Through Life Management (TLM) product development methodology as part of a larger strategy of product system modeling to improve disciplinary concurrency, realistic part performance, and to place sustainability at the heart of the design process. This paper reports the enhancement of digital manufacturing tools as a means of drawing simulated part manufacturing scenarios, real-time costing mechanisms, and broader lifecycle performance data capture into the design cycle. The work demonstrates predictive processes for sustainable composite product manufacture and how a Product-Process-Resource (PPR) structure can be customised and enhanced to include design intent driven by 'Real' part geometry and consequent assembly.

  11. A system-of-systems modeling methodology for strategic general aviation design decision-making

    NASA Astrophysics Data System (ADS)

    Won, Henry Thome

    General aviation has long been studied as a means of providing an on-demand "personal air vehicle" that bypasses the traffic at major commercial hubs. This thesis continues this research through development of a system-of-systems modeling methodology applicable to the selection of synergistic product concepts, market segments, and business models. From the perspective of the conceptual design engineer, the design and selection of future general aviation aircraft is complicated by the definition of constraints and requirements, and the tradeoffs among performance and cost aspects. Qualitative problem definition methods have been utilized, although their accuracy in determining specific requirement and metric values is uncertain. In industry, customers are surveyed, and business plans are created through a lengthy, iterative process. In recent years, techniques have been developed for predicting the characteristics of US travel demand based on travel mode attributes, such as door-to-door time and ticket price. As yet, these models treat the contributing systems---aircraft manufacturers and service providers---as independently variable assumptions. In this research, a methodology is developed which seeks to build a strategic design decision-making environment through the construction of a system-of-systems model. The demonstrated implementation brings together models of the aircraft and manufacturer, the service provider, and most importantly the travel demand, representing the behavior of the consumers and the reactive behavior of the suppliers---the manufacturers and transportation service providers---in a common modeling framework. The results indicate an ability to guide the design process---specifically the selection of design requirements---through the optimization of "capability" metrics. Additionally, the results indicate the ability to find synergistic solutions, that is, solutions in which two systems might collaborate to achieve a better result than either acting alone.
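
    The travel-demand component lends itself to a compact illustration. The thesis's actual demand model is not reproduced here; the sketch below is a generic multinomial logit over door-to-door time and ticket price, with hypothetical attributes and coefficients.

        import math

        def mode_shares(modes, beta_time=0.05, beta_cost=0.01):
            """Multinomial logit mode shares from door-to-door time (min)
            and ticket price ($); coefficients are hypothetical."""
            v = {m: -beta_time * t - beta_cost * c for m, (t, c) in modes.items()}
            z = sum(math.exp(u) for u in v.values())
            return {m: math.exp(u) / z for m, u in v.items()}

        # Hypothetical attributes: (door-to-door minutes, ticket price in $).
        demand = {"car": (240, 60), "airline": (180, 220), "air taxi": (150, 300)}
        print(mode_shares(demand))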

  12. LOX/hydrocarbon rocket engine analytical design methodology development and validation. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Niiya, Karen E.; Walker, Richard E.; Pieper, Jerry L.; Nguyen, Thong V.

    1993-01-01

    This final report includes a discussion of the work accomplished during the period from Dec. 1988 through Nov. 1991. The objective of the program was to assemble existing performance and combustion stability models into a usable design methodology capable of designing and analyzing high-performance and stable LOX/hydrocarbon booster engines. The methodology was then used to design a validation engine. The capabilities and validity of the methodology were demonstrated using this engine in an extensive hot-fire test program. The engine used LOX/RP-1 propellants and was tested over a range of mixture ratios, chamber pressures, and acoustic damping device configurations. This volume contains time-domain and frequency-domain stability plots which indicate the pressure perturbation amplitudes and frequencies from approximately 30 tests of a 50K-thrust rocket engine using LOX/RP-1 propellants over a range of chamber pressures from 240 to 1750 psia with mixture ratios from 1.2 to 7.5. The data are from test configurations which used both bitune and monotune acoustic cavities and from tests with no acoustic cavities. The engine had a length of 14 inches and a contraction ratio of 2.0, using a 7.68-inch-diameter injector. The data were taken from both stable and unstable tests. All combustion instabilities were spontaneous in the first tangential mode. Although stability bombs were used and generated overpressures of approximately 20 percent, no tests were driven unstable by the bombs. The stability instrumentation included six high-frequency Kistler transducers in the combustion chamber, a high-frequency Kistler transducer in each propellant manifold, and tri-axial accelerometers. Performance data, both characteristic velocity efficiencies and energy release efficiencies, are presented for those tests of sufficient duration to record steady-state values.
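
    The characteristic velocity efficiencies reported here reduce to the textbook relation c* = Pc·At/mdot. A minimal sketch with illustrative numbers (not values from the test program):

        def c_star(p_c, a_t, mdot):
            """Characteristic velocity c* = Pc * At / mdot (Pa, m^2, kg/s)."""
            return p_c * a_t / mdot

        # Illustrative only: an assumed chamber condition and an assumed
        # one-dimensional-equilibrium theoretical c* of 1800 m/s.
        eta = c_star(p_c=6.9e6, a_t=0.015, mdot=60.0) / 1800.0
        print(f"c* efficiency = {eta:.1%}")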

  14. New Methodology of Designing Inexpensive Hybrid Control-Acquisition Systems for Mechatronic Constructions

    PubMed Central

    Augustyn, Jacek

    2013-01-01

    This article presents a new methodology for designing a hybrid control and acquisition system consisting of a 32-bit SoC microsystem connected via a direct Universal Serial Bus (USB) link to a standard commercial off-the-shelf (COTS) component running the Android operating system, avoiding the use of an additional converter. An Android-based component was chosen to explore the potential for a mobile, compact and energy-efficient solution with easy-to-build user interfaces and easy wireless integration with other computer systems. This paper presents results of a practical implementation and an analysis of experimental real-time performance. It covers the closed control loop time between the sensor/actuator module and the Android operating system as well as the real-time sensor data stream within such a system. Some optimisations are proposed and their influence on real-time performance is investigated. The proposed methodology is intended for the acquisition and control of mechatronic systems, especially mobile robots. It can be used in a wide range of control applications as well as in embedded acquisition-recording devices, including energy quality measurements, smart grids and medicine. It is demonstrated that the proposed methodology can be employed without developing specific device drivers. The latency achieved was less than 0.5 ms and the sensor data stream throughput was on the order of 750 KB/s (compared to 3 ms latency and 300 KB/s in traditional solutions). PMID:24351633
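
    The closed-loop latency figure can in principle be reproduced with a simple echo test. The sketch below is an assumption-laden stand-in, not the authors' test harness: it presumes a hypothetical device that echoes one byte over a USB serial port, and it uses the pyserial package.

        import time
        import serial  # pyserial

        def median_round_trip(port="/dev/ttyACM0", trials=1000):
            """Median round-trip time of a one-byte echo over USB serial."""
            samples = []
            with serial.Serial(port, baudrate=115200, timeout=1) as ser:
                for _ in range(trials):
                    t0 = time.perf_counter()
                    ser.write(b"\x55")
                    ser.read(1)  # hypothetical firmware echoes the byte back
                    samples.append(time.perf_counter() - t0)
            samples.sort()
            return samples[len(samples) // 2]

        print(f"median RTT: {median_round_trip() * 1e3:.3f} ms")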

  15. The scheme machine: A case study in progress in design derivation at system levels

    NASA Technical Reports Server (NTRS)

    Johnson, Steven D.

    1995-01-01

    The Scheme Machine is one of several design projects of the Digital Design Derivation group at Indiana University. It differs from the other projects in its focus on issues of system design and in its connection to surrounding research in programming language semantics, compiler construction, and programming methodology underway at Indiana and elsewhere. The genesis of the project dates to the early 1980s, when digital design derivation research branched from the surrounding research effort in programming languages. Both branches have continued to develop in parallel, with this particular project serving as a bridge. However, by 1990 there remained little real interaction between the branches, and recently we have undertaken to reintegrate them. On the software side, researchers have refined a mathematically rigorous (but not mechanized) treatment starting with the fully abstract semantic definition of Scheme and resulting in an efficient implementation consisting of a compiler and virtual machine model, the latter typically realized with a general-purpose microprocessor. The derivation includes a number of sophisticated factorizations and representations and is also a deep example of the underlying engineering methodology. The hardware research has created a mechanized algebra supporting the tedious and massive transformations often seen at lower levels of design. This work has progressed to the point that large-scale devices, such as processors, can be derived from first-order finite state machine specifications. This is roughly where the language-oriented research stops; thus, together, the two efforts establish a thread from the highest levels of abstract specification to detailed digital implementation. The Scheme Machine project challenges hardware derivation research in several ways, although the individual components of the system are of a similar scale to those we have worked with before. The machine has a custom dual-ported memory to support garbage collection.

  16. Shape slack: a design-manufacturing co-optimization methodology using tolerance information

    NASA Astrophysics Data System (ADS)

    Banerjee, Shayak; Agarwal, Kanak B.; Nassif, Sani; Orshansky, Michael

    2013-01-01

    The move to low-k1 lithography makes it increasingly difficult to print feature sizes that are a small fraction of the wavelength of light. With further delay in the delivery of extreme ultraviolet lithography, these difficulties will motivate the research community to explore increasingly broad solutions. We propose that there is significant research potential in studying the essential premise of the design/manufacturing handoff paradigm. Today this premise revolves around design rules, which define what implementations are legal, and raw shapes, which define design intent and are treated as a fixed requirement for lithography. In reality, layout features may vary within certain tolerances without violating any design constraints. The knowledge of such tolerances can help improve the manufacturability of layout features while still meeting design requirements. We propose a methodology to convert electrical slack in a design to shape slack, or tolerances on individual layout shapes. We show how this can be done for two important implementation fabrics: (a) cell-library-based digital logic and (b) static random access memory. We further develop a tolerance-driven optical proximity correction algorithm that utilizes this shape slack information during mask preparation to ensure that all features print within their shape slacks in the presence of lithographic process variations. Experiments on 45 nm silicon-on-insulator cells using accurate process models show that this approach reduces post-lithography delay errors by 50% and layout hotspots by 47% compared to conventional methods.
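
    The conversion at the heart of the method, electrical slack into geometric tolerance, can be shown with a first-order delay model; the sensitivity number below is hypothetical, and the paper's tolerance-driven OPC is of course far more involved.

        def shape_slack_nm(timing_slack_ps, delay_sensitivity_ps_per_nm):
            """First-order printed-shape tolerance implied by timing slack."""
            return timing_slack_ps / delay_sensitivity_ps_per_nm

        # Hypothetical: 6 ps of path slack and 1.5 ps of delay per nm of
        # gate-length variation allow roughly +/-4 nm of shape tolerance.
        print(f"tolerance: +/-{shape_slack_nm(6.0, 1.5):.1f} nm")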

  17. Deformable Surface Accommodating Intraocular Lens: Second Generation Prototype Design Methodology and Testing

    PubMed Central

    McCafferty, Sean J.; Schwiegerling, Jim T.

    2015-01-01

    Purpose: To present an analysis methodology for developing and evaluating accommodating intraocular lenses incorporating a deformable interface. Methods: The next-generation design of the extruded gel interface intraocular lens is presented. A prototype, based upon a similar previously in vivo proven design, was tested with measurements of actuation force, lens power, interface contour, optical transfer function, and visual Strehl ratio. Prototype-verified mathematical models were used to optimize optical and mechanical design parameters to maximize the image quality and minimize the force required to accommodate. Results: The prototype lens produced adequate image quality with the available physiologic accommodating force. The iterative mathematical modeling based upon the prototype yielded maximized optical and mechanical performance through the maximum allowable gel thickness to extrusion diameter ratio, the maximum feasible refractive index change at the interface, and minimum gel material properties in Poisson's ratio and Young's modulus. Conclusions: The design prototype performed well. It operated within the physiologic constraints of the human eye, including the force available for full accommodative amplitude using the eye's natural focusing feedback, while maintaining image quality in the space available. The parameters that optimized optical and mechanical performance were delineated as those that minimize both asphericity and actuation pressure. The design parameters outlined herein can be used as a template to maximize the performance of a deformable interface intraocular lens. Translational Relevance: The article combines a multidisciplinary basic science approach from biomechanics, optical science, and ophthalmology to optimize an intraocular lens design suitable for preliminary animal trials. PMID:25938005
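
    The optical leverage of the deformable interface can be illustrated with the single-surface power formula P = (n2 - n1)/R; the indices and radii below are hypothetical, not the prototype's values.

        def surface_power(n1, n2, radius_m):
            """Refractive power (diopters) of one spherical interface."""
            return (n2 - n1) / radius_m

        # Hypothetical gel/aqueous interface with an index step of 0.10
        # steepening from R = 10 mm to R = 7 mm under actuation force.
        p_rest = surface_power(1.34, 1.44, 0.010)  # relaxed state
        p_near = surface_power(1.34, 1.44, 0.007)  # actuated state
        print(f"accommodative change: {p_near - p_rest:.1f} D")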

  18. Proposal of a methodology for the design of offshore wind farms

    NASA Astrophysics Data System (ADS)

    Esteban, Dolores; Diez, J. Javier; Santos Lopez, J.; Negro, Vicente

    2010-05-01

    Wind power installed at sea is still very scarce, with only 1,500 megawatts in operation in mid-2009. Although the first offshore wind farm experiment took place in 1990, the facilities built up to now have been mainly pilot projects, confirming the incipient state of offshore wind power. Nevertheless, the technology is currently being strongly promoted, especially by the governments of some countries - like the United Kingdom, Germany, etc. - above all because of the general commitments made to reduce the emission of greenhouse gases. All of these factors lead to the prediction of a promising future for offshore wind power. Nevertheless, no general methodology has yet been established for the design and management of this kind of installation. This paper includes some of the results of a research project consisting of the elaboration of a methodology to enable the optimization of the global process of the operations leading to the implementation of offshore wind facilities. The proposed methodology allows the planning of offshore wind projects according to an integral management policy, enabling not only the technical and financial feasibility of the offshore wind project to be achieved, but also respect for the environment. For that, it has been necessary to take into account multiple factors, including the territory, the terrain, the physical-chemical properties of the contact area between the atmosphere and the ocean, the dynamics resulting in both as a consequence of the Earth's behaviour as a heat machine, external geodynamics, internal geodynamics, planetary dynamics, biocenosis, the legislative and financial framework, human activities, wind turbines, met masts, electric substations and lines, foundations, logistics and the project's financial profitability. For its validation, this methodology has been applied to different offshore wind farms in operation.

  19. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 3

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chin-Yere; Onyebueke, Landon

    1996-01-01

    Structural failure is rarely a "sudden death" type of event; such sudden failures may occur only under abnormal loadings like bomb or gas explosions and very strong earthquakes. In most cases, structures fail due to damage accumulated under normal loadings such as wind loads and dead and live loads. The consequence of cumulative damage affects the reliability of surviving components and finally causes collapse of the system. The cumulative damage effects on system reliability under time-invariant loadings are of practical interest in structural design and are therefore investigated in this study. The scope of this study is, however, restricted to the consideration of damage accumulation as the increase in the number of failed components due to the violation of their strength limits.
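
    The mechanism described, system reliability degrading as failed components accumulate, can be illustrated with a toy Monte Carlo; the normal strength distribution and the k-out-of-n collapse rule below are assumptions for illustration, not the study's model.

        import random

        def system_failure_prob(n=20, k=5, load=1.0, trials=100_000, seed=1):
            """P(at least k of n components violate their strength limit
            under a common time-invariant load), toy k-out-of-n rule."""
            rng = random.Random(seed)
            fails = 0
            for _ in range(trials):
                damaged = sum(rng.gauss(1.5, 0.3) < load for _ in range(n))
                fails += damaged >= k
            return fails / trials

        print(f"P(system collapse) ~ {system_failure_prob():.4f}")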

  20. P-band Radar Retrieval of Root-Zone Soil Moisture: AirMOSS Methodology, Progress, and Improvements

    NASA Astrophysics Data System (ADS)

    Moghaddam, M.; Tabatabaeenejad, A.; Chen, R.

    2015-12-01

    The AirMOSS mission seeks to improve the estimates of the North American Net Ecosystem Exchange (NEE) by providing high-resolution observations of the root-zone soil moisture (RZSM) over regions representative of the major North American biomes. The radar snapshots are used to generate estimates of RZSM. To retrieve RZSM, we use a discrete scattering model integrated with layered-soil scattering models. The soil moisture profile is represented as a quadratic function of the form az² + bz + c, where z is the depth and a, b, and c are the coefficients to be retrieved. The ancillary data necessary to characterize a pixel are available from various databases. We apply the retrieval method to the radar data acquired over AirMOSS sites including Canada's BERMS, Walnut Gulch in Arizona, MOISST in Oklahoma, Tonzi Ranch in California, and Metolius in Oregon, USA. The estimated soil moisture profile is validated against in-situ soil moisture measurements. We have continued to improve the accuracy of retrievals as the delivery of the RZSM products has progressed since 2012. For example, the 'threshold depth' (the depth up to which the retrieval is mathematically valid) has been reduced from 100 cm to 50 cm after the retrieval accuracy was assessed both mathematically and physically. Moreover, we progressively change the implementation of the inversion code and its subroutines as we find more accurate and efficient ways of performing the mathematical operations. The latest AirMOSS results (including soil moisture maps, validation plots, and scatter plots) as well as all improvements applied to the retrieval algorithm, including the one mentioned above, will be reported in the talk, following a brief description of the retrieval methodology. Fig. 1 shows a validation plot for a flight over Tonzi Ranch from September 2014 (a) and a scatter plot for various threshold depths using 2012 and 2013 data.
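
    Because the profile is the quadratic az² + bz + c, the retrieval target is just three coefficients. A minimal sketch fitting such a profile to hypothetical in-situ points (the numbers are illustrative, not AirMOSS data):

        import numpy as np

        # Hypothetical volumetric soil moisture (m^3/m^3) at depths z (cm).
        z = np.array([5.0, 10.0, 20.0, 35.0, 50.0])
        theta = np.array([0.12, 0.15, 0.20, 0.24, 0.25])

        a, b, c = np.polyfit(z, theta, deg=2)  # least-squares quadratic fit
        profile = np.poly1d([a, b, c])         # theta(z) = a*z**2 + b*z + c
        print(f"a={a:.2e}, b={b:.2e}, c={c:.3f}, theta(25 cm)={profile(25.0):.3f}")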

  1. Design, Progressive Modeling, Manufacture, and Testing of Composite Shield for Turbine Engine Blade Containment

    NASA Technical Reports Server (NTRS)

    Binienda, Wieslaw K.; Sancaktar, Erol; Roberts, Gary D. (Technical Monitor)

    2002-01-01

    An effective design methodology was established for composite jet engine containment structures. The methodology included the development of full- and reduced-size prototypes and FEA models of the containment structure, experimental and numerical examination of the modes of failure due to a turbine blade-out event, identification of materials and design candidates for future industrial applications, and the design and building of prototypes for testing and evaluation purposes.

  2. Human factors analysis and design methods for nuclear waste retrieval systems. Human factors design methodology and integration plan

    SciTech Connect

    Casey, S.M.

    1980-06-01

    The purpose of this document is to provide an overview of the recommended activities and methods to be employed by a team of human factors engineers during the development of a nuclear waste retrieval system. This system, as it is presently conceptualized, is intended to be used for the removal of storage canisters (each canister containing a spent fuel rod assembly) located in an underground salt bed depository. This document, and the others in this series, have been developed for the purpose of implementing human factors engineering principles during the design and construction of the retrieval system facilities and equipment. The methodology presented has been structured around a basic systems development effort involving preliminary development, equipment development, personnel subsystem development, and operational test and evaluation. Within each of these phases, the recommended activities of the human engineering team have been stated, along with descriptions of the human factors engineering design techniques applicable to the specific design issues. Explicit examples of how the techniques might be used in the analysis of human tasks and equipment required in the removal of spent fuel canisters have been provided. Only those techniques having possible relevance to the design of the waste retrieval system have been reviewed. This document is intended to provide the framework for integrating human engineering with the rest of the system development effort. The activities and methodologies reviewed in this document have been discussed in the general order in which they will occur, although the time frame (the total duration of the development program in years and months) in which they should be performed has not been discussed.

  3. Application of Adjoint Methodology to Supersonic Aircraft Design Using Reversed Equivalent Areas

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.

    2013-01-01

    This paper presents an approach to shaping an aircraft to equivalent-area-based objectives using the discrete adjoint approach. Equivalent areas can be obtained either using the reversed augmented Burgers equation or by direct conversion of off-body pressures into equivalent area. Formal coupling with CFD allows computation of sensitivities of equivalent-area objectives with respect to aircraft shape parameters. The exactness of the adjoint sensitivities is verified against derivatives obtained using the complex-step approach. This methodology has the benefit of using designer-friendly equivalent areas in the shape design of low-boom aircraft. Shape optimization results with equivalent-area cost functionals are discussed and further refined using ground-loudness-based objectives.
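
    The complex-step check used for verification is worth a two-line illustration: perturbing the input along the imaginary axis yields a derivative with no subtractive cancellation, so step sizes as small as 1e-30 are usable. The function below is arbitrary, not one of the paper's cost functionals.

        import cmath, math

        def complex_step_derivative(f, x, h=1e-30):
            """df/dx via the complex-step formula Im[f(x + i*h)] / h."""
            return f(complex(x, h)).imag / h

        f = lambda x: cmath.exp(x) / cmath.sqrt(x)  # any smooth analytic function
        exact = math.exp(1.5) * (1.5 ** -0.5 - 0.5 * 1.5 ** -1.5)
        print(complex_step_derivative(f, 1.5), exact)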

  4. A hybrid design methodology for structuring an Integrated Environmental Management System (IEMS) for shipping business.

    PubMed

    Celik, Metin

    2009-03-01

    The International Safety Management (ISM) Code defines a broad framework for the safe management and operation of merchant ships, maintaining high standards of safety and environmental protection. On the other hand, ISO 14001:2004 provides a generic, worldwide environmental management standard that has been utilized by several industries. Both the ISM Code and ISO 14001:2004 have the practical goal of establishing a sustainable Integrated Environmental Management System (IEMS) for shipping businesses. This paper presents a hybrid design methodology that shows how requirements from both standards can be combined into a single execution scheme. Specifically, the Analytic Hierarchy Process (AHP) and Fuzzy Axiomatic Design (FAD) are used to structure an IEMS for ship management companies. This research provides decision aid to maritime executives in order to enhance the environmental performance in the shipping industry. PMID:19038488
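
    The AHP half of such a scheme reduces to principal-eigenvector weighting of a pairwise comparison matrix. The sketch below uses a hypothetical 3x3 comparison of IEMS criteria, not the paper's matrices.

        import numpy as np

        # Hypothetical Saaty-scale comparisons of three IEMS criteria.
        A = np.array([[1.0, 3.0, 5.0],
                      [1 / 3, 1.0, 2.0],
                      [1 / 5, 1 / 2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        lam_max = np.real(eigvals).max()
        w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
        w = w / w.sum()                # priority weights
        ci = (lam_max - 3) / (3 - 1)   # consistency index for n = 3
        print(f"weights = {np.round(w, 3)}, CI = {ci:.3f}")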

  5. Application of Adjoint Methodology in Various Aspects of Sonic Boom Design

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.

    2014-01-01

    One of the advances in computational design has been the development of adjoint methods allowing efficient calculation of sensitivities in gradient-based shape optimization. This paper discusses two new applications of adjoint methodology that have been developed to aid in sonic boom mitigation exercises. In the first, equivalent area targets are generated using adjoint sensitivities of selected boom metrics. These targets may then be used to drive the vehicle shape during optimization. The second application is the computation of adjoint sensitivities of boom metrics on the ground with respect to parameters such as flight conditions, propagation sampling rate, and selected inputs to the propagation algorithms. These sensitivities enable the designer to make more informed selections of flight conditions at which the chosen cost functionals are less sensitive.

  6. Assessment of current structural design methodology for high-temperature reactors based on failure tests

    SciTech Connect

    Corum, J.M.; Sartory, W.K.

    1985-01-01

    A mature design methodology, consisting of inelastic analysis methods, provided in Department of Energy guidelines, and failure criteria, contained in ASME Code Case N-47, exists in the United States for high-temperature reactor components. The objective of this paper is to assess the adequacy of this overall methodology by comparing predicted inelastic deformations and lifetimes with observed results from structural failure tests and from an actual service failure. Comparisons are presented for three types of structural situations: (1) nozzle-to-spherical shell specimens, where stresses at structural discontinuities lead to cracking, (2) welded structures, where metallurgical discontinuities play a key role in failures, and (3) thermal shock loadings of cylinders and pipes, where thermal discontinuities can lead to failure. The comparisons between predicted and measured inelastic responses are generally reasonably good; quantities are sometimes somewhat overpredicted and sometimes underpredicted. However, even seemingly small discrepancies can have a significant effect on structural life, and lifetimes are not always as closely predicted. For a few cases, the lifetimes are substantially overpredicted, which raises questions regarding the adequacy of existing design margins.

  7. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
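
    The 'delta' form is easy to see on a generic linear system: the residual is evaluated with the exact operator while the correction uses a cheap approximate one, so the iteration converges to the exact solution. The Jacobi-style splitting below is a simple stand-in for the spatially split approximate factorization.

        import numpy as np

        def delta_form_solve(A, b, n_iter=200):
            """Incremental iterative ('correction' form) solution of A x = b."""
            x = np.zeros_like(b)
            m_inv = 1.0 / np.diag(A)   # approximate operator: diag(A)
            for _ in range(n_iter):
                r = b - A @ x          # residual uses the exact operator
                x = x + m_inv * r      # update comes from the correction
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(delta_form_solve(A, b), np.linalg.solve(A, b))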

  8. Integrated active and passive control design methodology for the LaRC CSI evolutionary model

    NASA Technical Reports Server (NTRS)

    Voth, Christopher T.; Richards, Kenneth E., Jr.; Schmitz, Eric; Gehling, Russel N.; Morgenthaler, Daniel R.

    1994-01-01

    A general design methodology to integrate active control with passive damping was demonstrated on the NASA LaRC CSI Evolutionary Model (CEM), a ground testbed for future large, flexible spacecraft. Vibration suppression controllers designed for Line-of-Sight (LOS) minimization were successfully implemented on the CEM. A frequency-shaped H2 methodology was developed, allowing the designer to specify the roll-off of the MIMO compensator. A closed-loop bandwidth of 4 Hz, including the six rigid body modes and the first three dominant elastic modes of the CEM, was achieved. Good agreement was demonstrated between experimental data and analytical predictions for the closed-loop frequency response and random tests. Using the Modal Strain Energy (MSE) method, a passive damping treatment consisting of 60 viscoelastically damped struts was designed, fabricated and implemented on the CEM. Damping levels for the targeted modes were more than an order of magnitude larger than for the undamped structure. Using measured loss and stiffness data for the individual damped struts, analytical predictions of the damping levels were very close to the experimental values in the 1-10 Hz frequency range where the open-loop model matched the experimental data. An integrated active/passive controller was successfully implemented on the CEM and was evaluated against an active-only controller. A two-fold increase in the effective control bandwidth and further reductions of 30 to 50 percent in the LOS RMS outputs were achieved compared to an active-only controller. Superior performance was also obtained compared to a High-Authority/Low-Authority (HAC/LAC) controller.
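
    The Modal Strain Energy design rule is a one-liner: the modal loss factor is the material loss factor scaled by the fraction of modal strain energy carried by the viscoelastic elements. The figures below are hypothetical, not CEM values.

        def modal_loss_factor(eta_material, mse_viscoelastic, mse_total):
            """MSE method: eta_mode = eta_material * (MSE_v / MSE_total)."""
            return eta_material * mse_viscoelastic / mse_total

        # Hypothetical: material loss factor 1.0 with 8% of the modal
        # strain energy in the damped struts.
        print(f"eta_mode = {modal_loss_factor(1.0, 0.08, 1.0):.3f}")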

  9. Modeling and Design Analysis Methodology for Tailoring of Aircraft Structures with Composites

    NASA Technical Reports Server (NTRS)

    Rehfield, Lawrence W.

    2004-01-01

    Composite materials provide design flexibility in that fiber placement and orientation can be specified and a variety of material forms and manufacturing processes are available. It is possible, therefore, to 'tailor' the structure to a high degree in order to meet specific design requirements in an optimum manner. Common industrial practices, however, have limited the choices designers make. One of the reasons for this is that there is a dearth of conceptual/preliminary design analysis tools specifically devoted to identifying structural concepts for composite airframe structures. Large-scale finite element simulations are not suitable for such purposes. The present project has been devoted to creating modeling and design analysis methodology for use in the tailoring process of aircraft structures. Emphasis has been given to creating bend-twist elastic coupling in high-aspect-ratio wings or other lifting surfaces. The direction of our work was in concert with the overall NASA effort Twenty-First Century Aircraft Technology (TCAT). A multi-disciplinary team was assembled by Dr. Damodar Ambur to work on wing technology, which included our project.

  10. A robust rotorcraft flight control system design methodology utilizing quantitative feedback theory

    NASA Technical Reports Server (NTRS)

    Gorder, Peter James

    1993-01-01

    Rotorcraft flight control systems present design challenges which often exceed those associated with fixed-wing aircraft. First, large variations in the response characteristics of the rotorcraft result from the wide range of airspeeds of typical operation (hover to over 100 kts). Second, the assumption of vehicle rigidity often employed in the design of fixed-wing flight control systems is rarely justified in rotorcraft, where rotor degrees of freedom can have a significant impact on the system performance and stability. This research was intended to develop a methodology for the design of robust rotorcraft flight control systems. Quantitative Feedback Theory (QFT) was chosen as the basis for the investigation. Quantitative Feedback Theory is a technique which accounts for variability in the dynamic response of the controlled element in the design of robust control systems. It was developed to address a Multiple-Input Single-Output (MISO) design problem, and utilizes two degrees of freedom to satisfy the design criteria. Two techniques were examined for extending the QFT MISO technique to the design of a Multiple-Input Multiple-Output (MIMO) flight control system (FCS) for a UH-60 Black Hawk helicopter. In the first, a set of MISO systems, mathematically equivalent to the MIMO system, was determined. QFT was applied to each member of the set simultaneously. In the second, the same set of equivalent MISO systems was analyzed sequentially, with closed-loop response information from each loop utilized in subsequent MISO designs. The results of each technique were compared, and the advantages of the second, termed Sequential Loop Closure, were clearly evident.

  11. Integrated circuit layout design methodology for deep sub-wavelength processes

    NASA Astrophysics Data System (ADS)

    Torres Robles, Juan Andres

    One of the critical aspects of semiconductor fabrication is the patterning of multiple design layers onto silicon wafers. Since 180 nm processes came online, the semiconductor industry has operated under conditions in which the critical features are smaller than the wavelength of light used during the patterning process. Such sub-wavelength conditions present many challenges because topology, rather than feature width and space, defines the yield characteristics of the devices. Pattern variability can contribute as much as 80% of the total timing margins defined by traditional SPICE corner models. Because feature variability is undesirable from electrical considerations, this work proposes a physical design verification methodology that emphasizes pattern robustness to process variations. This new method is based on a framework composed of manufacturability objects, operators and guidelines, which permits the definition of a scoring system ranking the manufacturing process and the manufacturability of the designs. This framework is intended to alleviate circuit design and verification challenges and is based on three new concepts: the first relates to compact process model requirements. The second involves the definition of a new design object, called a pv-Band, which reflects layout sensitivity to process variations. The third is the specification of two manufacturability metrics that, when optimized, can improve yield by accounting for layout sensitivities across multiple design levels (e.g., Active, polysilicon, contact, metal 1, etc.). By integrating these new concepts (process models, pv-Bands and manufacturability metrics) with existing knowledge, this work moves forward the state of the art of physical design and verification of integrated circuits subject to sub-wavelength effects.

  12. Development of a design methodology for pipelines in ice scoured seabeds

    SciTech Connect

    Clark, J.I.; Paulin, M.J.; Lach, P.R.; Yang, Q.S.; Poorooshasb, H.

    1994-12-31

    Large areas of the continental shelf of northern oceans are frequently scoured or gouged by moving bodies of ice such as icebergs and sea ice keels associated with pressure ridges. This phenomenon presents a formidable challenge when the route of a submarine pipeline is intersected by the scouring ice. It is generally acknowledged that if a pipeline, laid on the seabed, were hit by an iceberg or a pressure ridge keel, the forces imposed on the pipeline would be much greater than it could practically withstand. The pipeline must therefore be buried to avoid direct contact with ice, but it is very important to determine with some assurance the minimum depth required for safety, for both economical and environmental reasons. The safe burial depth of a pipeline, however, cannot be determined directly from the relatively straightforward measurement of maximum scour depth. The major design consideration is the determination of the potential sub-scour deformation of the ice-scoured soil. Forces transmitted through the soil and soil displacement around the pipeline could load the pipeline to failure if not taken into account in the design. If the designer can predict the forces transmitted through the soil, the pipeline can be designed to withstand these external forces using conventional design practice. In this paper, the authors outline a design methodology that is based on phenomenological studies of ice-scoured terrain, both modern and relict, laboratory tests, centrifuge modeling, and numerical analysis. The implications of these studies, which could assist in the safe and economical design of pipelines in ice-scoured terrain, will also be discussed.

  13. A Test Methodology for Determining Space-Readiness of Xilinx SRAM-Based FPGA Designs

    SciTech Connect

    Quinn, Heather M; Graham, Paul S; Morgan, Keith S; Caffrey, Michael P

    2008-01-01

    Using reconfigurable, static random-access memory (SRAM) based field-programmable gate arrays (FPGAs) for space-based computation has been an exciting area of research for the past decade. Since both the circuit and the circuit's state are stored in radiation-sensitive memory, both could be altered by the harsh space radiation environment. Both the circuit and the circuit's state can be protected by triple-modular redundancy (TMR), but applying TMR to FPGA user designs is often an error-prone process. Faulty application of TMR could cause the FPGA user circuit to output incorrect data. This paper will describe a three-tiered methodology for testing FPGA user designs for space-readiness. We will describe the standard approach to testing FPGA user designs using a particle accelerator, as well as two methods using fault injection and a modeling tool. While accelerator testing is the current 'gold standard' for pre-launch testing, we believe the use of fault injection and modeling tools allows for easy, cheap and uniform access for discovering errors early in the design process.
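
    The intent of correct TMR application can be shown in miniature with a bitwise 2-of-3 voter and a single injected upset; this is a software analogy for illustration, not the Xilinx tool flow.

        def majority(a: int, b: int, c: int) -> int:
            """Bitwise 2-of-3 majority vote, the core of a TMR voter."""
            return (a & b) | (b & c) | (a & c)

        replicas = [0b1011, 0b1011, 0b1011]
        replicas[1] ^= 0b0100                  # inject a single-event upset
        assert majority(*replicas) == 0b1011   # the voter masks the upset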

  14. Development of a decision-making methodology to design a water quality monitoring network.

    PubMed

    Keum, Jongho; Kaluarachchi, Jagath J

    2015-07-01

    The number of water quality monitoring stations in the USA has decreased over the past few decades. Scarcity of observations can easily produce prediction uncertainty due to unreliable model calibration. An effective water quality monitoring network is important not only for model calibration and water quality prediction but also for resources management. Redundant or improperly located monitoring stations may cause increased monitoring costs without improvement to the understanding of water quality in watersheds. In this work, a decision-making methodology is proposed to design a water quality monitoring network by providing an adequate number of monitoring stations and their approximate locations at the eight-digit hydrologic unit code (HUC8) scale. The proposed methodology is demonstrated with an example at the Upper Colorado River Basin (UCRB), where salinity is a serious concern. The level of monitoring redundancy or scarcity is defined by an index, the station ratio (SR), which represents a monitoring density based on the water quality load originating within a subbasin. By comparing the number of stations from a selected target SR with the available number of stations, including the actual and the potential stations, the suggested number of stations in each subbasin was decided. If monitoring stations are primarily located in the low salinity loading subbasins, the average actual SR tends to increase, and vice versa. Results indicate that the spatial distribution of monitoring locations in 2011 is concentrated on low salinity loading subbasins, and therefore, additional monitoring is required for the high salinity loading subbasins. The proposed methodology shows that the SR is a simple and practical indicator for monitoring density. PMID:26113203
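
    The station ratio concept can be sketched as stations per unit share of basin-wide load; the exact normalization in the paper may differ, and the subbasin figures below are hypothetical.

        def station_ratio(n_stations, subbasin_load, basin_load):
            """Monitoring density: stations per unit fraction of total load."""
            return n_stations / (subbasin_load / basin_load)

        # Hypothetical HUC8 subbasins: (stations, salinity load in kt/yr).
        subbasins = {"A": (6, 20.0), "B": (1, 80.0)}
        total = sum(load for _, load in subbasins.values())
        for name, (n, load) in subbasins.items():
            print(f"{name}: SR = {station_ratio(n, load, total):.2f}")
        # High-load subbasin B gets the low SR, flagging monitoring scarcity.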

  16. Egs Exploration Methodology Development Using the Dixie Valley Geothermal Wellfield as a Calibration Site, a Progress Report

    NASA Astrophysics Data System (ADS)

    Iovenitti, J. L.; Blackwell, D. D.; Sainsbury, J.; Tibuleac, I. M.; Waibel, A.; Cladouhos, T. T.; Karlin, R. E.; Kennedy, B. M.; Isaaks, E.; Wannamaker, P. E.; Clyne, M.; Callahan, O.

    2011-12-01

    An Engineered Geothermal System (EGS) exploration methodology is being developed using the Dixie Valley geothermal system in Nevada as a field laboratory. This area was chosen as the test site because it has an extensive public domain database and deep geothermal wells allowing for calibration of the developed methodology. The calibration effort is focused on the Dixie Valley Geothermal Wellfield (DVGW), an area with 30 geothermal wells. Calibration will be based on cross-correlation of qualitative and quantitative results with known well conditions. This project is structured in the following manner: (Task 1) review and assess existing public domain and other available data (baseline data); (Task 2) develop and populate a GIS-database; (Task 3) develop a baseline (existing public domain data) geothermal conceptual model, evaluate the geostatistical relationships between the various data sets, and generate a baseline EGS favorability map from the surface to a 5-km depth focused on identifying EGS drilling targets; (Task 4) collect new gravity, seismic, magneto-telluric (MT), geologic, and geochemical data to fill in data gaps and improve model resolution; and (Task 5) update the GIS-database with the newly acquired data and repeat the elements of Task 3, incorporating the baseline and new data to generate an enhanced EGS favorability map. Innovative aspects of this project include: (1) developing interdisciplinary method(s) for synthesizing, integrating, and evaluating geoscience data both qualitatively and quantitatively; (2) demonstrating new seismic techniques based on ambient noise, a passive survey not requiring local earthquakes and a relatively inexpensive method to image seismic velocity, attenuation, and density; (3) determining if seismic data can infer temperature and lithology at depth; (4) extending 2D MT modeling/mapping to 3D MT; (5) generating an MT-derived temperature map; and (6) jointly analyzing gravity, magnetic, seismic, and MT

  17. Assessment of an effective quasirelativistic methodology designed to study astatine chemistry in aqueous solution.

    PubMed

    Champion, Julie; Seydou, Mahamadou; Sabatié-Gogova, Andrea; Renault, Eric; Montavon, Gilles; Galland, Nicolas

    2011-09-01

    A cost-effective computational methodology designed to study astatine (At) chemistry in aqueous solution has been established. It is based on two-component spin-orbit density functional theory calculations and solvation calculations using the conductor-like polarizable continuum model in conjunction with specific astatine cavities. Theoretical calculations are confronted with experimental data measured for complexation reactions between metallic forms of astatine (At(+) and AtO(+)) and inorganic ligands (Cl(-), Br(-) and SCN(-)). For each reaction, both 1:1 and 1:2 complexes are evidenced. The experimental trends regarding the thermodynamic constants (K) can be reproduced qualitatively and quantitatively. The mean signed error on computed Log K values is -0.4, which corresponds to a mean signed error smaller than 1 kcal mol(-1) on free energies of reaction. Theoretical investigations show that the reactivity of cationic species of astatine is highly sensitive to spin-orbit coupling and solvent effects. At the moment, the presented computational methodology appears to be the only tool to gain an insight into astatine chemistry at a molecular level. PMID:21769335
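
    The quoted equivalence between a 0.4 mean signed error in log K and an error under 1 kcal/mol in free energy follows directly from dG = -RT ln(10) log K:

        import math

        R_KCAL = 1.987204e-3  # gas constant in kcal / (mol K)

        def dg_from_logk(log_k, temperature_k=298.15):
            """Reaction free energy (kcal/mol) from an equilibrium log K."""
            return -R_KCAL * temperature_k * math.log(10) * log_k

        print(f"{abs(dg_from_logk(-0.4)):.2f} kcal/mol")  # ~0.55 < 1 kcal/mol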

  19. Formal Learning Sequences and Progression in the Studio: A Framework for Digital Design Education

    ERIC Educational Resources Information Center

    Wärnestål, Pontus

    2016-01-01

    This paper examines how to leverage the design studio learning environment throughout long-term Digital Design education in order to support students to progress from tactical, well-defined, device-centric routine design to confidently designing sustainable solutions for strategic, complex problems for a wide range of devices and platforms in the…

  20. Designing reasonable accommodation of the workplace: a new methodology based on risk assessment.

    PubMed

    Pigini, L; Andrich, R; Liverani, G; Bucciarelli, P; Occhipinti, E

    2010-05-01

    If working tasks are carried out in inadequate conditions, workers with functional limitations may, over time, risk developing further disabilities. While several validated risk assessment methods exist for able-bodied workers, few studies have been carried out for workers with disabilities. This article, which reports the findings of a study funded by the Italian Ministry of Labour, proposes a general methodology for the technical and organisational re-design of a worksite, based on risk assessment and irrespective of any worker disability. To this end, a sample of 16 disabled workers, composed of people with either mild or severe motor disabilities, was recruited. Their jobs include business administration (5), computer programmer (1), housewife (1), mechanical worker (2), textile worker (1), bus driver (1), nurse (2), electrical worker (1), teacher (1), warehouseman (1). By using a mix of risk assessment methods and the International Classification of Functioning (ICF) taxonomy, their worksites were re-designed in view of a reasonable accommodation, and a prospective evaluation was carried out to check whether the new design would eliminate the risks. In one case - a man with congenital malformations who works as a help-desk operator for technical assistance in the Information and Communication Technology (ICT) department of a big organisation - the accommodation was actually carried out within the time span of the study, thus making it possible to confirm the hypotheses raised in the prospective assessment. PMID:20131973

  2. [Principles and methodology for ecological rehabilitation and security pattern design in key project construction].

    PubMed

    Chen, Li-Ding; Lu, Yi-He; Tian, Hui-Ying; Shi, Qian

    2007-03-01

    Global ecological security becomes increasingly important with intensive human activities. The function of ecological security is influenced by human activities, and in return, the efficiency of human activities is also affected by the patterns of regional ecological security. Since the 1990s, China has initiated the construction of key projects such as the "Yangtze Three Gorges Dam", "Qinghai-Tibet Railway", "West-to-East Gas Pipeline", "West-to-East Electricity Transmission" and "South-to-North Water Transfer", etc. The interaction between these projects and regional ecological security has particularly attracted the attention of the Chinese government. Developing an ecological rehabilitation system and designing a regional ecological security pattern are important not only for regional environmental protection, but also for the smooth implementation of the projects themselves. This paper made a systematic analysis of the types and characteristics of key project construction and their effects on the environment, and on this basis, brought forward the basic principles and methodology for ecological rehabilitation and security pattern design in such construction. It was considered that the following issues should be addressed in the implementation of a key project: 1) analysis and evaluation of the current regional ecological environment, 2) evaluation of anthropogenic disturbances and their ecological risk, 3) regional ecological rehabilitation and security pattern design, 4) scenario analysis of the environmental benefits of the regional ecological security pattern, 5) re-optimization of the regional ecological system framework, and 6) establishment of a regional ecosystem management plan.

  3. Design methodology accounting for fabrication errors in manufactured modified Fresnel lenses for controlled LED illumination.

    PubMed

    Shim, Jongmyeong; Kim, Joongeok; Lee, Jinhyung; Park, Changsu; Cho, Eikhyun; Kang, Shinill

    2015-07-27

    The increasing demand for lightweight, miniaturized electronic devices has prompted the development of small, high-performance optical components for light-emitting diode (LED) illumination. As such, the Fresnel lens is widely used in applications due to its compact configuration. However, the vertical groove angle between the optical axis and the groove inner facets in a conventional Fresnel lens creates an inherent Fresnel loss, which degrades optical performance. Modified Fresnel lenses (MFLs) have been proposed in which the groove angles along the optical paths are carefully controlled; however, in practice, the optical performance of MFLs is inferior to the theoretical performance due to fabrication errors, as conventional design methods do not account for fabrication errors as part of the design process. In this study, the Fresnel loss and the loss area due to microscopic fabrication errors in the MFL were theoretically derived to determine optical performance. Based on this analysis, a design method for the MFL accounting for the fabrication errors was proposed. MFLs were fabricated using an ultraviolet imprinting process and an injection molding process, two representative processes with differing fabrication errors. The MFL fabrication error associated with each process was examined analytically and experimentally to investigate our methodology. PMID:26367631
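
    The Fresnel loss at issue can be quantified at normal incidence with the standard reflectance formula; the refractive indices below are generic acrylic/air values, not the paper's materials.

        def fresnel_reflectance(n1, n2):
            """Normal-incidence Fresnel reflectance at an n1 -> n2 interface."""
            return ((n1 - n2) / (n1 + n2)) ** 2

        # Generic acrylic (n ~ 1.49) to air: about 3.9% reflected per facet.
        print(f"R = {fresnel_reflectance(1.49, 1.0):.1%}")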

  4. Robust design of spot welds in automotive structures: A decision-making methodology

    NASA Astrophysics Data System (ADS)

    Ouisse, M.; Cogan, S.

    2010-05-01

    Automotive structures include thousands of spot welds whose design must allow the assembled vehicle to satisfy a wide variety of performance constraints including static, dynamic and crash criteria. The objective of a standard optimization strategy is to reduce the number of spot welds as much as possible while satisfying all the design objectives. However, a classical optimization of the spot weld distribution using an exhaustive search approach is simply not feasible due to the very high order of the design space and the subsequently prohibitive calculation costs. Moreover, even if this calculation could be done, the result would not necessarily be very informative with respect to the design robustness to manufacturing uncertainties (location of welds and defective welds) and to the degradation of spot welds due to fatigue effects over the lifetime of the vehicle. In this paper, a decision-making methodology is presented which allows some aspects of the robustness issues to be integrated into the spot weld design process. The starting point is a given distribution of spot welds on the structure, which is based on both engineering know-how and preliminary critical numerical results, in particular criteria such as crash behavior. An over-populated spot weld distribution is then built in order to satisfy the remaining design criteria, such as static torsion angle and modal behavior. Then, an efficient optimization procedure based on energy considerations is used to eliminate redundant spot welds while preserving as far as possible the nominal structural behavior. The resulting sub-optimal solution is then used to provide a decision indicator for defining effective quality control procedures (e.g. visual post-assembly inspection of a small number of critical spot welds) as well as designing redundancy into critical zones. The final part of the paper is related to comparing the robustness of competing designs. Some decision-making indicators are presented to help the

  5. A game-based decision support methodology for competitive systems design

    NASA Astrophysics Data System (ADS)

    Briceno, Simon Ignacio

    This dissertation describes the development of a game-based methodology that facilitates the exploration and selection of research and development (R&D) projects under uncertain competitive scenarios. The proposed method provides an approach that analyzes competitor positioning and formulates response strategies to forecast the impact of technical design choices on a project's market performance. A critical decision in the conceptual design phase of propulsion systems is the selection of the best architecture, centerline, core size, and technology portfolio. This selection can be challenging when considering evolving requirements from both the airframe manufacturing company and the airlines in the market. Furthermore, the exceedingly high cost of core architecture development and its associated risk make this strategic architecture decision the most important one for an engine company. Traditional conceptual design processes emphasize performance and affordability as their main objectives. These areas alone, however, do not provide decision-makers with enough information as to how successful their engine will be in a competitive market. A key objective of this research is to examine how firm characteristics such as their relative differences in completing R&D projects, differences in the degree of substitutability between different project types, and first/second-mover advantages affect their product development strategies. Several quantitative methods are investigated that analyze business and engineering strategies concurrently. In particular, formulations based on the well-established mathematical field of game theory are introduced to obtain insights into the project selection problem. The use of game theory is explored in this research as a method to assist the selection process of R&D projects in the presence of imperfect market information. The proposed methodology focuses on two influential factors: the schedule uncertainty of project completion times and

  6. Experimental validation of an integrated controls-structures design methodology for a class of flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliott, Kenny B.; Joshi, Suresh M.; Walz, Joseph E.

    1994-01-01

    This paper describes the first experimental validation of an optimization-based integrated controls-structures design methodology for a class of flexible space structures. The Controls-Structures-Interaction (CSI) Evolutionary Model, a laboratory test bed at Langley, is redesigned based on the integrated design methodology with two different dissipative control strategies. The redesigned structure is fabricated, assembled in the laboratory, and experimentally compared with the original test structure. Design guides are proposed and used in the integrated design process to ensure that the resulting structure can be fabricated. Experimental results indicate that the integrated design requires greater than 60 percent less average control power (by thruster actuators) than the conventional control-optimized design while maintaining the required line-of-sight performance, thereby confirming the analytical findings about the superiority of the integrated design methodology. Amenability of the integrated design structure to other control strategies is considered and evaluated analytically and experimentally. This work also demonstrates the capabilities of the Langley-developed design tool CSI DESIGN which provides a unified environment for structural and control design.

  7. Response surface methodology and process optimization of sustained release pellets using Taguchi orthogonal array design and central composite design

    PubMed Central

    Singh, Gurinder; Pai, Roopa S.; Devi, V. Kusum

    2012-01-01

    Furosemide is a powerful diuretic and antihypertensive drug which has low bioavailability due to hepatic first pass metabolism and a short half-life of 2 hours. To overcome these drawbacks, the present study was carried out to formulate and evaluate sustained release (SR) pellets of furosemide for oral administration prepared by extrusion/spheronization. Drug Coat L-100 was used within the pellet core along with microcrystalline cellulose as the diluent, and the concentration of the selected binder was optimized at 1.2%. The formulation was prepared with a drug to polymer ratio of 1:3. The process parameters were systematically optimized using a 3² central composite design combined with response surface methodology. Dissolution studies were carried out with USP apparatus Type I (basket type) at both simulated gastric and intestinal pH. Statistical analysis of the in vitro data using a two-tailed paired t-test and one-way ANOVA showed a significant (P ≤ 0.05) difference in the dissolution profile of furosemide SR pellets compared with the pure drug and a commercial product. Validation of the process optimization study indicated an extremely high degree of prognostic ability. The study effectively developed optimized process parameters for pelletization of furosemide pellets with strong SR characteristics. PMID:22470891

  8. Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera

    NASA Astrophysics Data System (ADS)

    Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund

    2016-03-01

    We have designed and implemented a novel acoustic-lens-based focusing technology in a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume are focused simultaneously by the lens onto an image plane. We use a multi-element ultrasound transducer array to capture the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware, is faster than electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex-vivo human prostate, kidney and thyroid specimens, each several centimeters in size, at millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used to simulate a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens, with medium properties defined at each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on a two-lens system aimed at further improving some of the quality metrics of our system.

  9. Emulating a System Dynamics Model with Agent-Based Models: A Methodological Case Study in Simulation of Diabetes Progression

    DOE PAGES Beta

    Schryver, Jack; Nutaro, James; Shankar, Mallikarjun

    2015-10-30

    An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, diabetes progression over an aggregated U.S. population was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. The four estimated models attempted to replicate stock counts representing disease states in the system dynamics model, while estimating impacts of an elderliness factor, an obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and the diffusion of social norms spreading over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translating complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.
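
    To make the behavioral component concrete, the sketch below is a minimal, hypothetical realization of the Theory of Planned Behavior update described above: each agent's behavioral intention is a weighted sum of a fixed attitude and a social-norm level that diffuses over its network. The class names, weights, ring topology, and diffusion rate are illustrative assumptions, not the paper's parameterization.

      import random

      class Agent:
          """Minimal agent for a Theory-of-Planned-Behavior-style update.

          `attitude` is a fixed individual disposition in [0, 1]; `norm` is a
          social-norm level diffused over the agent's network. The linear
          weighting below is an assumption for the sketch.
          """
          def __init__(self, attitude):
              self.attitude = attitude
              self.norm = 0.5
              self.neighbors = []

          def behavior(self, w_attitude=0.6, w_norm=0.4):
              # Behavioral intention as a joint function of attitude and norm.
              return w_attitude * self.attitude + w_norm * self.norm

      def diffuse_norms(agents, rate=0.1):
          # Each agent's norm relaxes toward the mean behavior of its neighbors.
          for a in agents:
              if a.neighbors:
                  local = sum(n.behavior() for n in a.neighbors) / len(a.neighbors)
                  a.norm += rate * (local - a.norm)

      # Toy run: a ring network of 100 agents, 50 diffusion steps.
      agents = [Agent(random.random()) for _ in range(100)]
      for i, a in enumerate(agents):
          a.neighbors = [agents[(i - 1) % 100], agents[(i + 1) % 100]]
      for _ in range(50):
          diffuse_norms(agents)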

  10. Emulating a System Dynamics Model with Agent-Based Models: A Methodological Case Study in Simulation of Diabetes Progression

    SciTech Connect

    Schryver, Jack; Nutaro, James; Shankar, Mallikarjun

    2015-10-30

    An agent-based simulation model hierarchy emulating disease states and behaviors critical to progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, diabetes progression over an aggregated U.S. population was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. The four estimated models attempted to replicate stock counts representing disease states in the system dynamics model, while estimating impacts of an elderliness factor, an obesity factor and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and the diffusion of social norms spreading over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translating complex system dynamics models into agent-based model alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.

  11. Demonstration of a software design and statistical analysis methodology with application to patient outcomes data sets

    PubMed Central

    Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard

    2013-01-01

    Purpose: With the emergence of clinical outcomes databases as tools utilized routinely within institutions comes the need for software tools to support automated statistical analysis of these large data sets, and for inter-institutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. Methods: A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code were evaluated using benchmark data sets. Results: The approach provides the data needed to evaluate combinations of statistical measurements for their ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operating characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. Conclusions: The work demonstrates the viability of the design approach and the software tool for analysis of large data sets. PMID:24320426
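
    As an illustration of the test battery described above, the sketch below applies the named filters (Fisher exact, Welch t, Kolmogorov-Smirnov) about a candidate threshold on synthetic data. The aggregation into a single pass/fail flag, the function names, and the assumption that the ROC-derived threshold is supplied externally are all simplifications for the sketch, not the published algorithm.

      import numpy as np
      from scipy import stats

      def dose_response_screen(dose, outcome, threshold, alpha=0.05):
          """Screen one variable for dose-response about a threshold."""
          low, high = outcome[dose <= threshold], outcome[dose > threshold]
          # 2x2 contingency table of event counts below/above the threshold.
          table = [[np.sum(low == 1), np.sum(low == 0)],
                   [np.sum(high == 1), np.sum(high == 0)]]
          _, p_fisher = stats.fisher_exact(table)
          # Welch t-test (unequal variances) on doses of events vs non-events.
          _, p_welch = stats.ttest_ind(dose[outcome == 1], dose[outcome == 0],
                                       equal_var=False)
          # Kolmogorov-Smirnov test on the same two dose distributions.
          _, p_ks = stats.ks_2samp(dose[outcome == 1], dose[outcome == 0])
          return {"fisher": p_fisher, "welch": p_welch, "ks": p_ks,
                  "flag": all(p < alpha for p in (p_fisher, p_welch, p_ks))}

      # Synthetic example: complication probability rises above 20 Gy.
      rng = np.random.default_rng(0)
      dose = rng.uniform(0, 40, 500)
      outcome = (rng.uniform(size=500) < np.where(dose > 20, 0.4, 0.1)).astype(int)
      print(dose_response_screen(dose, outcome, threshold=20.0))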

  12. The Component Packaging Problem: A Vehicle for the Development of Multidisciplinary Design and Analysis Methodologies

    NASA Technical Reports Server (NTRS)

    Fadel, Georges; Bridgewood, Michael; Figliola, Richard; Greenstein, Joel; Kostreva, Michael; Nowaczyk, Ronald; Stevenson, Steve

    1999-01-01

    This report summarizes academic research which has resulted in an increased appreciation for multidisciplinary efforts among our students, colleagues and administrators. It has also generated a number of research ideas that emerged from the interaction between disciplines. Overall, 17 undergraduate students and 16 graduate students benefited directly from the NASA grant; an additional 11 graduate students participated without financial support from NASA. The work resulted in 16 theses (with 7 to be completed in the near future) and 67 papers or reports, mostly published in 8 journals and/or presented at various conferences (a total of 83 papers, presentations and reports published based on NASA-inspired or NASA-supported work). In addition, the faculty and students presented related work at many meetings, and continuing work has been proposed to NSF, the Army, industry and other state and federal institutions to continue efforts in the direction of multidisciplinary and, recently, multi-objective design and analysis. The specific problem addressed is component packaging, which was solved as a multi-objective problem using iterative genetic algorithms and decomposition. Further testing and refinement of the methodology developed is presently under investigation. Research and classes on teaming issues resulted in the publication of a web site (http://design.eng.clemson.edu/psych4991) which provides pointers and techniques to interested parties. Specific advantages of using iterative genetic algorithms, hurdles faced and resolved, and institutional difficulties associated with multi-discipline teaming are described in some detail.

  13. Development of designer chicken shred with response surface methodology and evaluation of its quality characteristics.

    PubMed

    Reddy, K Jalarama; Jayathilakan, K; Pandey, M C

    2016-01-01

    Meat is considered to be an excellent source of protein, essential minerals, trace elements and vitamins, but concerns regarding meat consumption and its impact on human health have promoted research into the development of novel functional meat products. In the present study, rice bran oil (RBO) and flaxseed oil (FSO) were used to attain an ideal lipid profile in the product. The experiment was designed to optimise the RBO and FSO concentrations for development of a product with an ideal lipid profile and maximum acceptability by applying the central composite rotatable design of response surface methodology (RSM). Levels of RBO and FSO were taken as independent variables, and overall acceptability (OAA), n-6 and n-3 fatty acids as responses. A quadratic fit model was found to be suitable for optimising the product. The sample with RBO (20.51 ml) and FSO (2.57 ml) yielded an OAA score of 8.25, 29.54 % n-6 and 7.70 % n-3, with an n-6/n-3 ratio of 3.8:1. The optimised product was analysed for physico-chemical, sensory and microbial profile during storage at 4 ± 1 °C for 30 days. An increase in the lipid oxidative parameters was observed during storage, but the increase was not statistically significant at the p < 0.05 level. The study revealed great potential for developing functional poultry products with improved nutritional quality and good shelf stability by incorporating RBO and FSO.

  14. A Rapid Python-Based Methodology for Target-Focused Combinatorial Library Design.

    PubMed

    Li, Shiliang; Song, Yuwei; Liu, Xiaofeng; Li, Honglin

    2016-01-01

    The chemical space is so vast that only a small portion of it has been examined. As a complementary approach to systematically probing the chemical space, virtual combinatorial library design has had an enormous impact on generating novel and diverse structures for drug discovery. Despite these favorable contributions, high attrition rates in drug development, mainly resulting from lack of efficacy and side effects, make it increasingly challenging to discover good chemical starting points. In most cases, focused libraries, which are restricted to particular regions of the chemical space, are deftly exploited to maximize hit rate and improve efficiency at the beginning of the drug discovery and development pipeline. This paper presents a valid methodology for fast target-focused combinatorial library design in both reaction-based and product-based ways, with library creation rates of approximately 70,000 molecules per second. Simple, quick and convenient operating procedures are the specific features of the method. SHAFTS, a hybrid 3D similarity calculation software, was embedded to help refine the size of the libraries and improve hit rates. Two target-focused (p38-focused and COX2-focused) libraries were constructed efficiently in this study. This rapid library enumeration method is portable and applicable to any other target for identification of good chemical starting points, in combination with either structure-based or ligand-based virtual screening.
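
    The product-based mode of enumeration can be pictured as a cross-product over substituent pools; the toy sketch below is a minimal illustration with hypothetical fragment strings and a placeholder scaffold (a real implementation would assemble and sanitize molecules with a cheminformatics toolkit such as RDKit).

      from itertools import product

      # Hypothetical substituent pools; real libraries would use curated
      # SMILES fragments filtered against the target-focused criteria.
      r1_groups = ["C", "CC", "c1ccccc1", "CO"]
      r2_groups = ["N", "NC", "OC(=O)C"]
      scaffold = "R1-core-R2"  # placeholder for a Markush scaffold

      def enumerate_library(r1_pool, r2_pool):
          # Product-based enumeration: full cross-product of substituent pools.
          for r1, r2 in product(r1_pool, r2_pool):
              yield scaffold.replace("R1", r1).replace("R2", r2)

      library = list(enumerate_library(r1_groups, r2_groups))
      print(len(library), "virtual products")  # 4 x 3 = 12 in this toy case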

  15. Applications of a damage tolerance analysis methodology in aircraft design and production

    NASA Technical Reports Server (NTRS)

    Woodward, M. R.; Owens, S. D.; Law, G. E.; Mignery, L. A.

    1992-01-01

    Objectives of customer mandated aircraft structural integrity initiatives in design are to guide material selection, to incorporate fracture resistant concepts in the design, to utilize damage tolerance based allowables and planned inspection procedures necessary to enhance the safety and reliability of manned flight vehicles. However, validated fracture analysis tools for composite structures are needed to accomplish these objectives in a timely and economical manner. This paper briefly describes the development, validation, and application of a damage tolerance methodology for composite airframe structures. A closed-form analysis code, entitled SUBLAM was developed to predict the critical biaxial strain state necessary to cause sublaminate buckling-induced delamination extension in an impact damaged composite laminate. An embedded elliptical delamination separating a thin sublaminate from a thick parent laminate is modelled. Predicted failure strains were correlated against a variety of experimental data that included results from compression after impact coupon and element tests. An integrated analysis package was developed to predict damage tolerance based margin-of-safety (MS) using NASTRAN generated loads and element information. Damage tolerance aspects of new concepts are quickly and cost-effectively determined without the need for excessive testing.

  16. Towards a unified approach to the design of knowledge based agile manufacturing systems: Part 1 - methodology

    SciTech Connect

    Jones, A.H.; Uzam, M.

    1996-12-31

    To date, there are no general techniques available to design Knowledge Based Discrete Event Control Systems. In this paper, a new technique is proposed which solves the problem. The generality of the technique means that the method can be applied to any complex (multi-component) Discrete Event Control problem and can easily accommodate diagnostics and reconfiguration. The technique involves, firstly, defining the complex Discrete Event Control system as a colored Petri net controller; then converting the colored Petri net controller into a colored Token Passing Logic Controller via the Token Passing Logic (TPL) technique; and finally, representing the colored Token Passing Logic Controller as rules within a control knowledge base for use within a concurrent inference engine. The technique is described by considering the fundamental structures inherent in colored Petri net control design and shows how to convert these structures into a knowledge base suitable for Discrete Event Control. Moreover, a context-sensitive concurrent inference engine is also proposed to ensure the correct processing of the control knowledge base. An illustrative example of how this methodology can be applied to a complex discrete event control problem is described in Part II.

  17. Cirrhosis progression as a model of accelerated senescence: affecting the biological aging clock by a breakthrough biophysical methodology.

    PubMed

    Marineo, G; Marotta, F; Sisti, G

    2004-06-01

    To test new treatment modalities, a pilot study was designed using a novel noninvasive biophysical methodology (Delta-S DVD) that can artificially exert a "decrease of entropy" through the patented electromagnetic-driven delivery of "energy clusters". This process is modulated and integrated by the body as a "self" source to support energy-dependent functional stores, thus shifting reparative mechanisms of the liver parenchyma toward regenerative ones. Seven long-standing hepatitis C virus-positive (Child A-B) cirrhosis patients with overt symptoms and portal hypertension, and with failure of or side effects from antiviral drug treatment, underwent 40-min sessions of Delta-S DVD daily for six months and were followed up monthly. At the end of the first month, rapid improvement of symptoms and a decrease of portal hypertension were noted. At the end of treatment, all patients showed either complete (80%) or partial (20%) regression of fatigue (FISK score), peripheral edema, pruritus, and palmar erythema. Despite beta-blockers having been stopped, F1 esophageal varices disappeared (60%), whereas F2 varices decreased to F1. The Doppler ultrasound appearance of partial (40%) or total (20%) atrophy was either reduced (60%) or reverted to normal (20%), and the respiratory dynamics of the portal vein improved (80%) or normalized (20%), whereas gross scarring nodules disappeared in 40% of cases. These promising data pave the way for an innovative physiopathological approach with extensive clinical applications.

  18. A methodology for system-of-systems design in support of the engineering team

    NASA Astrophysics Data System (ADS)

    Ridolfi, G.; Mooij, E.; Cardile, D.; Corpino, S.; Ferrari, G.

    2012-04-01

    Space missions have experienced a trend of increasing complexity in the last decades, resulting in the design of very complex systems formed by many elements and sub-elements working together to meet the requirements. In a classical approach, especially in a company environment, the two steps of design-space exploration and optimization are usually performed by experts inferring on major phenomena, making assumptions and doing some trial-and-error runs on the available mathematical models. This is done especially in the very early design phases, where most of the costs are locked in. With the objective of supporting the engineering team and the decision-makers during the design of complex systems, the authors developed a modelling framework for a particular category of complex, coupled space systems called System-of-Systems. Once modelled, the System-of-Systems is solved using a computationally cheap parametric methodology, named the mixed-hypercube approach, based on the utilization of a particular type of fractional factorial design-of-experiments, and analysis of the results via global sensitivity analysis and response surfaces. As an applicative example, a system-of-systems of a hypothetical human space exploration scenario for the support of a manned lunar base is presented. The results demonstrate that, using the mixed-hypercube to sample the design space, an optimal solution is reached with a limited computational effort, providing support to the engineering team and decision-makers thanks to sensitivity and robustness information. The analysis of the system-of-systems model that was implemented shows that the logistic support of a human outpost on the Moon for 15 years is still feasible with currently available launcher classes. The results presented in this paper have been obtained in cooperation with Thales Alenia Space—Italy, in the framework of STEPS (Sistemi e Tecnologie per l'EsPlorazione Spaziale), a regional research programme.
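
    The mixed-hypercube approach itself is the authors' method; as a generic illustration of its ingredients, the sketch below samples a half-fraction two-level factorial design, evaluates a stand-in model, and fits a first-order response surface whose coefficient magnitudes serve as a crude sensitivity ranking. The factor count, the generator, and the toy model are assumptions for the sketch.

      import numpy as np

      # Half fraction of a 2^3 factorial, generator C = AB (4 runs, 3 factors).
      base = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
      design = np.hstack([base, (base[:, 0] * base[:, 1])[:, None]])

      def black_box(x):
          # Stand-in for an expensive system-of-systems model evaluation.
          a, b, c = x
          return 3.0 + 1.5 * a - 2.0 * b + 0.5 * c + 0.8 * a * b

      y = np.array([black_box(run) for run in design])

      # First-order response surface y ~ b0 + b1*A + b2*B + b3*C by least
      # squares; note that in this half fraction the AB interaction is
      # aliased with C, so the C coefficient absorbs it (0.5 + 0.8 = 1.3).
      X = np.hstack([np.ones((len(design), 1)), design])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      print(dict(zip(["b0", "A", "B", "C"], np.round(coef, 3))))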

  19. "Filming in Progress": New Spaces for Multimodal Designing

    ERIC Educational Resources Information Center

    Mills, Kathy A.

    2010-01-01

    Global trends call for new research to investigate multimodal designing mediated by new technologies and the implications for classroom spaces. This article addresses the relationship between new technologies, students' multimodal designing, and the social production of classroom spaces. Multimodal semiotics and sociological principles are applied…

  20. Progress and prospects for an FI relevant point design

    SciTech Connect

    Key, M; Amendt, P; Bellei, C; Clark, D; Cohen, B; Divol, L; Ho, D; Kemp, A; Larson, D; Marinak, M; Patel, P; Shay, H; Strozzi, D; Tabak, M

    2011-11-02

    The physics issues involved in scaling from sub-ignition to high-gain fast ignition are discussed. Successful point designs must collimate the electrons and minimize the stand-off distance to avoid multi-megajoule ignition energies. Collimating B-field configurations are identified and some initial designs are explored.

  1. Perspectives on the design and methodology of periconceptional nutrient supplementation trials.

    PubMed

    Brabin, Bernard J; Gies, Sabine; Owens, Stephen; Claeys, Yves; D'Alessandro, Umberto; Tinto, Halidou; Brabin, Loretta

    2016-01-01

    Periconceptional supplementation could extend the period over which maternal and fetal nutrition is improved, but there are many challenges facing early-life intervention studies. Periconceptional trials differ from pregnancy supplementation trials, not only because of the very early or pre-gestational timing of nutrient exposure but also because they generate subsidiary information on participants who remain non-pregnant. The methodological challenges are more complex although, if well designed, they provide opportunities to evaluate concurrent hypotheses related to the health of non-pregnant women, especially nulliparous adolescents. This review examines the framework of published and ongoing randomised trial designs. Four cohorts typically arise from the periconceptional trial design--two of which are non-pregnant and two are pregnant--and this structure provides assessment options related to pre-pregnant, maternal, pregnancy and fetal outcomes. Conceptually, the initial decision for a single or multiple micronutrient intervention is central--as is the choice of dosage and content--in order to establish a comparative framework across trials, improve standardisation, and facilitate interpretation of mechanistic hypotheses. Other trial features considered in the review include: measurement options for baseline and outcome assessments; adherence to long-term supplementation; sample size considerations in relation to duration of nutrient supplementation; cohort size for non-pregnant and pregnant cohorts, as the latter is influenced by parity selection; integration of qualitative studies; and data management issues. Emphasis is given to low-resource settings where high infection rates and the possibility of nutrient-infection interactions may require appropriate safety monitoring. The focus is on pragmatic issues that may help investigators planning a periconceptional trial. PMID:26833080

  2. My Interventional Drug-Eluting Stent Educational App (MyIDEA): Patient-Centered Design Methodology

    PubMed Central

    Shroff, Adhir; Groo, Vicki; Dickens, Carolyn; Field, Jerry; Baumann, Matthew; Welland, Betty; Gutowski, Gerry; Flores Jr, Jose D; Zhao, Zhongsheng; Bahroos, Neil; Hynes, Denise M; Wilkie, Diana J

    2015-01-01

    Background Patient adherence to medication regimens is critical in most chronic disease treatment plans. This study uses a patient-centered tablet app, “My Interventional Drug-Eluting Stent Educational App (MyIDEA).” This is an educational program designed to improve patient medication adherence. Objective Our goal is to describe the design, methodology, limitations, and results of the MyIDEA tablet app. We created a mobile technology-based patient education app to improve dual antiplatelet therapy adherence in patients who underwent a percutaneous coronary intervention and received a drug-eluting stent. Methods Patient advisers were involved in the development process of MyIDEA from the initial wireframe to the final launch of the product. The program was restructured and redesigned based on the patient advisers’ suggestions as well as those from multidisciplinary team members. To accommodate those with low health literacy, we modified the language and employed attractive color schemes to improve ease of use. We assumed that the target patient population may have little to no experience with electronic tablets, and therefore, we designed the interface to be as intuitive as possible. Results The MyIDEA app has been successfully deployed to a low-health-literate elderly patient population in the hospital setting. A total of 6 patients have interacted with MyIDEA for an average of 17.6 minutes/session. Conclusions Including patient advisers in the early phases of a mobile patient education development process is critical. A number of changes in text order, language, and color schemes occurred to improve ease of use. The MyIDEA program has been successfully deployed to a low-health-literate elderly patient population. Leveraging patient advisers throughout the development process helps to ensure implementation success. PMID:26139587

  3. Progress in the planar CPn SOFC system design verification

    SciTech Connect

    Elangovan, S.; Hartvigsen, J.; Khandkar, A.

    1996-04-01

    SOFCo is developing a high-efficiency, modular and scaleable planar SOFC module termed the CPn design. This design has been verified in a 1.4 kW module test operated directly on pipeline natural gas. The design features multistage oxidation of fuel, wherein the fuel is consumed incrementally over several stages. High efficiency is achieved by a uniform current density distribution per stage, which lowers the stack resistance. Additional benefits include thermal regulation and compactness. Test results from stack modules operating on pipeline natural gas are presented.

  4. Flexible energy-storage devices: design consideration and recent progress.

    PubMed

    Wang, Xianfu; Lu, Xihong; Liu, Bin; Chen, Di; Tong, Yexiang; Shen, Guozhen

    2014-07-23

    Flexible energy-storage devices are attracting increasing attention as they offer unique and promising advantages, such as flexibility, shape diversity, and light weight; these properties enable applications in portable, flexible, and even wearable electronic devices, including soft electronic products, roll-up displays, and wearable devices. Consequently, considerable effort has been made in recent years to fulfill the requirements of future flexible energy-storage devices, and much progress has been witnessed. This review describes the most recent advances in flexible energy-storage devices, including flexible lithium-ion batteries and flexible supercapacitors. The latest successful examples in flexible lithium-ion batteries and their technological innovations and challenges are reviewed first. This is followed by a detailed overview of the recent progress in flexible supercapacitors based on carbon materials and a number of composites and flexible micro-supercapacitors. Some of the latest achievements regarding interesting integrated energy-storage systems are also reviewed. Further research directions are also proposed to surpass existing technological bottlenecks and realize idealized flexible energy-storage devices. PMID:24913891

  5. Flexible energy-storage devices: design consideration and recent progress.

    PubMed

    Wang, Xianfu; Lu, Xihong; Liu, Bin; Chen, Di; Tong, Yexiang; Shen, Guozhen

    2014-07-23

    Flexible energy-storage devices are attracting increasing attention as they offer unique and promising advantages, such as flexibility, shape diversity, and light weight; these properties enable applications in portable, flexible, and even wearable electronic devices, including soft electronic products, roll-up displays, and wearable devices. Consequently, considerable effort has been made in recent years to fulfill the requirements of future flexible energy-storage devices, and much progress has been witnessed. This review describes the most recent advances in flexible energy-storage devices, including flexible lithium-ion batteries and flexible supercapacitors. The latest successful examples in flexible lithium-ion batteries and their technological innovations and challenges are reviewed first. This is followed by a detailed overview of the recent progress in flexible supercapacitors based on carbon materials and a number of composites and flexible micro-supercapacitors. Some of the latest achievements regarding interesting integrated energy-storage systems are also reviewed. Further research directions are also proposed to surpass existing technological bottlenecks and realize idealized flexible energy-storage devices.

  6. [Research progress of probe design software of oligonucleotide microarrays].

    PubMed

    Chen, Xi; Wu, Zaoquan; Liu, Zhengchun

    2014-02-01

    DNA microarrays have become an essential medical genetic diagnostic tool owing to their high throughput, miniaturization and automation. The design and selection of oligonucleotide probes are critical for preparing gene chips of high quality. Several sets of probe design software have been developed and are now available to perform this work. Each set of software aims at different target sequences and shows different advantages and limitations. In this article, the research and development of these sets of software are reviewed in line with three main criteria: specificity, sensitivity and melting temperature (Tm). In addition, based on experimental results from the literature, these sets of software are classified according to their applications. This review will be helpful for users choosing an appropriate probe-design software. It will also reduce the costs of microarrays, improve the application efficiency of microarrays, and promote both the research and development (R&D) and commercialization of high-performance probe design software.
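
    Of the review's three criteria, Tm is the simplest to illustrate. The sketch below uses the classic Wallace rule, a standard quick estimate for short oligonucleotides; the uniform Tm window and the probe strings are invented for illustration, and production probe design software relies on nearest-neighbor thermodynamic models instead.

      def wallace_tm(probe: str) -> float:
          """Wallace rule: Tm ~ 2 °C per A/T plus 4 °C per G/C.

          A screening heuristic for short oligonucleotides (< ~14 nt).
          """
          s = probe.upper()
          return 2.0 * (s.count("A") + s.count("T")) + \
                 4.0 * (s.count("G") + s.count("C"))

      def tm_filter(probes, lo=34.0, hi=40.0):
          # Keep candidates whose estimated Tm falls in a uniform window,
          # one of the three criteria (with specificity and sensitivity).
          return [p for p in probes if lo <= wallace_tm(p) <= hi]

      print(tm_filter(["ATGCATGCATGC", "GGGCCCGGGCCC", "ATATATATATAT"]))
      # -> ['ATGCATGCATGC']  (Tm = 36 °C; the others fall outside the window)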

  7. Design of clinical trials in acute kidney injury: report from an NIDDK workshop on trial methodology.

    PubMed

    Palevsky, Paul M; Molitoris, Bruce A; Okusa, Mark D; Levin, Adeera; Waikar, Sushrut S; Wald, Ron; Chertow, Glenn M; Murray, Patrick T; Parikh, Chirag R; Shaw, Andrew D; Go, Alan S; Faubel, Sarah G; Kellum, John A; Chinchilli, Vernon M; Liu, Kathleen D; Cheung, Alfred K; Weisbord, Steven D; Chawla, Lakhmir S; Kaufman, James S; Devarajan, Prasad; Toto, Robert M; Hsu, Chi-yuan; Greene, Tom; Mehta, Ravindra L; Stokes, John B; Thompson, Aliza M; Thompson, B Taylor; Westenfelder, Christof S; Tumlin, James A; Warnock, David G; Shah, Sudhir V; Xie, Yining; Duggan, Emily G; Kimmel, Paul L; Star, Robert A

    2012-05-01

    Acute kidney injury (AKI) remains a complex clinical problem associated with significant short-term morbidity and mortality and lacking effective pharmacologic interventions. Patients with AKI experience longer-term risks for progressive chronic kidney disease, including end-stage renal disease, which diminish patients' health-related quality of life and create a larger burden on the healthcare system. Although experimental models have yielded numerous promising agents, translation into clinical practice has been unsuccessful, possibly because of issues in clinical trial design, such as delayed drug administration, masking of therapeutic benefit by adverse events, and inadequate sample size. To address issues of clinical trial design, the National Institute of Diabetes and Digestive and Kidney Diseases sponsored a workshop titled "Clinical Trials in Acute Kidney Injury: Current Opportunities and Barriers" in December 2010. Workshop participants included representatives from academia, industry, and government agencies whose areas of expertise spanned basic science, clinical nephrology, critical care medicine, biostatistics, pharmacology, and drug development. This document summarizes the discussions of collaborative workgroups that addressed issues related to patient selection, study endpoints, the role of novel biomarkers, sample size and power calculations, and adverse events and pilot/feasibility studies in prevention and treatment of AKI. Companion articles outline the discussions of workgroups for model trials related to prevention or treatment of established AKI in different clinical settings, such as in patients with sepsis.

  8. Progress Toward Efficient Laminar Flow Analysis and Design

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Campbell, Matthew L.; Streit, Thomas

    2011-01-01

    A multi-fidelity system of computer codes for the analysis and design of vehicles having extensive areas of laminar flow is under development at the NASA Langley Research Center. The overall approach consists of the loose coupling of a flow solver, a transition prediction method and a design module using shell scripts, along with interface modules to prepare the input for each method. This approach allows the user to select the flow solver and transition prediction module, as well as run mode for each code, based on the fidelity most compatible with the problem and available resources. The design module can be any method that designs to a specified target pressure distribution. In addition to the interface modules, two new components have been developed: 1) an efficient, empirical transition prediction module (MATTC) that provides n-factor growth distributions without requiring boundary layer information; and 2) an automated target pressure generation code (ATPG) that develops a target pressure distribution that meets a variety of flow and geometry constraints. The ATPG code also includes empirical estimates of several drag components to allow the optimization of the target pressure distribution. The current system has been developed for the design of subsonic and transonic airfoils and wings, but may be extendable to other speed ranges and components. Several analysis and design examples are included to demonstrate the current capabilities of the system.

  9. Progress and Design Status of the ITER MSE Diagnostic

    SciTech Connect

    Makowski, M A; Allen, S L; Holcomb, C T; Lerner, S; Morris, K; Wong, N

    2008-05-07

    The Motional Stark Effect (MSE) diagnostic will be essential for the study of advanced scenarios on ITER, and its design is currently underway. In order to meet the ITER MSE diagnostic design requirements, two approaches for the measurement are under consideration. The first is based on standard polarimeter techniques to measure the polarization of the emitted light, whereas the second measures the Stark splitting from which |B|, the magnitude of the total magnetic field, can be inferred. The baseline design of the optical system is centered on the first approach. Emphasis in this case is placed on minimizing the polarization aberrations of the optical relay system. Motivation for the second method results from concern that the optical properties of the plasma-facing mirror, particularly its diattenuation and retardance, will degrade with plasma exposure. The second approach, while less sensitive to aberrations induced by plasma exposure effects, requires greater optical throughput in order to measure the complete Stark spectrum. We have developed optimized designs for both techniques; we present a comparison of them and discuss the associated design trade-offs.

  10. Hybrid intelligent methodology to design translation invariant morphological operators for Brazilian stock market prediction.

    PubMed

    Araújo, Ricardo de A

    2010-12-01

    This paper presents a hybrid intelligent methodology to design increasing translation invariant morphological operators applied to Brazilian stock market prediction (overcoming the random walk dilemma). The proposed Translation Invariant Morphological Robust Automatic phase-Adjustment (TIMRAA) method consists of a hybrid intelligent model composed of a Modular Morphological Neural Network (MMNN) with a Quantum-Inspired Evolutionary Algorithm (QIEA), which searches for the best time lags to reconstruct the phase space of the time series generator phenomenon and determines the initial (sub-optimal) parameters of the MMNN. Each individual of the QIEA population is further trained by the Back Propagation (BP) algorithm to improve the MMNN parameters supplied by the QIEA. Also, for each prediction model generated, it uses a behavioral statistical test and a phase fix procedure to adjust time phase distortions observed in stock market time series. Furthermore, an experimental analysis is conducted with the proposed method through four Brazilian stock market time series, and the achieved results are discussed and compared to results found with random walk models and the previously introduced Time-delay Added Evolutionary Forecasting (TAEF) and Morphological-Rank-Linear Time-lag Added Evolutionary Forecasting (MRLTAEF) methods.

  11. Online intelligent controllers for an enzyme recovery plant: design methodology and performance.

    PubMed

    Leite, M S; Fujiki, T L; Silva, F V; Fileti, A M F

    2010-12-27

    This paper focuses on the development of intelligent controllers for use in a process of enzyme recovery from pineapple rind. The proteolytic enzyme bromelain (EC 3.4.22.4) is precipitated with alcohol at low temperature in a fed-batch jacketed tank. Temperature control is crucial to avoid irreversible protein denaturation. Fuzzy or neural controllers offer a way of implementing solutions that cover dynamic and nonlinear processes. The design methodology and a comparative study on the performance of fuzzy-PI, neurofuzzy, and neural network intelligent controllers are presented. To tune the fuzzy PI Mamdani controller, various universes of discourse, rule bases, and membership function support sets were tested. A neurofuzzy inference system (ANFIS), based on Takagi-Sugeno rules, and a model predictive controller, based on neural modeling, were developed and tested as well. Using a Fieldbus network architecture, a coolant variable speed pump was driven by the controllers. The experimental results show the effectiveness of fuzzy controllers in comparison to the neural predictive control. The fuzzy PI controller exhibited a reduced error parameter (ITAE), lower power consumption, and better recovery of enzyme activity.
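
    For a flavor of how such a fuzzy PI controller is assembled, the sketch below implements one incremental fuzzy PI step with triangular memberships and singleton consequents. The linguistic terms, universes of discourse, and rule table are invented for illustration; they are not the tuned controller from the study, which used a Mamdani formulation and Fieldbus hardware.

      import numpy as np

      def tri(x, a, b, c):
          # Triangular membership function with peak at b.
          return max(min((x - a) / (b - a + 1e-12),
                         (c - x) / (c - b + 1e-12)), 0.0)

      # Linguistic terms for error (°C, T - T_set) and change of error;
      # the ranges are assumptions for the sketch.
      E_TERMS = {"neg": (-4, -2, 0), "zero": (-1, 0, 1), "pos": (0, 2, 4)}
      DE_TERMS = {"neg": (-2, -1, 0), "zero": (-0.5, 0, 0.5), "pos": (0, 1, 2)}

      # Rule base: (error, d_error) -> singleton change in pump speed (%).
      RULES = {("neg", "neg"): -20, ("neg", "zero"): -10, ("neg", "pos"): 0,
               ("zero", "neg"): -5, ("zero", "zero"): 0, ("zero", "pos"): 5,
               ("pos", "neg"): 0, ("pos", "zero"): 10, ("pos", "pos"): 20}

      def fuzzy_pi_step(error, d_error):
          """One evaluation of the incremental fuzzy PI controller."""
          num = den = 0.0
          for (e_t, de_t), du in RULES.items():
              w = min(tri(error, *E_TERMS[e_t]), tri(d_error, *DE_TERMS[de_t]))
              num += w * du
              den += w
          return num / den if den > 0 else 0.0

      # Tank 1 °C above setpoint and drifting up: raise coolant pump speed.
      print(f"delta u = {fuzzy_pi_step(1.0, 0.3):+.1f} %")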

  12. A methodology for using nonlinear aerodynamics in aeroservoelastic analysis and design

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1991-01-01

    A methodology is presented for using the Volterra-Wiener theory of nonlinear systems in aeroservoelastic (ASE) analyses and design. The theory is applied to the development of nonlinear aerodynamic response models that can be defined in state-space form and are, therefore, appropriate for use in modern control theory. The theory relies on the identification of nonlinear kernels that can be used to predict the response of a nonlinear system to an arbitrary input. A numerical kernel identification technique, based on unit impulse responses, is presented and applied to a simple bilinear, single-input single-output (SISO) system. The linear kernel (unit impulse response) and the nonlinear second-order kernel of the system are numerically identified and compared with the exact, analytically defined linear and second-order kernels. This kernel identification technique is then applied to the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) code for identification of the linear and second-order kernels of a NACA64A010 rectangular wing undergoing pitch at M = 0.5, M = 0.85 (transonic), and M = 0.93 (transonic). The results presented demonstrate the feasibility of this approach for use with nonlinear, unsteady aerodynamic responses.
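
    A minimal numerical illustration of impulse-response-based kernel identification, in the spirit of the technique described: the bilinear test system, the amplitudes, and the memory length are assumptions of the sketch, and third- and higher-order kernel contributions are neglected, which holds only for small input amplitudes.

      import numpy as np

      # Toy bilinear SISO system: x[k+1] = a*x[k] + n_bl*x[k]*u[k] + b*u[k].
      a, n_bl, b, c = 0.8, 0.3, 1.0, 1.0

      def simulate(u):
          x, y = 0.0, np.zeros(len(u))
          for k, uk in enumerate(u):
              y[k] = c * x
              x = a * x + n_bl * x * uk + b * uk
          return y

      N = 32      # memory length of the identified kernels
      eps = 0.1   # small amplitude so higher-order terms are negligible

      def impulse(times, amp):
          u = np.zeros(N)
          for t in times:
              u[t] += amp
          return u

      # Single impulses at two amplitudes separate h1 from the h2 diagonal:
      # y_amp[n] ~= amp*h1[n] + amp^2*h2[n, n].
      y_e, y_2e = simulate(impulse([0], eps)), simulate(impulse([0], 2 * eps))
      h2_diag = (y_2e - 2.0 * y_e) / (2.0 * eps ** 2)
      h1 = y_e / eps - eps * h2_diag

      # Paired impulses at lags 0 and k recover the off-diagonal h2 terms:
      # y_pair - y_0 - y_k ~= 2*eps^2*h2[n, n-k].
      h2 = np.diag(h2_diag)
      for k in range(1, N):
          cross = (simulate(impulse([0, k], eps)) - y_e
                   - simulate(impulse([k], eps))) / (2.0 * eps ** 2)
          for m in range(k, N):
              h2[m, m - k] = h2[m - k, m] = cross[m]

      print("h1[:5]      =", np.round(h1[:5], 4))
      print("h2 diagonal =", np.round(h2_diag[:5], 4))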

  13. Methodological trends in the design of recent microenvironmental studies of personal CO exposure

    NASA Astrophysics Data System (ADS)

    Flachsbart, Peter G.

    This paper describes the designs of three recent microenvironmental studies of personal exposure to carbon monoxide (CO) from motor vehicle exhaust. These studies were conducted sequentially, first in four California cities (Los Angeles, Mountain View, Palo Alto, and San Francisco), then in Honolulu, and, most recently, in metropolitan Washington, D.C. Though study purposes differed, each study faced common methodological issues related to personal exposure monitors (PEMs), quality assurance and data collection procedures, and the selection of microenvironments for study. Two major objectives of the California cities study were to determine the CO concentrations typically found in commercial settings and to define and classify microenvironments applicable to such settings. The Honolulu study measured merchant exposure to CO in shopping centers attached to semienclosed parking garages during business hours, and commuter exposure to CO in vehicles (passenger cars and buses) on congested roadways during peak periods. The intent of the Washington study was to develop a model of commuter exposure to motor vehicle exhaust using CO as an indicator pollutant. Certain trends are discernible from reviewing the three studies. There are clear trends in PEM development that have expanded instrument capabilities and automated data collection and storage. There are also trends towards more rigorous quality assurance procedures and more standardized protocols for collecting exposure data. Further, one can see a trend towards more elaborate indicators for identifying microenvironments for study. Finally, there is a trend towards using personal monitors in public policy review and evaluation.

  14. A methodology for formulating a minimal uncertainty model for robust control system design and analysis

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert

    1989-01-01

    In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of it addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since having a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.
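
    For reference, the M-delta structure discussed above is conventionally written as the feedback interconnection below (a standard statement from the robust control literature, with block sizes chosen to match the uncertainty being modeled):

        \begin{bmatrix} z \\ y \end{bmatrix} =
        \begin{bmatrix} M_{11}(s) & M_{12}(s) \\ M_{21}(s) & M_{22}(s) \end{bmatrix}
        \begin{bmatrix} w \\ u \end{bmatrix},
        \qquad w = \Delta z,
        \qquad \Delta = \mathrm{diag}\left( \delta_1 I_{r_1}, \ldots, \delta_s I_{r_s}, \Delta_1, \ldots, \Delta_f \right)

    with real scalars \delta_i (each repeated r_i times) representing parameter variations and full blocks \Delta_j representing unmodeled dynamics; a minimal model is one in which the repetition orders r_i, and hence the dimension of \Delta, are as small as possible.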

  15. Design, Development and Optimization of S (-) Atenolol Floating Sustained Release Matrix Tablets Using Surface Response Methodology

    PubMed Central

    Gunjal, P. T.; Shinde, M. B.; Gharge, V. S.; Pimple, S. V.; Gurjar, M. K.; Shah, M. N.

    2015-01-01

    The objective of the present investigation was to develop and formulate floating sustained release matrix tablets of s (-) atenolol using different polymer combinations and filler, to optimize them using surface response methodology for different drug release variables, and to evaluate the drug release pattern of the optimized product. Floating sustained release matrix tablets of various combinations were prepared with cellulose-based polymers: hydroxypropyl methylcellulose, sodium bicarbonate as a gas generating agent, polyvinyl pyrrolidone as a binder, and lactose monohydrate as filler. A 3² full factorial design was employed to investigate the effect of formulation variables on different properties of the tablets, namely floating lag time, buoyancy time, % drug release in 1 and 6 h (D1 h, D6 h), and time required for 90% drug release (t90%). Significance of results was analyzed using analysis of variance, and P < 0.05 was considered statistically significant. S (-) atenolol floating sustained release matrix tablets followed Higuchi drug release kinetics, indicating that drug release follows an anomalous (non-Fickian) diffusion mechanism. The developed floating sustained release matrix tablet of improved efficacy can perform therapeutically better than a conventional tablet. PMID:26798171
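
    Since the release data were fitted to Higuchi kinetics, a minimal illustration of that fit is sketched below: cumulative release is regressed on the square root of time, and the fitted constant gives a model-based t90%. The release values are synthetic and for illustration only.

      import numpy as np

      # Higuchi model: cumulative release Q(t) = kH * sqrt(t).
      # Synthetic release data (time in hours, Q in %), illustration only.
      t = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)
      Q = np.array([18.0, 26.1, 36.5, 45.2, 52.3, 57.9, 63.8])

      # Least-squares slope through the origin on the sqrt(t) axis
      # (minimize sum (Q - kH*sqrt(t))^2, noting (sqrt(t))^2 = t).
      kH = np.sum(np.sqrt(t) * Q) / np.sum(t)
      Q_hat = kH * np.sqrt(t)
      r2 = 1 - np.sum((Q - Q_hat) ** 2) / np.sum((Q - Q.mean()) ** 2)
      print(f"kH = {kH:.2f} %/h^0.5, R^2 = {r2:.4f}")

      # Time to 90% release under the fitted model: t90 = (90 / kH)^2.
      print(f"t90% = {(90.0 / kH) ** 2:.1f} h")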

  16. Application-specific coarse-grained reconfigurable array: architecture and design methodology

    NASA Astrophysics Data System (ADS)

    Zhou, Li; Liu, Dongpei; Zhang, Jianfeng; Liu, Hengzhu

    2015-06-01

    Coarse-grained reconfigurable arrays (CGRAs) have shown potential for application in embedded systems in recent years. Numerous reconfigurable processing elements (PEs) in CGRAs provide flexibility while maintaining high performance by exploiting different levels of parallelism. However, a performance and efficiency gap remains between CGRAs and application-specific integrated circuits (ASICs). Some application domains, such as software-defined radios (SDRs), require flexibility along with increasing performance demands, so more effective CGRA architectures are expected. Customising a CGRA for its application can improve performance and efficiency. This study proposes an application-specific CGRA architecture template composed of generic PEs (GPEs) and special PEs (SPEs). The hardware of the SPE can be customised to accelerate specific computational patterns. An automatic design methodology that includes pattern identification and application-specific function unit generation is also presented. A mapping algorithm based on ant colony optimisation is provided. Experimental results on the SDR target domain show that, compared with other ordinary and application-specific reconfigurable architectures, the CGRA generated by the proposed method performs more efficiently for the given applications.

  17. Design, Development and Optimization of S (-) Atenolol Floating Sustained Release Matrix Tablets Using Surface Response Methodology.

    PubMed

    Gunjal, P T; Shinde, M B; Gharge, V S; Pimple, S V; Gurjar, M K; Shah, M N

    2015-01-01

    The objective of the present investigation was to develop and formulate floating sustained release matrix tablets of s (-) atenolol using different polymer combinations and filler, to optimize them using surface response methodology for different drug release variables, and to evaluate the drug release pattern of the optimized product. Floating sustained release matrix tablets of various combinations were prepared with cellulose-based polymers: hydroxypropyl methylcellulose, sodium bicarbonate as a gas generating agent, polyvinyl pyrrolidone as a binder, and lactose monohydrate as filler. A 3² full factorial design was employed to investigate the effect of formulation variables on different properties of the tablets, namely floating lag time, buoyancy time, % drug release in 1 and 6 h (D1 h, D6 h), and time required for 90% drug release (t90%). Significance of results was analyzed using analysis of variance, and P < 0.05 was considered statistically significant. S (-) atenolol floating sustained release matrix tablets followed Higuchi drug release kinetics, indicating that drug release follows an anomalous (non-Fickian) diffusion mechanism. The developed floating sustained release matrix tablet of improved efficacy can perform therapeutically better than a conventional tablet. PMID:26798171

  18. Methodological and ethical considerations in designing an Internet study of quality of life: a discussion paper.

    PubMed

    Holmes, Susan

    2009-03-01

    Use of the Internet in research is a relatively new phenomenon offering a potentially valuable research resource that, although increasingly used, appears largely untapped in nursing and healthcare more generally. This paper discusses methodological and ethical issues that need consideration when designing an Internet-based study, concluding that, in general, online research methods are simply adaptations of traditional methods of data collection. Issues such as the representativeness of the data and ethical concerns are discussed. It considers whether the ethical dilemmas faced by online researchers differ from those faced by researchers using other, more 'traditional' approaches. Using the example of a study that employed the Internet as a means of distributing questionnaires, this paper shows that this can be an efficient and effective means of gathering data from a geographically dispersed sample. Furthermore, since typewritten data are obtained in the same format from all respondents, the need for transcription and the potential for error are reduced, potentially enhancing the quality of any such study.

  19. Online intelligent controllers for an enzyme recovery plant: design methodology and performance.

    PubMed

    Leite, M S; Fujiki, T L; Silva, F V; Fileti, A M F

    2010-01-01

    This paper focuses on the development of intelligent controllers for use in a process of enzyme recovery from pineapple rind. The proteolytic enzyme bromelain (EC 3.4.22.4) is precipitated with alcohol at low temperature in a fed-batch jacketed tank. Temperature control is crucial to avoid irreversible protein denaturation. Fuzzy or neural controllers offer a way of implementing solutions that cover dynamic and nonlinear processes. The design methodology and a comparative study on the performance of fuzzy-PI, neurofuzzy, and neural network intelligent controllers are presented. To tune the fuzzy PI Mamdani controller, various universes of discourse, rule bases, and membership function support sets were tested. A neurofuzzy inference system (ANFIS), based on Takagi-Sugeno rules, and a model predictive controller, based on neural modeling, were developed and tested as well. Using a Fieldbus network architecture, a coolant variable speed pump was driven by the controllers. The experimental results show the effectiveness of fuzzy controllers in comparison to the neural predictive control. The fuzzy PI controller exhibited a reduced error parameter (ITAE), lower power consumption, and better recovery of enzyme activity. PMID:21234106

  1. Validating a new methodology for optical probe design and image registration in fNIRS studies.

    PubMed

    Wijeakumar, Sobanawartiny; Spencer, John P; Bohache, Kevin; Boas, David A; Magnotta, Vincent A

    2015-02-01

    Functional near-infrared spectroscopy (fNIRS) is an imaging technique that relies on the principle of shining near-infrared light through tissue to detect changes in hemodynamic activation. An important methodological issue encountered is the creation of optimized probe geometry for fNIRS recordings. Here, across three experiments, we describe and validate a processing pipeline designed to create an optimized, yet scalable probe geometry based on selected regions of interest (ROIs) from the functional magnetic resonance imaging (fMRI) literature. In experiment 1, we created a probe geometry optimized to record changes in activation from target ROIs important for visual working memory. Positions of the sources and detectors of the probe geometry on an adult head were digitized using a motion sensor and projected onto a generic adult atlas and a segmented head obtained from the subject's MRI scan. In experiment 2, the same probe geometry was scaled down to fit a child's head and later digitized and projected onto the generic adult atlas and a segmented volume obtained from the child's MRI scan. Using visualization tools and by quantifying the amount of intersection between target ROIs and channels, we show that out of 21 ROIs, 17 and 19 ROIs intersected with fNIRS channels from the adult and child probe geometries, respectively. Further, both the adult atlas and adult subject-specific MRI approaches yielded similar results and can be used interchangeably. However, results suggest that segmented heads obtained from MRI scans be used for registering children's data. Finally, in experiment 3, we further validated our processing pipeline by creating a different probe geometry designed to record from target ROIs involved in language and motor processing. PMID:25705757
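
    The pipeline's validation step quantifies the intersection between target ROIs and fNIRS channels. Here is a minimal sketch of that idea, assuming ROIs and channel midpoints are reduced to 3-D coordinates and that "intersection" is approximated by a distance threshold (both assumptions ours, not the paper's actual test):

    ```python
    import numpy as np

    def rois_covered(roi_xyz, channel_xyz, radius_mm=15.0):
        """Count ROIs whose centroid lies within radius_mm of at least
        one channel midpoint (a crude stand-in for overlap testing)."""
        hits = 0
        for roi in roi_xyz:
            d = np.linalg.norm(channel_xyz - roi, axis=1)
            if np.any(d <= radius_mm):
                hits += 1
        return hits

    # Hypothetical head-space coordinates (mm): 21 ROIs, 24 channel midpoints.
    rng = np.random.default_rng(0)
    rois = rng.uniform(-70.0, 70.0, size=(21, 3))
    channels = rng.uniform(-70.0, 70.0, size=(24, 3))
    print(f"{rois_covered(rois, channels)} of {len(rois)} ROIs intersect a channel")
    ```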

  2. Development of designer chicken shred with response surface methodology and evaluation of its quality characteristics.

    PubMed

    Reddy, K Jalarama; Jayathilakan, K; Pandey, M C

    2016-01-01

    Meat is considered to be an excellent source of protein, essential minerals, trace elements and vitamins, but health concerns about meat consumption have prompted research into the development of novel functional meat products. In the present study, rice bran oil (RBO) and flaxseed oil (FSO) were used to attain an ideal lipid profile in the product. The experiment was designed to optimise the RBO and FSO concentrations for the development of a product with an ideal lipid profile and maximum acceptability, applying the central composite rotatable design of response surface methodology (RSM). Levels of RBO and FSO were taken as independent variables, and overall acceptability (OAA), n-6 and n-3 fatty acids as responses. A quadratic model was found to be suitable for optimising the product. The sample with RBO (20.51 ml) and FSO (2.57 ml) yielded an OAA score of 8.25, with 29.54 % n-6 and 7.70 % n-3 fatty acids, giving an n-6/n-3 ratio of 3.8:1. The optimised product was analysed for physico-chemical, sensory and microbial profiles during storage at 4 ± 1 °C for 30 days. Lipid oxidation parameters increased during storage, but the increase was not statistically significant (p < 0.05). The study revealed great potential for developing functional poultry products with improved nutritional quality and good shelf stability by incorporating RBO and FSO. PMID:26787966
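
    As a sketch of the design technique named above, the following builds the nine coded runs of a two-factor central composite rotatable design (four factorial points, four axial points at alpha = sqrt(2), one centre point) and fits the full quadratic response model by least squares; the response values are hypothetical, not the study's data.

    ```python
    import numpy as np

    # Coded design matrix for a two-factor CCRD.
    a = np.sqrt(2.0)
    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],   # factorial points
                  [-a, 0], [a, 0], [0, -a], [0, a],     # axial points
                  [0, 0]], dtype=float)                 # centre point
    y = np.array([6.9, 7.8, 7.1, 8.0, 7.0, 8.2, 7.3, 7.6, 8.3])  # hypothetical OAA

    # Full quadratic model: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("fitted quadratic coefficients:", np.round(coef, 3))
    ```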

  3. Rationale, design, and methodology for the optimizing outcomes in women with gestational diabetes mellitus and their infants study

    PubMed Central

    2013-01-01

    Background: Women who are diagnosed with gestational diabetes mellitus (GDM) are at increased risk for developing prediabetes and type 2 diabetes mellitus (T2DM). To date, there have been few interdisciplinary interventions that target predominantly ethnic minority low-income women diagnosed with GDM. This paper describes the rationale, design and methodology of a 2-year, randomized, controlled study being conducted in North Carolina. Methods/Design: Using a two-group, repeated measures, experimental design, we will test a 14-week intensive intervention on the benefits of breastfeeding, understanding gestational diabetes and risk of progression to prediabetes and T2DM, nutrition and exercise education, coping skills training, and physical activity (Phase I), followed by educational and motivational text messaging and 3 months of continued monthly contact (Phase II). A total of 100 African American, non-Hispanic white, and bilingual Hispanic women between 22–36 weeks of pregnancy who are diagnosed with GDM, and their infants, will be randomized to either the experimental group or the wait-listed control group. The first aim of the study is to determine the feasibility of the intervention. The second aim of the study is to test the effects of the intervention on maternal outcomes from baseline (22–36 weeks pregnant) to 10 months postpartum. Primary maternal outcomes will include fasting blood glucose and weight (BMI) from baseline to 10 months postpartum. Secondary maternal outcomes will include clinical, adiposity, health behavior and self-efficacy outcomes from baseline to 10 months postpartum. The third aim of the study is to quantify the effects of the intervention on infant feeding and growth. Infant outcomes will include weight status and breastfeeding from birth through 10 months of age. Data analysis will include general linear mixed-effects models. Safety endpoints include adverse event reporting. Discussion: Findings from this trial may lead to an effective
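
    The analysis plan names general linear mixed-effects models. A minimal sketch of such a model in Python with statsmodels, assuming hypothetical long-format data and invented column names (subject, group, visit, fbg) rather than the study's actual files or variables:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per participant per visit.
    df = pd.read_csv("gdm_outcomes.csv")  # columns assumed: subject, group, visit, fbg

    # Random intercept per participant; fixed effects for group, visit and
    # their interaction capture the intervention effect over time.
    model = smf.mixedlm("fbg ~ group * visit", data=df, groups=df["subject"])
    result = model.fit()
    print(result.summary())
    ```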

  4. Design and progress report for compact cryocooled sapphire oscillator 'VCSO'

    NASA Technical Reports Server (NTRS)

    Dick, G. John; Wang, Rabi T.; Tjoelker, Robert L.

    2005-01-01

    We report on the development of a compact cryocooled sapphire oscillator 'VCSO', designed as a higher-performance replacement for ultra-stable quartz oscillators in local oscillator, cleanup, and flywheel applications in the frequency generation and distribution subsystems of NASA's Deep Space Network (DSN).

  5. Progress in Conceptual Design and Analysis of Advanced Rotorcraft

    NASA Technical Reports Server (NTRS)

    Yamauchi, Gloria K.

    2012-01-01

    This presentation will give information on Multi-Disciplinary Analysis and Technology Development, including its objectives and how they will be met. In addition, it will present recent highlights, including the conclusions of the Lift-Offset Civil Design study and of the LCTR2 Propulsion Concept study. Recent and planned future publications will also be discussed.

  6. Progress and Design Concerns of Nanostructured Solar Energy Harvesting Devices.

    PubMed

    Leung, Siu-Fung; Zhang, Qianpeng; Tavakoli, Mohammad Mahdi; He, Jin; Mo, Xiaoliang; Fan, Zhiyong

    2016-05-01

    Integrating devices with nanostructures is considered a promising strategy to improve the performance of solar energy harvesting devices such as photovoltaic (PV) devices and photo-electrochemical (PEC) solar water splitting devices. Extensive efforts have been exerted to improve the power conversion efficiency (PCE) of such devices by utilizing novel nanostructures to revolutionize device structural designs. Because nanostructures trap light effectively, the thickness of the light absorber, and thus material consumption, can be substantially reduced. Meanwhile, the utilization of nanostructures can also result in more effective carrier collection by shortening the photogenerated carrier collection path length. Nevertheless, performance optimization of nanostructured solar energy harvesting devices requires a rational design of various aspects of the nanostructures, such as their shape, aspect ratio, and periodicity. Without this, the utilization of nanostructures can compromise device performance, as the incorporation of these structures can introduce defects and additional carrier recombination. The design guidelines for solar energy harvesting devices are summarized, including thin film non-uniformity on nanostructures, surface recombination, parasitic absorption, and the importance of a uniform distribution of photo-generated carriers. A systematic view of these design concerns will assist better understanding of device physics and benefit the fabrication of high-performance devices in the future.

  7. Methodological, Theoretical, Infrastructural, and Design Issues in Conducting Good Outcome Studies

    ERIC Educational Resources Information Center

    Kelly, Michael P.; Moore, Tessa A.

    2011-01-01

    This article outlines a set of methodological, theoretical, and other issues relating to the conduct of good outcome studies. The article begins by considering the contribution of evidence-based medicine to the methodology of outcome research. The lessons which can be applied in outcome studies in nonmedical settings are described. The article…

  8. Innovative Mixed-Methods Research: Moving beyond Design Technicalities to Epistemological and Methodological Realizations

    ERIC Educational Resources Information Center

    Riazi, A. Mehdi

    2016-01-01

    Mixed-methods research (MMR), as an inter-discourse (quantitative and qualitative) methodology, can provide applied linguistics researchers the opportunity to draw on and integrate the strengths of the two research methodological approaches in favour of making more rigorous inferences about research problems. In this article, the argument is made…

  9. Development of a systematic and practical methodology for the design of vehicles semi-active suspension control system

    NASA Astrophysics Data System (ADS)

    Bolandhemmat, Hamidreza; Clark, Christopher M.; Golnaraghi, Farid

    2010-05-01

    In this paper, a novel systematic and practical methodology is presented for the design of vehicle semi-active suspension systems. Typically, the semi-active control strategies developed to improve vehicle ride comfort and stability have a switching nature. This makes the design of controlled suspension systems difficult and highly dependent on an extensive trial-and-error process. The proposed methodology maps the discontinuous control system model to a continuous linear region, where all the time- and frequency-domain design techniques established in conventional control theory can be applied. Once the semi-active control system has been designed to satisfy the ride and stability requirements, an inverse mapping yields the final control law. The entire design procedure is summarised in six steps. The effectiveness of the proposed methodology in the design of a semi-active suspension system for a Cadillac SRX 2005 is demonstrated with road test results. Real-time experiments confirm that the use of the newly developed systematic design method reduces the required time and effort in real industrial problems.
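
    To make the "switching nature" concrete, below is a sketch of the classic two-state skyhook law, a standard semi-active strategy; it is a generic textbook example with hypothetical damping values, not the mapping method proposed in the paper.

    ```python
    def skyhook_damping(v_body, v_rel, c_on=2500.0, c_off=300.0):
        """Two-state skyhook logic: command high damping only when the
        sprung-mass velocity and the relative (damper) velocity share a sign.
        v_body: sprung-mass absolute velocity, m/s
        v_rel:  suspension relative velocity (body minus wheel), m/s
        Returns the commanded damping coefficient, N*s/m (values hypothetical)."""
        return c_on if v_body * v_rel > 0.0 else c_off

    # The applied damper force is then F = -c * v_rel.
    for vb, vr in [(0.3, 0.2), (0.3, -0.1), (-0.2, -0.4)]:
        c = skyhook_damping(vb, vr)
        print(f"v_body={vb:+.1f}  v_rel={vr:+.1f}  ->  c={c:.0f} N*s/m, F={-c * vr:+.1f} N")
    ```

    It is this discontinuity between the on and off damping states that the paper's mapping approach is designed to work around.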

  10. Optimizing spacecraft design - optimization engine development : progress and plans

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Dunphy, Julia R; Salcedo, Jose; Menzies, Tim

    2003-01-01

    At JPL and NASA, a process has been developed to perform life cycle risk management. This process requires users to identify: goals and objectives to be achieved (and their relative priorities), the various risks to achieving those goals and objectives, and options for risk mitigation (prevention, detection ahead of time, and alleviation). Risks are broadly defined to include the risk of failing to design a system with adequate performance, compatibility and robustness in addition to more traditional implementation and operational risks. The options for mitigating these different kinds of risks can include architectural and design choices, technology plans and technology back-up options, test-bed and simulation options, engineering models and hardware/software development techniques and other more traditional risk reduction techniques.

  11. PROGRESS WITH NSLS-II INJECTION STRAIGHT SECTION DESIGN

    SciTech Connect

    Shaftan, T.; Blednykh, A.; Casey, B.; Dalesio, B.; Faussete, R.; Ferreira, M.; Fliller, R.; Ganetis, G.; Heese, R.; Hseuh, H.-C.; Job, P.K.; Johnson, E.; Kosciuk, B.; Kowalski, S.; Padrazo, D.; Parker, B.; Pinayev, I.; Sharma, S.; Singh, O.; Spataro, C.

    2011-03-28

    The NSLS-II injection straight section (SR) consists of pulsed and DC bumps, a septa system, and beam trajectory correction and diagnostics systems. In this paper we discuss the overall injection straight layout, preliminary element designs, specifications for the pulsed and DC magnets and their power supplies, vacuum devices and chambers, and diagnostics devices. Prior to selecting the current 'conventional' design of the injection straight section, we analyzed an option of injection via a pulsed multipole, pioneered at PF-AR. We found that this promising approach was not suited to the NSLS-II storage ring optics, since it would require an impractically compact arrangement of the injection straight section components and a complex modification of the transport line optics due to the strong focusing of the injected beam passing off the pulsed multipole axis. In addition, the requirement for a small injection transient of the stored beam orbit severely constrains the vertical alignment tolerance of the pulsed multipole. The design of the NSLS-II injection straight section is now complete, with the exception of transition chamber details, which will be adjusted to accommodate the actual layouts of the pulsed magnets.

  12. LOX/hydrocarbon rocket engine analytical design methodology development and validation. Volume 1: Executive summary and technical narrative

    NASA Technical Reports Server (NTRS)

    Pieper, Jerry L.; Walker, Richard E.

    1993-01-01

    During the past three decades, an enormous amount of resources was expended in the design and development of Liquid Oxygen/Hydrocarbon and Hydrogen (LOX/HC and LOX/H2) rocket engines. A significant portion of these resources was used to develop and demonstrate the performance and combustion stability of each new engine. During these efforts, many analytical and empirical models were developed that characterize design parameters and combustion processes influencing performance and stability. Many of these models are suitable as design tools, but they had not been assembled into an industry-wide usable analytical design methodology. The objective of this program was to assemble existing performance and combustion stability models into a usable methodology capable of producing high-performing and stable LOX/hydrocarbon and LOX/hydrogen propellant booster engines.

  13. Revised Design-Based Research Methodology for College Course Improvement and Application to Education Courses in Japan

    ERIC Educational Resources Information Center

    Akahori, Kanji

    2011-01-01

    The author describes a research methodology for college course improvement, and applies the results to education courses. In Japan, it is usually difficult to carry out research on college course improvement, because faculty cannot introduce experimental design approaches based on control and treatment groupings of students in actual classroom…

  14. Introduction to the Design and Optimization of Experiments Using Response Surface Methodology. A Gas Chromatography Experiment for the Instrumentation Laboratory

    ERIC Educational Resources Information Center

    Lang, Patricia L.; Miller, Benjamin I.; Nowak, Abigail Tuttle

    2006-01-01

    The study describes how to design and optimize an experiment with multiple factors and multiple responses. The experiment uses fractional factorial analysis as a screening experiment only to identify important instrumental factors and does not use response surface methodology to find the optimal set of conditions.

  15. A computer modeling methodology and tool for assessing design concepts for the Space Station Data Management System

    NASA Technical Reports Server (NTRS)

    Jones, W. R.

    1986-01-01

    A computer modeling tool is being developed to assess candidate designs for the Space Station Data Management System (DMS). The DMS is to be a complex distributed computer system including the processors, storage devices, local area networks, and software that will support all processing functions onboard the Space Station. The modeling tool will allow a candidate design for the DMS, or for other subsystems that use the DMS, to be evaluated in terms of performance parameters. The tool and its associated modeling methodology are intended for use by DMS and subsystem designers to perform tradeoff analyses between design concepts using varied architectures and technologies.

  16. Design methodology for integrated downstream separation systems in an ethanol biorefinery

    NASA Astrophysics Data System (ADS)

    Mohammadzadeh Rohani, Navid

    and obtaining energy security. On the other hand, Process Integration (PI), defined by Natural Resources Canada as the combination of activities which aim at improving process systems, their unit operations and their interactions in order to maximize the efficiency of using water, energy and raw materials, can also help biorefineries lower their energy consumption and improve their economics. Energy integration techniques such as pinch analysis, adopted by different industries over the years, ensure that heat sources within a plant supply the internal demand and decrease external utility consumption. Therefore, adopting energy integration is one of the ways biorefinery technology owners can improve their process development, their business model, and their overall economics. The objective of this thesis is to propose a methodology for designing integrated downstream separation in a biorefinery. This methodology is tested in an ethanol biorefinery case study. Several alternative separation techniques are evaluated for their energy consumption and economics in three different scenarios: stand-alone without energy integration, stand-alone with internal energy integration, and integrated with Kraft. The energy consumption and capital cost of the separation techniques are assessed in each scenario, the costs and benefits of integration are determined, and finally the best alternative is identified through techno-economic metrics. Another advantage of this methodology is the use of a graphical tool which provides insights for decreasing energy consumption by modifying process conditions. The pivot point of this work is the use of a novel energy integration method called Bridge analysis. This systematic method, originally intended for retrofit situations, is used here for integration with the Kraft process. Integration potentials are identified through this method and savings are presented for each design. In stand-alone with

  17. Progress summary of LHD engineering design and construction

    NASA Astrophysics Data System (ADS)

    Motojima, O.; Akaishi, K.; Chikaraishi, H.; Funaba, H.; Hamaguchi, S.; Imagawa, S.; Inagaki, S.; Inoue, N.; Iwamoto, A.; Kitagawa, S.; Komori, A.; Kubota, Y.; Maekawa, R.; Masuzaki, S.; Mito, T.; Miyazawa, J.; Morisaki, T.; Murai, K.; Muroga, T.; Nagasaka, T.; Nakamura, Y.; Nishimura, A.; Nishimura, K.; Noda, N.; Ohyabu, N.; Sagara, A.; Sakakibara, S.; Sakamoto, R.; Satoh, S.; Satow, T.; Shoji, M.; Suzuki, H.; Takahata, K.; Tamura, H.; Watanabe, K. Y.; Yamada, H.; Yamada, S.; Yamaguchi, S.; Yamazaki, K.; Yanagi, N.; Baba, T.; Hayashi, H.; Iima, M.; Inoue, T.; Kato, S.; Kato, T.; Kondo, T.; Moriuchi, S.; Ogawa, H.; Ohtake, I.; Ooba, K.; Sekiguchi, H.; Suzuki, N.; Takami, S.; Taniguchi, Y.; Tsuzuki, T.; Yamamoto, N.; Yasui, K.; Yonezu, H.; Fujiwara, M.; Iiyoshi, A.

    2000-03-01

    In March 1998, the LHD project finally completed its eight-year construction schedule. LHD is a superconducting (SC) heliotron type device with R = 3.9 m, a_p = 0.6 m and B = 3 T, which has simple and continuous large helical coils. The major mission of LHD is to demonstrate the high potential of currentless helical-toroidal plasmas, which are free from current disruption and have an intrinsic potential for steady state operation. After intensive physics design studies in the 1980s, the necessary programmes of SC engineering R&D were carried out, and as a result, LHD fabrication technologies were successfully developed. In this process, a significant database on fusion engineering was established. Achievements have been made in various areas, such as SC conductor development, SC coil fabrication, liquid He and supercritical He cryogenics, development of low temperature structural materials and welding, operation and control, and power supply systems and related SC coil protection schemes. Integrated, these achievements now comprise a major part of the LHD-relevant fusion technology area and correspond to the technological database necessary for the next step of future reactor designs. This database was further extended by the successful commissioning tests carried out just after completion of the LHD machine assembly phase, which consisted of a vacuum leak test, an LHe cooldown test and a coil current excitation test. These LHD-relevant engineering developments are recapitulated and highlighted. To summarize the construction of LHD as an SC device: the critical design with NbTi SC material has been successfully accomplished through these R&D activities, which enables a new regime of fusion experiments to be entered.

  18. Tornado missile simulation and design methodology. Volume 2: model verification and data base updates. Final report

    SciTech Connect

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments.
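
    A hedged toy sketch of the simulation structure described above: sample tornado strikes and lofted missiles, then count target impacts over many Monte Carlo trials. All probabilities and distributions here are invented for illustration and are not TORMIS data.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_trials = 100_000  # each trial represents one simulated year

    # Hypothetical, illustrative parameters -- not TORMIS inputs.
    p_strike = 1e-3             # annual probability a tornado strikes the site
    mean_missiles = 5.0         # mean missiles lofted per strike
    p_hit_per_missile = 0.02    # chance a single missile impacts the target

    strikes = rng.random(n_trials) < p_strike
    n_missiles = rng.poisson(mean_missiles, n_trials)
    # Probability that at least one of the lofted missiles hits the target.
    p_any_hit = 1.0 - (1.0 - p_hit_per_missile) ** n_missiles
    impacted = strikes & (rng.random(n_trials) < p_any_hit)

    print(f"estimated annual impact probability ~ {impacted.mean():.1e}")
    ```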

  19. A Review of Three Decades of Doctoral Studies Using the Principal Instructional Management Rating Scale: A Lens on Methodological Progress in Educational Leadership

    ERIC Educational Resources Information Center

    Hallinger, Philip

    2011-01-01

    Background: This report continues in the lineage of reviews of research in educational leadership and management by examining methodological approaches used by doctoral researchers in studying principal instructional leadership. Research Design: The article reviews the full set of 130 doctoral dissertations completed over the past three decades…

  20. Multiplexed actuation using ultra dielectrophoresis for proteomics applications: a comprehensive electrical and electrothermal design methodology.

    PubMed

    Emaminejad, Sam; Dutton, Robert W; Davis, Ronald W; Javanmard, Mehdi

    2014-06-21

    In this work, we present a methodological approach to analyzing an enhanced dielectrophoresis (DEP) system from both circuit-analysis and electrothermal viewpoints. In our developed model, we have taken into account various phenomena and constraints such as voltage degradation (due to the presence of the protecting oxide layer), oxide breakdown, instrumentation limitations, and thermal effects. The results from this analysis are applicable generally to a wide variety of geometries and high-voltage microsystems. Here, these design guidelines were applied to develop a robust electronic actuation system to perform a multiplexed bead-based protein assay. To carry out the multiplexed functionality, an array of proteins is patterned along a single microfluidic channel, where each element targets a specific secondary protein coated on micron-sized beads in the subsequently introduced sample solution. Below each element of the array is a pair of addressable interdigitated electrodes. By selectively applying voltage at the terminals of each interdigitated electrode pair, the enhanced DEP, or equivalently 'ultra'-DEP (uDEP), force detaches protein-bound beads from each element of the array, one by one, without disturbing the bound beads in the neighboring regions. The detached beads can be quantified optically or electrically downstream. As proof of concept, we illustrated the 16-plex actuation capability of our device by eluting micron-sized beads bound to the surface through anti-IgG/IgG interaction, which is of the same order of magnitude in strength as typical antibody-antigen interactions. In addition to its application in multiplexed protein analysis, our platform can potentially be utilized to statistically characterize the strength profile of biological bonds, since the multiplexed format allows high-throughput force spectroscopy using the array of uDEP devices under the same buffer and assay preparation conditions. PMID:24801800
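
    The "voltage degradation" attributed above to the protecting oxide can be pictured as a series impedance divider between the oxide capacitance and the fluidic medium. The sketch below computes the fraction of the applied AC voltage reaching the medium versus frequency; all element values are hypothetical, not the paper's device parameters.

    ```python
    import numpy as np

    def medium_voltage_fraction(f_hz, c_ox=2e-10, r_med=5e4, c_med=1e-11):
        """|V_medium / V_applied| for a series divider: oxide capacitance
        in series with the medium modeled as a parallel RC. Values hypothetical."""
        w = 2.0 * np.pi * f_hz
        z_ox = 1.0 / (1j * w * c_ox)                   # oxide impedance
        z_med = 1.0 / (1.0 / r_med + 1j * w * c_med)   # medium: R parallel C
        return abs(z_med / (z_med + z_ox))

    for f in (1e3, 1e5, 1e7):
        print(f"f = {f:.0e} Hz -> V_medium/V_applied = {medium_voltage_fraction(f):.3f}")
    ```

    At low frequency the oxide takes most of the applied voltage (the degradation), while at higher frequencies more of it reaches the medium.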

  1. Rationale, Design, Methodology and Hospital Characteristics of the First Gulf Acute Heart Failure Registry (Gulf CARE)

    PubMed Central

    Sulaiman, Kadhim J.; Panduranga, Prashanth; Al-Zakwani, Ibrahim; Alsheikh-Ali, Alawi; Al-Habib, Khalid; Al-Suwaidi, Jassim; Al-Mahmeed, Wael; Al-Faleh, Husam; El-Asfar, Abdelfatah; Al-Motarreb, Ahmed; Ridha, Mustafa; Bulbanat, Bassam; Al-Jarallah, Mohammed; Bazargani, Nooshin; Asaad, Nidal; Amin, Haitham

    2014-01-01

    Background: There is a paucity of data on heart failure (HF) in the Gulf Middle East. The present paper describes the rationale, design, methodology and hospital characteristics of the first Gulf acute heart failure registry (Gulf CARE). Materials and Methods: Gulf CARE is a prospective, multicenter, multinational registry of patients >18 years of age admitted with a diagnosis of acute HF (AHF). The data collected included demographics, clinical characteristics, etiology, precipitating factors, management and outcomes of patients admitted with AHF. In addition, data about hospital readmission rates, procedures and mortality at 3 months and 1-year follow-up were recorded. Hospital characteristics and care provider details were collected. Data were entered in a dedicated website using an electronic case record form. Results: A total of 5005 consecutive patients were enrolled from February 14, 2012 to November 13, 2012. Forty-seven hospitals in 7 Gulf states (Oman, Saudi Arabia, Yemen, Kuwait, United Arab Emirates, Qatar and Bahrain) participated in the project. The majority of hospitals were community hospitals (46%; 22/47), followed by non-University teaching hospitals (32%; 15/47) and University hospitals (17%). Most of the hospitals had intensive or coronary care unit facilities (93%; 44/47), with 59% (28/47) having catheterization laboratory facilities. However, only 29% (14/47) had a dedicated HF clinic facility. Most patients (71%) were cared for by a cardiologist. Conclusions: Gulf CARE is the first prospective registry of AHF in the Middle East, intended to provide a unique insight into the demographics, etiology, management and outcomes of AHF in the Middle East. HF management in the Middle East is predominantly provided by cardiologists. The data obtained from this registry will help local clinicians to identify deficiencies in HF management as well as provide a platform to implement evidence-based preventive and treatment strategies to reduce the burden of HF in

  2. Neutralization of red mud with pickling waste liquor using Taguchi's design of experimental methodology.

    PubMed

    Rai, Suchita; Wasewar, Kailas L; Lataye, Dilip H; Mishra, Rajshekhar S; Puttewar, Suresh P; Chaddha, Mukesh J; Mahindiran, P; Mukhopadhyay, Jyoti

    2012-09-01

    'Red mud' or 'bauxite residue', a waste generated from alumina refining, is highly alkaline in nature, with a pH of 10.5-12.5. Red mud poses serious environmental problems such as alkali seepage into ground water and alkaline dust generation. One of the options to make red mud less hazardous and environmentally benign is its neutralization with an acid or an acidic waste. Hence, in the present study, neutralization of alkaline red mud was carried out using a highly acidic waste (pickling waste liquor). Pickling waste liquor is a mixture of strong acids used for descaling or cleaning surfaces in the steel making industry. The aim of the study was to examine the feasibility of the neutralization process for the two wastes using Taguchi's design of experimental methodology, which would make both wastes less hazardous and safe for disposal. The effects of slurry solids, volume of pickling liquor, stirring time and temperature on the neutralization process were investigated. The analysis of variance (ANOVA) shows that the volume of pickling liquor is the most significant parameter, followed by the quantity of red mud, contributing 69.18% and 18.48%, respectively. Under the optimized parameters, a pH value of 7 can be achieved by mixing the two wastes. About 25-30% of the total soda in the red mud is neutralized and alkalinity is reduced by 80-85%. The mineralogy and morphology of the neutralized red mud have also been studied. The data presented will be useful in view of the environmental concerns of red mud disposal. PMID:22751850
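
    The ANOVA step above reports factor significance as percent contributions. A minimal sketch of that calculation, taking each factor's sum of squares as a share of the total; the sums of squares are hypothetical values chosen so the output echoes the reported 69.18% and 18.48% contributions.

    ```python
    def percent_contribution(ss_factors, ss_error=0.0):
        """Taguchi-style percent contribution: each factor's sum of squares
        as a percentage of the total sum of squares."""
        ss_total = sum(ss_factors.values()) + ss_error
        return {k: 100.0 * v / ss_total for k, v in ss_factors.items()}

    # Hypothetical sums of squares (arbitrary units).
    ss = {"pickling liquor volume": 69.18, "red mud quantity": 18.48,
          "stirring time": 7.00, "temperature": 3.00}
    for factor, pc in percent_contribution(ss, ss_error=2.34).items():
        print(f"{factor:24s} {pc:6.2f} %")
    ```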

  3. Progress toward a rationally designed, chemically powered rotary molecular motor.

    PubMed

    Kelly, T Ross; Cai, Xiaolu; Damkaci, Fehmi; Panicker, Sreeletha B; Tu, Bin; Bushell, Simon M; Cornella, Ivan; Piggott, Matthew J; Salives, Richard; Cavero, Marta; Zhao, Yajun; Jasmin, Serge

    2007-01-17

    Building on prototype 1, which achieves 120 degrees of phosgene-powered unidirectional rotation to rotamer 6 (see Figure 5 in the full article), 7 was designed to accomplish repeated unidirectional rotation (see Scheme 7). Compound 7 contains an amino group on each blade of the triptycene and a 4-(dimethylamino)pyridine (DMAP) unit to selectively deliver phosgene (or its equivalent) to the amine in the "firing position". The synthesis of 7 is described: the key constructive steps are a benzyne addition to an anthracene to generate the triptycene, a stilbene photocyclization to construct the helicene, and a Stille coupling to incorporate the DMAP unit. The DMAP unit was shown to regioselectively relay 1,1'-carbonyldiimidazole (but not phosgene) to the proximal amino group, as designed, but rotation of the triptycene does not occur. Extensive attempts to troubleshoot the problem led to the conclusion that the requisite intramolecular urethane formation, as demonstrated in the prototype (1 --> 4), does not occur with 7 (to give 85) or 97 (to give 100). We speculate that either (i) hydrogen bonding between the hydroxypropyl group and functionality present in 7 but absent from 1 or (ii) a Bürgi-Dunitz (or similar) interaction involving the DMAP (see 106) prevents achievement of a conformation conducive to intramolecular urethane formation. PMID:17212418

  4. Computer-Aided Drug Design (CADD): Methodological Aspects and Practical Applications in Cancer Research

    NASA Astrophysics Data System (ADS)

    Gianti, Eleonora

    Computer-Aided Drug Design (CADD) has deservedly gained increasing popularity in modern drug discovery (Schneider, G.; Fechner, U. 2005), whether applied to academic basic research or the pharmaceutical industry pipeline. In this work, after reviewing theoretical advancements in CADD, we integrated novel and state-of-the-art methods to assist in the design of small-molecule inhibitors of current cancer drug targets, specifically: Androgen Receptor (AR), a nuclear hormone receptor required for carcinogenesis of Prostate Cancer (PCa); Signal Transducer and Activator of Transcription 5 (STAT5), implicated in PCa progression; and Epstein-Barr Nuclear Antigen-1 (EBNA1), essential to the Epstein-Barr Virus (EBV) during latent infections. Androgen Receptor: with the aim of generating binding mode hypotheses for a class (Handratta, V.D. et al. 2005) of dual AR/CYP17 inhibitors (CYP17 is a key enzyme in androgen biosynthesis and is therefore implicated in PCa development), we successfully implemented a receptor-based computational strategy based on flexible receptor docking (Gianti, E.; Zauhar, R.J. 2012). Then, with the ultimate goal of identifying novel AR binders, we performed Virtual Screening (VS) by Fragment-Based Shape Signatures, an improved version of the original method developed in our Laboratory (Zauhar, R.J. et al. 2003), and we used the results to fully assess the high-level performance of this innovative tool in computational chemistry. STAT5: the SRC Homology 2 (SH2) domain of STAT5 is responsible for phospho-peptide recognition and activation. As a keystone of Structure-Based Drug Design (SBDD), we characterized key residues responsible for binding. We also generated a model of the STAT5 receptor bound to a phospho-peptide ligand, which was validated by docking publicly known STAT5 inhibitors. Then, we performed Shape Signatures- and docking-based VS of the ZINC database (zinc.docking.org), followed by Molecular Mechanics Generalized Born Surface Area (MMGBSA

  5. GOING UNDERGROUND IN FINLAND: DESIGN OF ONKALO IN PROGRESS

    SciTech Connect

    Dikds, T.; Ikonen, A.; Niiranen, S.; Hansen, J.

    2003-02-27

    The long-term program aimed at selecting a site for a deep repository was initiated in Finland in 1983. This program came to an end in 2001, and a new phase aimed at implementation of the geological disposal of spent fuel has started. In this new phase the first milestone is the application for a construction license for the disposal facility around 2010. To fulfill the needs of detailed design of the disposal system, an underground rock characterization facility (URCF) will be constructed at a representative depth at Olkiluoto. The excavation of this facility will begin the work of underground characterization, testing and demonstration, which is planned to be a continuous activity throughout the whole life cycle of the deep repository. The overall objectives of the underground site characterization are (1) verification of the present conclusions on site suitability, (2) definition and identification of suitable rock volumes for repository space and (3) characterization of the planned host rock for detailed design, safety assessment and construction planning. The verification objective aims at confirming that the Olkiluoto site meets the basic criteria for long-term safety as well as the basic requirements for construction, and thus justifies the site selection. The two other main objectives are closely related to the design of the repository and the assessment of the long-term safety of the site-specific disposal system. Most importantly, ONKALO should allow an in-depth investigation of the geological environment and provide the opportunity to validate models at more appropriate scales and conditions than can be achieved from the surface. In some areas, such as demonstrating operational safety, acquiring geological information at a repository scale, and establishing constructional and operational feasibility, the ONKALO will provide the only reliable source of in situ data. The depth range envisaged for the URCF, called ONKALO, is between 400 and

  6. The Discovery Channel Telescope: Construction and Design Progress, January 2007

    NASA Astrophysics Data System (ADS)

    Bida, Thomas A.; Millis, R. L.; Smith, B. W.; Dunham, E. W.; Marshall, H.

    2006-12-01

    The Discovery Channel Telescope (DCT) is a 4.2m telescope under construction in northern Arizona. The DCT is located at a new site near Happy Jack at 2361m elevation, which was selected following a lengthy site testing campaign that demonstrated DIMM-characterized median ground-level seeing of 0.84-arcsec FWHM. The DCT science mission includes targeted studies of astrophysical and solar system objects utilizing RC and Nasmyth-mounted imaging and spectroscopic instrumentation, and wide-field surveys of KBOs, NEAs, and astrophysical objects with a 2-degree FOV prime focus camera. The DCT facility enclosure and control buildings will be completed soon, including the telescope mount and dome supports, major machinery infrastructure, the instrument laboratory, control and computer rooms, and the auxiliary building for the mirror coating plant. Meanwhile, the effort of final figuring and polishing the 4.3m ULE meniscus primary mirror blank began in August 2006 at the University of Arizona College of Optical Sciences. The primary mirror and its design support, and the integrated telescope mount model, were analyzed with finite elements to optimize the design of the mirror and top-end support configurations. The primary mirror axial and tangential actuators will be fabricated in early 2007 and utilized in the final figure and polish cycle. The prime focus camera design has been refined to achieve atmospheric dispersion-compensated 0.25-arcsec images at 1-degree field radius, from B to I band, at reduced cost through simplification of glasses to standard types and the use of spheres on all but two lens surfaces. The Discovery Channel Telescope is a project of Lowell Observatory with major financial support from Discovery Communications, Inc. (DCI). DCI plans ongoing television programming featuring the construction of the telescope and the research ultimately conducted with the DCT. Lowell Observatory and Discovery Communications are actively seeking additional

  7. PROGRESS IN DESIGN OF THE INSTRUMENTATION AND CONTROL OF THE TOKAMAK COOLING WATER SYSTEM

    SciTech Connect

    Korsah, Kofi; DeVan, Bill; Ashburn, David; Crotts, Brad; Smith, Michael

    2015-01-01

    This paper discusses progress in the design of the control, interlock and safety systems of the Tokamak Cooling Water System (TCWS) for the ITER fusion reactor. The TCWS instrumentation and control (I&C) is one of approximately 200 separate plant I&C systems (e.g., vacuum system I&C, magnet system I&C) that interface to a common central I&C system through standardized networks. Several aspects of the I&C are similar to the I&C of fission-based power plants. However, some of the unique features of the ITER fusion reactor and the TCWS (e.g., the high quasi-static magnetic field, and the need for baking and drying as well as cooling operations) also demand unique safety and qualification considerations. The paper compares the design strategy and guidelines of the TCWS I&C with the I&C of conventional nuclear power plants. Issues such as safety classification, independence between control and safety systems, sensor sharing, redundancy, voting schemes, and qualification methodologies are discussed. It is concluded that independence and separation requirements are similar in both designs. However, safety systems in nuclear power plants typically use 2oo4 voting (i.e., 4 divisions of safety I&C, any 2 of which are sufficient to trigger a safety action), while the TCWS I&C uses 2oo3 voting logic within each of 2 independent trains; 2oo3 voting is likewise acceptable in nuclear power plants if adequate risk assessment and reliability are demonstrated. Finally, while qualification standards offer comparable guidance [e.g., IEC 60780 (invoked in the ITER space) and IEEE 323 (invoked in the fission power plant space)], an important qualification consideration is the susceptibility of the I&C to the magnetic fields of ITER; the radiation environments also differ. For the magnetic fields, the paper discusses some options that are being considered.

  8. A Multi-Objective Advanced Design Methodology of Composite Beam-to-Column Joints Subjected to Seismic and Fire Loads

    SciTech Connect

    Pucinotti, Raffaele; Ferrario, Fabio; Bursi, Oreste S.

    2008-07-08

    A multi-objective advanced design methodology dealing with seismic actions followed by fire on steel-concrete composite full strength joints with concrete filled tubes is proposed in this paper. The specimens were designed in detail in order to exhibit a suitable fire behaviour after a severe earthquake. The major aspects of the cyclic behaviour of composite joints are presented and commented upon. The data obtained from monotonic and cyclic experimental tests have been used to calibrate a model of the joint in order to perform seismic simulations on several moment resisting frames. A hysteretic law was used to take into account the seismic degradation of the joints. Finally, fire tests were conducted with the objective to evaluate fire resistance of the connection already damaged by an earthquake. The experimental activity together with FE simulation demonstrated the adequacy of the advanced design methodology.

  9. Fan Atomized Burner design advances & commercial development progress

    SciTech Connect

    Kamath, B.; Butcher, T.A.

    1996-07-01

    As part of the Oil Heat Research and Development program sponsored by the US Department of Energy, Brookhaven National Laboratory (BNL) has an ongoing interest in advanced combustion technologies. This interest is aimed at improving the initial efficiency of heating equipment, reducing long-term fouling and efficiency degradation, reducing air pollutant emissions, and providing practical low-firing-rate technologies which may lead to new, high-efficiency oil-fired appliances. The Fan-Atomized Burner (FAB) technology is being developed at BNL as part of this general goal. The Fan-Atomized Burner uses a low-pressure, air-atomizing nozzle in place of the high-pressure nozzle used in conventional burners. Because it is air-atomized, the burner can operate at low firing rates without the small passages and reliability concerns of low-input-pressure nozzles. Because it uses a low-pressure nozzle, the burner can use a fan in place of the small compressor used in other air-atomized burner designs. High initial efficiency of heating equipment is achieved because the burner can operate at very low excess air levels. These low excess air levels also reduce the formation of sulfuric acid in flames. Sulfuric acid is responsible for scaling and fouling of heat exchanger surfaces.

  10. Progress In NCSX and QPS Design and Construction

    SciTech Connect

    Reiersen, W.; Heitzenroeder, P.; Neilson, G. H.; Nelson, B.; Zarnstorff, M.; Brown, T.; Cole, M.; Chrzanowski, J.; Fogarty, P.; Goranson, P.; Lyon, J.; Schmidt, J.; Strykowsky, R.; Viola, M.; Williamson, D.

    2005-10-20

    The National Compact Stellarator Experiment (NCSX) is being constructed at the Princeton Plasma Physics Laboratory (PPPL) in partnership with the Oak Ridge National Laboratory (ORNL). The stellarator core is designed to produce a compact 3-D plasma that combines stellarator and tokamak physics advantages. The engineering challenges of NCSX stem from its complex geometry. From the project's start in April, 2003 to September, 2004, the fabrication specifications for the project's two long-lead components, the modular coil winding forms and the vacuum vessel, were developed. An industrial manufacturing R&D program refined the processes for their fabrication as well as production cost and schedule estimates. The project passed a series of reviews and established its performance baseline with the Department of Energy. In September 2004, fabrication was approved and contracts for these components were awarded. The suppliers have completed the engineering and tooling preparations and are in production. Meanwhile, the project completed preparations for winding the coils at PPPL by installing a coil manufacturing facility and developing all necessary processes through R&D. The main activities for the next two years will be component manufacture, coil winding, and sub-assembly of the vacuum vessel and coil subsets. Machine sector sub-assembly, machine assembly, and testing will follow, leading to First Plasma in July 2009.

  11. In Silico Design and Biological Evaluation of a Dual Specificity Kinase Inhibitor Targeting Cell Cycle Progression and Angiogenesis

    PubMed Central

    Latham, Antony M.; Kankanala, Jayakanth; Fearnley, Gareth W.; Gage, Matthew C.; Kearney, Mark T.; Homer-Vanniasinkam, Shervanthi; Wheatcroft, Stephen B.; Fishwick, Colin W. G.; Ponnambalam, Sreenivasan

    2014-01-01

    Background: Protein kinases play a central role in tumor progression, regulating fundamental processes such as angiogenesis, proliferation and metastasis. Such enzymes are an increasingly important class of drug target, with small-molecule kinase inhibitors being a major focus in drug development. However, balancing drug specificity and efficacy is problematic, with off-target effects and toxicity issues. Methodology: We have utilized a rational in silico-based approach to demonstrate the design and study of a novel compound that acts as a dual inhibitor of vascular endothelial growth factor receptor 2 (VEGFR2) and cyclin-dependent kinase 1 (CDK1). This compound acts by simultaneously inhibiting pro-angiogenic signal transduction and cell cycle progression in primary endothelial cells. JK-31 displays potent in vitro activity against recombinant VEGFR2 and CDK1/cyclin B proteins, comparable to previously characterized inhibitors. Dual inhibition of the vascular endothelial growth factor A (VEGF-A)-mediated signaling response and CDK1-mediated mitotic entry elicits anti-angiogenic activity both in an endothelial-fibroblast co-culture model and in a murine ex vivo model of angiogenesis. Conclusions: We deduce that JK-31 reduces the growth of both human endothelial cells and human breast cancer cells in vitro. This novel synthetic molecule has broad implications for the development of similar multi-kinase inhibitors with anti-angiogenic and anti-cancer properties. In silico design is an attractive and innovative method to aid such drug discovery. PMID:25393739

  12. Methodology for the optimal design of an integrated first and second generation ethanol production plant combined with power cogeneration.

    PubMed

    Bechara, Rami; Gomez, Adrien; Saint-Antonin, Valérie; Schweitzer, Jean-Marc; Maréchal, François

    2016-08-01

    The application of methodologies for the optimal design of integrated processes has seen increased interest in the literature. This article builds on previous works and applies a systematic methodology to an integrated first and second generation ethanol production plant with power cogeneration. The methodology breaks down into process simulation, heat integration, thermo-economic evaluation, multi-variable evolutionary optimization of exergy efficiency versus capital cost, and process selection via profitability maximization. Optimization generated Pareto solutions with exergy efficiency ranging between 39.2% and 44.4% and capital costs from 210 M$ to 390 M$. The Net Present Value (NPV) was positive for only two scenarios, and only at low-efficiency, low-hydrolysis points. The minimum cellulosic ethanol selling price was therefore sought that brings the maximum NPV of the high-efficiency, high-hydrolysis alternatives to zero. The resulting optimal configuration presented maximum exergy efficiency, hydrolyzed bagasse fraction, capital cost and ethanol production rate, and minimum cooling water consumption and power production rate. PMID:27160954
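
    A minimal sketch of the final selection step described above: given candidate designs scored on exergy efficiency (to be maximized) and capital cost (to be minimized), keep the non-dominated (Pareto) set. The candidate values are hypothetical points inside the reported ranges, not the article's actual solutions.

    ```python
    def pareto_front(designs):
        """Keep designs not dominated by any other; a design dominates
        another if it is at least as efficient AND at least as cheap."""
        front = []
        for d in designs:
            dominated = any(o["eff"] >= d["eff"] and o["capex"] <= d["capex"]
                            and o is not d for o in designs)
            if not dominated:
                front.append(d)
        return sorted(front, key=lambda d: d["capex"])

    # Hypothetical candidates within the reported ranges.
    designs = [{"eff": 0.392, "capex": 210}, {"eff": 0.410, "capex": 265},
               {"eff": 0.400, "capex": 290}, {"eff": 0.444, "capex": 390}]
    for d in pareto_front(designs):
        print(f"efficiency = {d['eff']:.3f}, capital cost = {d['capex']} M$")
    ```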

  13. SDSS-IV MaNGA: Survey Design and Progress

    NASA Astrophysics Data System (ADS)

    Yan, Renbin; MaNGA Team

    2016-01-01

    The ongoing SDSS-IV/MaNGA Survey will obtain integral field spectroscopy at a resolution of R~2000 with wavelength coverage from 3,600A to 10,300A for 10,000 nearby galaxies. Within each 3-degree-diameter pointing of the 2.5m Sloan Telescope, we deploy 17 hexagonal fiber bundles with sizes ranging from 12 to 32 arcsec in diameter. The bundles are built with 2 arcsec fibers and have a 56% fill factor. During observations, we obtain sets of exposures at 3 different dither positions to achieve near-critical sampling of the effective point spread function, which has a FWHM of about 2.5 arcsec, corresponding to 1-2 kpc for the majority of the galaxies targeted. The flux calibration is done using 12 additional mini fiber bundles targeting standard stars simultaneously with the science targets, achieving a calibration accuracy better than 5% over 90% of the wavelength range. The target galaxies are selected to ensure uniform spatial coverage in units of effective radii for the majority of the galaxies while maximizing spatial resolution. About 2/3 of the sample is covered out to 1.5 Re (primary sample) and 1/3 out to 2.5 Re (secondary sample). The sample is designed to have approximately equal representation from high- and low-mass galaxies while maintaining volume-limited selection at fixed absolute magnitudes. We obtain an average S/N of 4 per Angstrom in the r-band continuum at a surface brightness of 23 AB arcsec^-2. With spectral stacking in an elliptical annulus covering 1-1.5 Re, our primary sample galaxies have a median S/N of ~60 per Angstrom in the r band.

  14. Design-Based Research: A Decade of Progress in Education Research?

    ERIC Educational Resources Information Center

    Anderson, Terry; Shattuck, Julie

    2012-01-01

    Design-based research (DBR) evolved near the beginning of the 21st century and was heralded as a practical research methodology that could effectively bridge the chasm between research and practice in formal education. In this article, the authors review the characteristics of DBR and analyze the five most cited DBR articles from each year of this…

  15. Review of the probabilistic failure analysis methodology and other probabilistic approaches for application in aerospace structural design

    NASA Technical Reports Server (NTRS)

    Townsend, J.; Meyers, C.; Ortega, R.; Peck, J.; Rheinfurth, M.; Weinstock, B.

    1993-01-01

    Probabilistic structural analysis and design methods are steadily gaining acceptance within the aerospace industry. The safety factor approach to design has long been the industry standard, and it is believed by many to be overly conservative and thus costly. A probabilistic approach to design may offer substantial cost savings. This report summarizes several probabilistic approaches: the probabilistic failure analysis (PFA) methodology developed by the Jet Propulsion Laboratory, fast probability integration (FPI) methods, the NESSUS finite element code, and response surface methods. Example problems are provided to help identify the advantages and disadvantages of each method.
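
    To make the contrast with the safety factor approach concrete, here is a hedged sketch of the simplest calculation such probabilistic methods build on: a normal-theory reliability index for a stress-strength margin. The load and resistance statistics are hypothetical, and real applications use FPI, FORM/SORM or sampling rather than this closed form.

    ```python
    from math import sqrt
    from scipy.stats import norm

    # Hypothetical normally distributed load effect (S) and resistance (R), MPa.
    mu_S, sigma_S = 300.0, 30.0
    mu_R, sigma_R = 450.0, 45.0

    # Margin M = R - S; failure occurs when M < 0.
    beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)  # reliability index
    p_f = norm.cdf(-beta)                                 # failure probability

    print(f"central safety factor = {mu_R / mu_S:.2f}")
    print(f"reliability index     = {beta:.2f}")
    print(f"failure probability   = {p_f:.2e}")
    ```

    Two designs with the same safety factor can have very different failure probabilities once the scatter in load and resistance is accounted for, which is the argument for the probabilistic approach.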

  16. Thermal Hydraulics Design and Analysis Methodology for a Solid-Core Nuclear Thermal Rocket Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Chen, Yen-Sen; Cheng, Gary; Ito, Yasushi

    2013-01-01

    Nuclear thermal propulsion is a leading candidate for in-space propulsion for human Mars missions. This chapter describes a thermal hydraulics design and analysis methodology developed at the NASA Marshall Space Flight Center in support of the nuclear thermal propulsion development effort. The objective of this campaign is to bridge the design methods of the Rover/NERVA era with a modern computational fluid dynamics and heat transfer methodology, to predict the thermal, fluid, and hydrogen environments of a hypothetical solid-core nuclear thermal engine: the Small Engine, designed in the 1960s. The computational methodology is based on an unstructured-grid, pressure-based, all-speeds, chemically reacting, computational fluid dynamics and heat transfer platform, while formulations of flow and heat transfer through porous and solid media were implemented to describe hydrogen flow through the channels inside the solid core. Design analyses of a single flow element and of the entire solid-core thrust chamber of the Small Engine were performed, and the results are presented herein.
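
    As a hedged back-of-envelope companion to the channel-flow analysis described above, the sketch below applies a steady-state energy balance to a single coolant channel; the numbers are illustrative placeholders, not Small Engine design values.

    ```python
    # Steady-state energy balance for one flow channel:
    # q = mdot * cp * (T_out - T_in), solved for the exit temperature.
    mdot = 0.01      # hydrogen mass flow per channel, kg/s (hypothetical)
    cp = 14300.0     # specific heat of H2, J/(kg*K), treated as constant
    T_in = 100.0     # channel inlet temperature, K (hypothetical)
    q = 3.0e5        # heat deposited into the channel, W (hypothetical)

    T_out = T_in + q / (mdot * cp)
    print(f"channel exit temperature ~ {T_out:.0f} K")
    ```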

  17. Drift design methodology and preliminary application for the Yucca Mountain Site Characterization Project; Yucca Mountain Site Characterization Project

    SciTech Connect

    Hardy, M.P.; Bauer, S.J.

    1991-12-01

    Excavation stability in an underground nuclear waste repository is required during the construction, emplacement, retrieval (if required), and closure phases to ensure worker health and safety and to prevent the development of potential pathways for radionuclide migration in the post-closure period. Stable excavations are achieved by appropriate excavation procedures, design of the room shape, design and installation of rock support and reinforcement systems, and implementation of appropriate monitoring and maintenance programs. In addition to the loads imposed by the in situ stress field, the repository drifts will be impacted by thermal loads developed after waste emplacement and, periodically, by seismic loads from naturally occurring earthquakes and underground nuclear events. A priori evaluation of stability is required for design of the ground support system, to confirm that the thermal loads are reasonable, and to support the license application process. In this report, a design methodology for assessing drift stability is presented, based on site conditions together with empirical and analytical methods. Analytical numerical methods are emphasized at this time because empirical data are unavailable for excavations in welded tuff either at elevated temperatures or under seismic loads. The analytical methodology incorporates analysis of rock masses that are systematically jointed, randomly jointed, and sparsely jointed. In situ, thermal and seismic loads are considered. Methods of evaluating the analytical results and estimating ground support requirements for the full range of expected ground conditions are outlined. The results of a preliminary application of the methodology using the limited available data are presented. 26 figs., 55 tabs.

  18. A methodology for evacuation design for urban areas: theoretical aspects and experimentation

    NASA Astrophysics Data System (ADS)

    Russo, F.; Vitetta, A.

    2009-04-01

    This paper proposes a unifying approach to the simulation and design of a transportation system under conditions of incoming safety and/or security threats. Safety and security are concerned with threats generated by very different factors, which in turn generate emergency conditions, such as the 9/11, Madrid and London attacks, the Asian tsunami, and Hurricane Katrina, considering only the last five years. In transportation systems, when an exogenous event occurs and there is sufficient time between the instant the event happens and the instant it affects the population, it is possible to reduce the negative effects by evacuating the population. In every such case it is possible to prepare the evacuation over the short and long term; for other events it is also possible to plan real-time evacuation within the general risk methodology. The development of models for emergency conditions in transportation systems has not received much attention in the literature; the main findings in this area are limited to a few public research centres and private companies. In general, there is no systematic analysis of risk theory applied to transportation systems. Very often, in practice, vulnerability and exposure in the transportation system are treated as similar variables or, in worse cases, exposure variables are treated as vulnerability variables. Models and algorithms specified and calibrated for ordinary conditions cannot be applied directly to emergency conditions under the hypotheses usually considered. This paper is developed with the following main objectives: (a) to formalize the risk problem with a clear distinction (in terms of consequences) between the definitions of vulnerability and exposure in a transportation system, offering improvements over consolidated quantitative risk analysis models, especially transportation risk analysis models (risk assessment); (b) to formalize a system

  19. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

    How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based optimized multidisciplinary design (MdD) procedures is briefly outlined. The implications of these MdD requirements upon advanced CFD codes are somewhat different from those imposed by single-discipline design. A means of satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms that can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.
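
    The cost saving at issue above comes from obtaining design gradients with an adjoint solve rather than one flow solution per design variable. The record gives no implementation, so the following is only a minimal illustrative sketch (Python, not the authors' code): for a discretized residual equation R(u, x) = A(x)u - b = 0 and objective J = c^T u, a single adjoint solve yields the full gradient dJ/dx no matter how many design variables x contains. The small linear "flow" system and every name in it are invented stand-ins.

      # Hedged sketch: adjoint gradient of J(u, x) = c^T u subject to the
      # "flow" residual R(u, x) = A(x) u - b = 0 (all matrices invented).
      import numpy as np

      n, m = 5, 3                                  # state size, design variables
      rng = np.random.default_rng(0)
      A0 = np.eye(n) * 4.0
      dA = [0.1 * rng.standard_normal((n, n)) for _ in range(m)]   # dA/dx_i
      b = rng.standard_normal(n)
      c = rng.standard_normal(n)

      def A(x):
          return A0 + sum(xi * dAi for xi, dAi in zip(x, dA))

      x = np.zeros(m)
      u = np.linalg.solve(A(x), b)                 # one flow solve

      lam = np.linalg.solve(A(x).T, c)             # one adjoint solve
      grad = np.array([-lam @ (dAi @ u) for dAi in dA])   # dJ/dx_i = -lam^T (dA_i u)

      # Finite differences need one extra flow solve per design variable.
      eps = 1e-6
      fd = np.array([(c @ np.linalg.solve(A(x + eps * e), b) - c @ u) / eps
                     for e in np.eye(m)])
      print(grad)
      print(fd)                                    # should agree closely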

  20. Switching from usual brand cigarettes to a tobacco-heating cigarette or snus: Part 1. Study design and methodology.

    PubMed

    Ogden, Michael W; Marano, Kristin M; Jones, Bobbette A; Stiles, Mitchell F

    2015-01-01

    A randomized, multi-center study was conducted to assess potential improvement in health status measures, as well as changes in biomarkers of tobacco exposure and biomarkers of biological effect, in current adult cigarette smokers switched to tobacco-heating cigarettes, snus or ultra-low machine yield tobacco-burning cigarettes (50/group) evaluated over 24 weeks. Study design, conduct and methodology are presented here along with subjects' disposition, characteristics, compliance and safety results. This design and methodology, evaluating generally healthy adult smokers over a relatively short duration, proved feasible. Findings from this randomized study provide generalized knowledge of the risk continuum among various tobacco products (ClinicalTrials.gov Identifier: NCT02061917). PMID:26525849

  2. Contentious issues in research on trafficked women working in the sex industry: study design, ethics, and methodology.

    PubMed

    Cwikel, Julie; Hoban, Elizabeth

    2005-11-01

    The trafficking of women and children for work in the globalized sex industry is a global social problem. Quality data are needed to provide a basis for legislation, policy, and programs, but first, numerous research design, ethical, and methodological problems must be addressed. Research design issues in studying women trafficked for sex work (WTSW) include how to (a) develop coalitions to fund and support research, (b) maintain a critical stance on prostitution, and therefore WTSW, (c) use multiple paradigms and methods to accurately reflect WTSW's reality, (d) present the purpose of the study, and (e) protect respondents' identities. Ethical issues include (a) complications with informed consent procedures, (b) problematic access to WTSW, (c) loss of WTSW to follow-up, (d) inability to intervene in illegal acts or human rights violations, and (e) the need to maintain trustworthiness as researchers. Methodological issues include (a) constructing representative samples, (b) managing media interest, and (c) handling incriminating materials about law enforcement and immigration.

  3. Least-cost groundwater remediation design using uncertain hydrogeological information. 1998 annual progress report

    SciTech Connect

    Pinder, G.F.

    1998-06-01

    The objective of the project is to formulate, test, and evaluate a new approach to the least-cost design of groundwater contamination containment and decontamination systems. The proposed methodology employs robust optimization, the outer-approximation method of non-linear programming, and groundwater flow and transport modeling to find the most cost-effective pump-and-treat design possible given that the physical parameters describing the groundwater reservoir are known only with uncertainty. The result is a methodology that will provide the least-cost groundwater remediation design possible for a specified set of design objectives and physical and sociological constraints. As of the end of the first year of this 3-year project, the author has developed and tested the concept of robust optimization within the framework of least-cost groundwater-contamination-containment design. The outer-approximation method has been employed in this context for the relatively simple linear-constraint case associated with the containment problem. In an effort to enhance the efficiency and applicability of this methodology, a new strategy for selecting the various realizations arising out of the Monte-Carlo underpinnings of the robust-optimization technique has been developed and tested. Based upon observations arising out of this work, a yet more promising approach has been discovered. The theoretical foundation for this most recent approach has been, and continues to be, the primary focus of the research.
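
    As a hedged illustration of the robust-optimization idea described above (not the project's actual formulation), the sketch below poses least-cost containment as a linear program in which every Monte-Carlo realization of the uncertain aquifer contributes its own block of containment constraints, so the chosen pumping rates are feasible for all sampled parameter fields. The response matrices are random stand-ins rather than the output of a groundwater model.

      # Hedged sketch: robust least-cost containment as a linear program.
      # Each Monte Carlo realization k contributes its own constraint block
      # G_k q >= h, so the design is feasible for every sampled parameter field.
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(1)
      n_wells, n_real, n_ctrl = 4, 20, 3       # wells, realizations, control points

      cost = np.ones(n_wells)                  # unit pumping cost per well
      # G_k maps pumping rates to inward gradients at control points (stand-in).
      G = 0.5 + rng.random((n_real, n_ctrl, n_wells))
      h = np.ones(n_ctrl)                      # required inward gradient

      # linprog solves min c^T q s.t. A_ub q <= b_ub; flip signs for >= constraints.
      A_ub = -G.reshape(n_real * n_ctrl, n_wells)
      b_ub = -np.tile(h, n_real)
      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_wells)
      print(res.x, res.fun)                    # robust pumping rates and total cost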

  4. Impact Evaluation of Quality Assurance in Higher Education: Methodology and Causal Designs

    ERIC Educational Resources Information Center

    Leiber, Theodor; Stensaker, Bjørn; Harvey, Lee

    2015-01-01

    In this paper, the theoretical perspectives and general methodological elements of impact evaluation of quality assurance in higher education institutions are discussed, which should be a cornerstone of quality development in higher education and contribute to improving the knowledge about the effectiveness (or ineffectiveness) of quality…

  5. Methodological Complications of Matching Designs under Real World Constraints: Lessons from a Study of Deeper Learning

    ERIC Educational Resources Information Center

    Zeiser, Kristina; Rickles, Jordan; Garet, Michael S.

    2014-01-01

    To help researchers understand potential issues one can encounter when conducting propensity matching studies in complex settings, this paper describes methodological complications faced when studying schools using deeper learning practices to improve college and career readiness. The study uses data from high schools located in six districts…

  6. Compact sieve-tray distillation column for ammonia-water absorption heat pump: Part 1 -- Design methodology

    SciTech Connect

    Anand, G.; Erickson, D.C.

    1999-07-01

    The distillation column is a key component of ammonia-water absorption units, including advanced generator-absorber heat exchange (GAX) cycle heat pumps. The design of the distillation column is critical to unit performance, size, and cost. The distillation column can be designed with random packing, structured packing, or various tray configurations. A sieve-tray distillation column is the least complicated tray design and is less costly than high-efficiency packing. Substantial literature is available on sieve-tray design and performance. However, most of the correlations and design recommendations were developed for large industrial hydrocarbon systems and are generally not directly applicable to the compact ammonia-water column discussed here. The correlations were reviewed and modified as appropriate for this application, and a sieve-tray design model was developed. This paper presents the sieve-tray design methodology for highly compact ammonia-water columns. A conceptual design of the distillation column for an 8-ton vapor exchange (VX) GAX heat pump is presented, illustrating relevant design parameters and trends. The design process revealed several issues that have to be investigated experimentally to design the final optimized rectifier. Validation of the flooding and weeping limits and of the tray/point efficiencies is of primary importance.
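
    One family of correlations of the kind reviewed above is the Souders-Brown flooding relation, U_f = C_sb * sqrt((rho_L - rho_V) / rho_V), which fixes the allowable vapor velocity and hence the tray area for a first-pass diameter estimate. The sketch below applies it with illustrative numbers; the capacity factor and operating point are assumptions, not the paper's values.

      # Hedged sketch: first-pass sieve-tray column diameter from a
      # Souders-Brown flooding correlation (illustrative numbers only).
      import math

      rho_L, rho_V = 600.0, 8.0    # liquid/vapor densities, kg/m^3 (assumed)
      V_dot = 0.010                # vapor volumetric flow, m^3/s (assumed)
      C_sb = 0.08                  # capacity factor, m/s (tray-spacing dependent)

      U_f = C_sb * math.sqrt((rho_L - rho_V) / rho_V)   # flooding velocity, m/s
      U_op = 0.75 * U_f                                 # design at 75% of flood
      area = V_dot / U_op                               # net tray area, m^2
      D = math.sqrt(4.0 * area / math.pi)               # column diameter, m
      print(f"U_f={U_f:.3f} m/s, D={D:.3f} m")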

  7. Design methodology for a confocal imaging system using an objective microlens array with an increased working distance

    PubMed Central

    Choi, Woojae; Shin, Ryung; Lim, Jiseok; Kang, Shinill

    2016-01-01

    In this study, a design methodology for a multi-optical probe confocal imaging system was developed. To develop an imaging system that has the required resolving power and imaging area, this study focused on a design methodology to create a scalable and easy-to-implement confocal imaging system. This system overcomes the limitations of the optical complexities of conventional multi-optical probe confocal imaging systems and the short working distance using a micro-objective lens module composed of two microlens arrays and a telecentric relay optical system. The micro-objective lens module was fabricated on a glass substrate using backside alignment photolithography and thermal reflow processes. To test the feasibility of the developed methodology, an optical system with a resolution of 1 μm/pixel using multi-optical probes with an array size of 10 × 10 was designed and constructed. The developed system provides a 1 mm × 1 mm field of view and a sample scanning range of 100 μm. The optical resolution was evaluated by conducting sample tests using a knife-edge detecting method. The measured lateral resolution of the system was 0.98 μm. PMID:27615370
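
    The quoted lateral resolution comes from a knife-edge test. As a hedged illustration of how such scans are commonly reduced (the paper's own procedure may differ), the sketch below fits an error-function edge-spread profile to synthetic data and reports the 10-90% edge width.

      # Hedged sketch: estimate lateral resolution from a knife-edge scan by
      # fitting an error-function edge-spread profile (synthetic data below).
      import numpy as np
      from scipy.special import erf
      from scipy.optimize import curve_fit

      def edge(x, x0, sigma, lo, hi):
          return lo + 0.5 * (hi - lo) * (1.0 + erf((x - x0) / (sigma * np.sqrt(2))))

      x = np.linspace(-3.0, 3.0, 121)                  # scan position, um
      true = edge(x, 0.0, 0.42, 0.05, 1.0)             # assumed beam sigma 0.42 um
      rng = np.random.default_rng(2)
      y = true + 0.01 * rng.standard_normal(x.size)    # "measured" intensity

      (x0, sigma, lo, hi), _ = curve_fit(edge, x, y, p0=[0, 0.5, 0, 1])
      width_10_90 = 2.563 * sigma                      # 10-90% width of a Gaussian ESF
      print(f"10-90% edge width ~ {width_10_90:.2f} um")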

  11. Design methodology for compact photonic-crystal-based wavelength division multiplexers.

    PubMed

    Liu, Victor; Jiao, Yang; Miller, David A B; Fan, Shanhui

    2011-02-15

    We present an extremely compact wavelength division multiplexer design, as well as a general framework for designing and optimizing frequency-selective devices embedded in photonic crystals satisfying arbitrary design constraints. Our method is based on the Dirichlet-to-Neumann simulation method and uses low-rank updates to the system to efficiently scan through many device designs.
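
    The efficiency claim rests on a standard linear-algebra fact: a localized design change perturbs the system matrix by a low-rank term, so each candidate design can be evaluated through a Sherman-Morrison-Woodbury update instead of a fresh factorization. The sketch below demonstrates that step on a generic dense system; it is not the authors' Dirichlet-to-Neumann code, and all matrices are invented.

      # Hedged sketch: scan many candidate designs that differ from a base
      # system A by a rank-k term U V^T, reusing one solve of A via the
      # Sherman-Morrison-Woodbury identity:
      #   (A + U V^T)^{-1} b = A^{-1} b - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1} b
      import numpy as np

      rng = np.random.default_rng(3)
      n, k = 200, 2
      A = np.eye(n) * 3.0 + 0.01 * rng.standard_normal((n, n))
      b = rng.standard_normal(n)

      lu_solve = np.linalg.solve          # stand-in for a stored factorization
      Ainv_b = lu_solve(A, b)             # factor/solve once for the base design

      def solve_updated(U, V):
          Ainv_U = lu_solve(A, U)                        # n x k solve, cheap for small k
          S = np.eye(k) + V.T @ Ainv_U                   # k x k capacitance matrix
          return Ainv_b - Ainv_U @ np.linalg.solve(S, V.T @ Ainv_b)

      for _ in range(5):                                 # loop over candidate designs
          U = 0.1 * rng.standard_normal((n, k))
          V = 0.1 * rng.standard_normal((n, k))
          x = solve_updated(U, V)
          direct = np.linalg.solve(A + U @ V.T, b)       # reference full solve
          assert np.allclose(x, direct)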

  12. Progress in integrated-circuit horn antennas for receiver applications. Part 1: Antenna design

    NASA Technical Reports Server (NTRS)

    Eleftheriades, George V.; Ali-Ahmad, Walid Y.; Rebeiz, Gabriel M.

    1992-01-01

    The purpose of this work is to present a systematic method for the design of multimode quasi-integrated horn antennas. The design methodology is based on the Gaussian beam approach, and the structures are optimized for achieving maximum fundamental Gaussian coupling efficiency. For this purpose, a hybrid technique is employed in which the integrated part of the antennas is treated using full-wave analysis, whereas the machined part is treated using an approximate method. This results in a simple and efficient design process. The developed design procedure has been applied to the design of 20, 23, and 25 dB quasi-integrated horn antennas, all with a Gaussian coupling efficiency exceeding 97 percent. The designed antennas have been tested and characterized using both full-wave analysis and 90 GHz/370 GHz measurements.

  13. Grounded Theory as a Methodology to Design Teaching Strategies for Historically Informed Musical Performance

    ERIC Educational Resources Information Center

    Mateos-Moreno, Daniel; Alcaraz-Iborra, Mario

    2013-01-01

    Our work highlights the necessity of revising the materials employed in instrumental education, which are systematically based on the progressive development of technical abilities but address issues relating to the interpretation of different periods and styles only transversally, without a structured sequence of contents. In order to elaborate…

  14. An optimization-based integrated controls-structures design methodology for flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Joshi, Suresh M.; Armstrong, Ernest S.

    1993-01-01

    An approach for optimization-based integrated controls-structures design is presented for a class of flexible spacecraft that require fine attitude pointing and vibration suppression. The integrated design problem is posed as the simultaneous optimization of both structural and control design variables. The approach is demonstrated by application to the integrated design of a generic space platform and of a model of a ground-based flexible structure. The numerical results obtained indicate that the integrated design approach can yield spacecraft designs with substantially superior performance to a conventional design in which the structural and control designs are performed sequentially. For example, a 40-percent reduction in the pointing error is observed along with a slight reduction in mass, or an almost twofold increase in the controlled performance is obtained with more than a 5-percent reduction in the overall mass of the spacecraft (a reduction of hundreds of kilograms).
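
    A hedged toy version of the simultaneous structure/control optimization can convey the idea: for a one-mode flexible model, a structural stiffness parameter and an LQR control weight are searched together to minimize RMS control power while an RMS pointing-error surrogate is enforced by penalty. The model, weights, and numbers are invented for illustration; only the pattern (structural and control variables in one optimization loop) reflects the record.

      # Hedged sketch: structural stiffness kx and LQR weight r optimized
      # together on a toy one-mode flexible model (all numbers invented).
      import numpy as np
      from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov
      from scipy.optimize import minimize

      def closed_loop_cost(z):
          kx = float(np.clip(z[0], 0.5, 200.0))    # keep the toy structure physical
          m = 1.0 + 0.05 * kx                      # assumed mass penalty of stiffening
          A = np.array([[0.0, 1.0], [-kx / m, -0.02 / m]])
          B = np.array([[0.0], [1.0 / m]])
          Q = np.diag([1.0, 0.0])                  # weight the pointing (position) error
          R = np.array([[np.exp(z[1])]])
          P = solve_continuous_are(A, B, Q, R)
          K = np.linalg.solve(R, B.T @ P)          # LQR gain
          Acl = A - B @ K
          W = np.diag([0.0, 0.1])                  # persistent disturbance intensity
          X = solve_continuous_lyapunov(Acl, -W)   # closed-loop state covariance
          rms_point = np.sqrt(X[0, 0])
          rms_power = (K @ X @ K.T).item()
          return rms_power + 1e3 * max(0.0, rms_point - 0.05) ** 2

      res = minimize(closed_loop_cost, x0=[10.0, 0.0], method="Nelder-Mead")
      print(res.x, res.fun)                        # joint structural/control optimum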

  15. The dementia and disability project in Thai elderly: rational, design, methodology and early results

    PubMed Central

    2013-01-01

    were free of chronic diseases. The treatment gap (indicating those who were untreated or inadequately treated) for diabetes mellitus and hypertension in Thai elders in this study was 37% and 55.5%, respectively. 62.6% of Thai elders carry the ApoE3E3 allele. The prevalence of a positive ApoE4 genotype in this study is 22.85%. 38.6% of the Thai elders who underwent brain MRI have moderate to severe white matter lesions. Conclusion: The large and comprehensive set of measurements in DDP allows a wide-ranging explanation of the functional and clinical features to be investigated in relation to white matter lesions or cortical atrophy of the brain in the Thai elderly population. An almost two-year follow-up was made available to those with MCI and dementia and to some of the cognitively normal elderly. The longitudinal design will provide great understanding of the possible contributors to disability in the elderly and to the progression of cognitive decline in Thai elders. PMID:23305293

  16. Progressive Development of Groundwater Sources in Dryland Basins with Vague Hydro-geological Information - Methodology and Examples from the Middle East

    NASA Astrophysics Data System (ADS)

    Adar, E. M.; Issar, A. S.

    2012-04-01

    The history of the Middle East has been influenced by past global climatic changes. Warm periods caused droughts, which brought desertification, migrations, and wars; cold periods were humid and brought abundance and the settling of the deserts' fringes. The forecast based on this correlation is that the present global warming will cause the drying up of the Middle East. As in the past, this negative impact should be mitigated by utilizing the long-term storage of the groundwater resources. This will involve deep drilling and pumping and modern irrigation methods in the framework of a new policy of "Progressive Development", which will entail the utilization of hitherto undeveloped natural water resources beyond present water replenishment. While the one-time groundwater reserves are being utilized, a comprehensive long-term progressive development master plan for the Middle East will be prepared. The Progressive Development methodology entails the step-by-step development of all existing water resources, such as treated effluents, desalinated brackish groundwater, and, finally, desalinated seawater. Key words: climate change, desertification, groundwater, irrigation, desalination

  17. Discovering Actionable Knowledge about Community Telecommunications Systems: Concepts and Case Applications of Design Studio Methodology.

    ERIC Educational Resources Information Center

    Wells, Kimberly; Horan, Thomas

    2001-01-01

    Explores the potential utility of design principles and the design studio for enacting information infrastructures essential for individuals and communities to thrive in the knowledge economy. The proposed approach, Digital Places Design, furnishes those engaged in planning and managing telecommunications infrastructure with a tool to realize…

  18. Applying Item Response Theory methods to design a learning progression-based science assessment

    NASA Astrophysics Data System (ADS)

    Chen, Jing

    Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1) how to use items in different formats to classify students into levels on the learning progression, (2) how to design a test to give good information about students' progress through the learning progression of a particular construct and (3) what characteristics of test items support their use for assessing students' levels. Data used for this study were collected from 1500 elementary and secondary school students during 2009-2010. The written assessment was developed in several formats such as Constructed Response (CR) items, Ordered Multiple Choice (OMC) and Multiple True or False (MTF) items. The following are the main findings from this study. The OMC, MTF and CR items might measure different components of the construct. A single construct explained most of the variance in students' performances. However, additional dimensions in terms of item format can explain a certain amount of the variance in student performance. So additional dimensions need to be considered when we want to capture the differences in students' performances on different types of items targeting the understanding of the same underlying progression. Items in each item format need to be improved in certain ways to classify students more accurately into the learning progression levels. This study establishes some general steps that can be followed to design other learning progression-based tests as well. For example, first, the boundaries between levels on the IRT scale can be defined by using the means of the item thresholds across a set of good items. Second, items in multiple formats can be selected to achieve the information criterion at all
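
    The level-boundary rule sketched above (cut points set at the means of the item thresholds across a set of good items) is easy to make concrete. In the hedged sketch below, the threshold parameters and ability estimates are invented; in practice both would come from an IRT calibration.

      # Hedged sketch: classify students into learning-progression levels using
      # boundaries defined as the means of item thresholds on the IRT scale.
      import numpy as np

      # Invented threshold parameters (logits) for three good items; column j
      # holds each item's threshold between level j and level j+1.
      item_thresholds = np.array([
          [-1.3, 0.2, 1.4],
          [-1.1, 0.0, 1.6],
          [-0.9, 0.3, 1.5],
      ])
      boundaries = item_thresholds.mean(axis=0)     # level cut points on the theta scale

      theta = np.array([-1.5, -0.4, 0.8, 2.0])      # students' ability estimates
      levels = np.searchsorted(boundaries, theta) + 1
      print(boundaries, levels)                     # e.g., levels 1 through 4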

  19. Design methodology for micro-discrete planar optics with minimum illumination loss for an extended source.

    PubMed

    Shim, Jongmyeong; Park, Changsu; Lee, Jinhyung; Kang, Shinill

    2016-08-01

    Recently, studies have examined techniques for modeling the light distribution of light-emitting diodes (LEDs) for various applications owing to their low power consumption, longevity, and light weight. The energy mapping technique, a design method that matches the energy distributions of an LED light source and a target area, has been the focus of active research because of its design efficiency and accuracy. However, these studies have not considered the effects of the emitting area of the LED source, so there are limits to the design accuracy for small, high-power applications with a short distance between the light source and the optical system. A design method that compensates for the light distribution of an extended source after an initial optics design based on a point source was proposed to overcome such limits, but its time-consuming process and the limited design accuracy of its multiple iterations raised the need for a new design method that considers an extended source in the initial design stage. This study proposed a method for designing discrete planar optics that controls the light distribution and minimizes the optical loss with an extended source, and verified the proposed method experimentally. First, the extended source was modeled theoretically, and a design method for discrete planar optics with the optimum groove angle through energy mapping was proposed. To verify the design method, discrete planar optics were designed for LED flash illumination applications. In addition, discrete planar optics for LED illumination were designed and fabricated to create a uniform illuminance distribution. Optical characterization of these structures showed that the design was optimal; i.e., we plotted the optical losses as a function of the groove angle and found a clear minimum. Simulations and measurements showed that an efficient optical design was achieved for an extended source.
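
    The energy-mapping step named above can be illustrated under simplifying assumptions (rotationally symmetric far field and a Lambertian point-source model, which the paper goes beyond by treating an extended source): equal fractions of cumulative source flux are assigned to equal fractions of cumulative target flux, giving the ray mapping from emission angle to target radius from which facet (groove) angles would then be derived. All numbers below are illustrative.

      # Hedged sketch: energy mapping between a Lambertian source and a uniform
      # circular target. Equal cumulative-flux fractions define the ray mapping
      # theta -> r; groove angles would be derived from this mapping afterwards.
      import numpy as np

      theta_max = np.radians(60.0)       # collected emission half-angle (assumed)
      R_target = 10.0                    # target radius, mm (assumed)
      theta = np.linspace(0.0, theta_max, 200)

      # Lambertian source: I(theta) ~ cos(theta); flux within theta ~ sin^2(theta).
      F_src = np.sin(theta) ** 2 / np.sin(theta_max) ** 2

      # Uniform target: flux within radius r ~ (r / R)^2, so r = R * sqrt(F).
      r = R_target * np.sqrt(F_src)      # target radius hit by the ray at angle theta

      for t, ri in zip(theta[::50], r[::50]):
          print(f"theta={np.degrees(t):5.1f} deg -> r={ri:5.2f} mm")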

  1. A Sizing Methodology for the Conceptual Design of Blended-Wing-Body Transports. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Kimmel, William M. (Technical Monitor); Bradley, Kevin R.

    2004-01-01

    This paper describes the development of a methodology for sizing Blended-Wing-Body (BWB) transports and how the capabilities of the Flight Optimization System (FLOPS) have been expanded using that methodology. In this approach, BWB transports are sized based on the number of passengers in each class that must fit inside the centerbody or pressurized vessel. Weight estimation equations for this centerbody structure were developed using Finite Element Analysis (FEA). This paper shows how the sizing methodology has been incorporated into FLOPS to enable the design and analysis of BWB transports. Previous versions of FLOPS could not represent or analyze BWB configurations in any reliable way. The expanded capabilities allow the design and analysis of a 200- to 450-passenger BWB transport, or the analysis of a BWB transport for which the geometry is already known. The modifications to FLOPS resulted in differences of less than 4 percent in the ramp weight of a BWB transport in this range when compared to previous studies performed by NASA and Boeing.

  2. Toward a methodology of withdrawal designs for the assessment of response maintenance.

    PubMed Central

    Rusch, F R; Kazdin, A E

    1981-01-01

    Single-case experimental designs have advanced considerably in the evaluation of functional relationships between interventions and behavior change. The systematic investigation of response maintenance once intervention effects have been demonstrated has, however, received relatively little attention. The lack of research on maintenance may stem in part from the paucity of design options that systematically evaluate factors that contribute to maintenance. The present paper discusses three design options potentially useful for the investigation of response maintenance. These include: (a) the sequential-withdrawal, (b) the partial-withdrawal, and (c) the partial-sequential withdrawal designs. Each design is illustrated and potential limitations are discussed. PMID:7287597

  3. Application of modern control design methodology to oblique wing research aircraft

    NASA Technical Reports Server (NTRS)

    Vincent, James H.

    1991-01-01

    A Linear Quadratic Regulator synthesis technique was used to design an explicit model following control system for the Oblique Wing Research Aircraft (OWRA). The forward path model (Maneuver Command Generator) was designed to incorporate the desired flying qualities and response decoupling. The LQR synthesis was based on the use of generalized controls, and it was structured to provide a proportional/integral error regulator with feedforward compensation. An unexpected consequence of this design approach was the ability to decouple the control synthesis into separate longitudinal and lateral directional designs. Longitudinal and lateral directional control laws were generated for each of the nine design flight conditions, and gain scheduling requirements were addressed. A fully coupled 6 degree of freedom open loop model of the OWRA along with the longitudinal and lateral directional control laws was used to assess the closed loop performance of the design. Evaluations were performed for each of the nine design flight conditions.

  4. Methodology for the conceptual design of a robust and opportunistic system-of-systems

    NASA Astrophysics Data System (ADS)

    Talley, Diana Noonan

    Systems are becoming more complicated, complex, and interrelated. Designers have recognized the need to develop systems from a holistic perspective and to design them as Systems-of-Systems (SoS). The design of an SoS, especially in the conceptual design phase, is generally characterized by significant uncertainty. As a result, it is possible for all three types of uncertainty (aleatory, epistemic, and error) and the associated factors of uncertainty (randomness, sampling, confusion, conflict, inaccuracy, ambiguity, vagueness, coarseness, and simplification) to affect the design process. While there are a number of existing SoS design methods, several gaps have been identified: the ability to model all of the factors of uncertainty at varying levels of knowledge; the ability to consider both the pernicious and propitious aspects of uncertainty; and the ability to determine the value of reducing the uncertainty in the design process. While there are numerous uncertainty modeling theories, no one theory can effectively model every kind of uncertainty. This research presents a Hybrid Uncertainty Modeling Method (HUMM) that integrates techniques from the following theories: Probability Theory, Evidence Theory, Fuzzy Set Theory, and Info-Gap Theory. The HUMM is capable of modeling all of the different factors of uncertainty and can model the uncertainty for multiple levels of knowledge. In the design process, there are both pernicious and propitious characteristics associated with the uncertainty. Existing design methods typically focus on developing robust designs that are insensitive to the associated uncertainty; these methods do not capitalize on the possibility of maximizing the potential benefit associated with the uncertainty. This research demonstrates how these deficiencies can be overcome by identifying the most robust and opportunistic design. In a design process it is possible that the most robust and opportunistic design will not be selected from the set

  5. Games and Diabetes: A Review Investigating Theoretical Frameworks, Evaluation Methodologies, and Opportunities for Design Grounded in Learning Theories.

    PubMed

    Lazem, Shaimaa; Webster, Mary; Holmes, Wayne; Wolf, Motje

    2015-09-02

    Here we review 18 articles that describe the design and evaluation of 1 or more games for diabetes from technical, methodological, and theoretical perspectives. We undertook searches covering the period 2010 to May 2015 in the ACM, IEEE, Journal of Medical Internet Research, Studies in Health Technology and Informatics, and Google Scholar online databases using the keywords "children," "computer games," "diabetes," "games," "type 1," and "type 2" in various Boolean combinations. The review sets out to establish, for future research, an understanding of the current landscape of digital games designed for children with diabetes. We briefly explored the use and impact of well-established learning theories in such games. The most frequently mentioned theoretical frameworks were social cognitive theory and social constructivism. Due to the limitations of the reported evaluation methodologies, little evidence was found to support the strong promise of games for diabetes. Furthermore, we could not establish a relation between design features and the game outcomes. We argue that an in-depth discussion about the extent to which learning theories could and should be manifested in the design decisions is required.

  7. The research progress on Hodograph Method of aerodynamic design at Tsinghua University

    NASA Technical Reports Server (NTRS)

    Chen, Zuoyi; Guo, Jingrong

    1991-01-01

    Progress in the use of the Hodograph method of aerodynamic design is discussed. It was found that there are some restrictions on the application of Hodograph design to transonic turbine and compressor cascades. The Hodograph method is suitable not only for the transonic turbine cascade but also for the transonic compressor cascade. The three-dimensional Hodograph method will be developed after the basic equation for it has been obtained. As an example, the use of the method to design transonic turbine and compressor cascades is discussed.

  8. Duct injection technology prototype development: Scale-up methodology and engineering design criteria

    SciTech Connect

    Not Available

    1991-04-01

    The objective of the Duct Injection Technology Prototype Development project is to develop a sound design basis for applying duct injection technology as a post-combustion SO₂ emissions control method to existing, pre-NSPS, coal-fired power plants. This report is divided into five major topics: (1) design criteria; (2) engineering drawings; (3) equipment sizing and design; (4) plant and equipment arrangement considerations; and (5) equipment bid specification guidelines.

  9. The theory and methodology of capturing and representing the design process and its application to the task of rapid redesign

    NASA Astrophysics Data System (ADS)

    Nii, Kendall M.

    The paradigm under which engineering design is being performed in the aerospace industry is changing. There is an increased emphasis on a "faster, better, and cheaper" way of doing business. Designers are tasked with developing a better product, in a shorter time, with less money. Engineers are continually trying to improve their products, lower their costs, and reduce their schedules. At first glance, it might seem difficult if not impossible to perform these three tasks simultaneously and achieve order-of-magnitude improvements in each area. Indeed, it might well be impossible for an engineer using only traditional tools and techniques. However, there is a new tool, known as design capture, available to the designer. A design capture system can aid the designer in a variety of ways. One specific use for a design capture system is to aid the designer in performing rapid redesign. This thesis presents a new methodology for a Design Capture System (DCS) which can aid the designer in performing rapid redesign. The Design Capture for Rapid Redesign (DCARRD) method facilitates rapid redesign in three ways: it allows the designer to assess the impact of changing an initial requirement, it allows the designer to assess the impact of changing a decision, and it enhances the ability of the designer to assess the impact of a completely new requirement. The DCARRD method was implemented in an HTML-based design capture system accessible through a Web browser. This implementation demonstrates the feasibility of the DCARRD method. The most important features of DCARRD are that it is focused on performing rapid redesign, it places the design decisions within the framework of the design process, it is simple to use and implement, and it has the ability to track subsystem baselines. The many complex issues surrounding testing of design tools in general, and DCARRD in particular, are discussed at length. There are a number of complex issues which must be addressed

  10. Design and application of complementary educational resources for self-learning methodology

    NASA Astrophysics Data System (ADS)

    Andrés Gilarranz Casado, Carlos; Rodriguez-Sinobas, Leonor

    2016-04-01

    The main goal of this work is to enhance students' self-learning in subjects regarding irrigation and its technology. The use of visual media (video recordings) during the lectures (master classes and practicum) helps students understand the scope of the course, since they can watch the recorded material at any time and as many times as they wish. The study comprised two parts. In the first, lectures were filmed inside the classroom during one semester (16 weeks, four hours per week) in the course "Irrigation Systems and Technology", which is taught at the Technical University of Madrid. In total, 200 videos, approximately 12 min long, were recorded. Since YouTube is a worldwide platform commonly used by students and professors, the videos were uploaded there, and the URLs were inserted into the Moodle platform that hosts the course materials. In the second part, the videos were edited and formatted, taking special care to maintain image and audio quality. Finally, thirty videos were produced, each lasting between 30 and 45 min, focusing on the main areas of the course and containing a clear and brief explanation of their basis. A survey was administered at the end of the semester in order to assess the students' opinion of the methodology. In the questionnaire, the students highlighted the key aspects of the learning process and, in general, were very satisfied with the methodology.

  11. Optical binary de Bruijn networks for massively parallel computing: design methodology and feasibility study

    NASA Astrophysics Data System (ADS)

    Louri, Ahmed; Sung, Hongki

    1995-10-01

    The interconnection network structure can be the deciding and limiting factor in the cost and the performance of parallel computers. One of the most popular point-to-point interconnection networks for parallel computers today is the hypercube. The regularity, logarithmic diameter, symmetry, high connectivity, fault tolerance, simple routing, and reconfigurability (easy embedding of other network topologies) of the hypercube make it a very attractive choice for parallel computers. Unfortunately, the hypercube possesses a major drawback: the number of links per node increases as the network grows in size. As an alternative to the hypercube, the binary de Bruijn (BdB) network has recently received much attention. The BdB not only provides a logarithmic diameter, fault tolerance, and simple routing but also requires fewer links than the hypercube for the same network size. Additionally, a major advantage of the BdB is that the number of edges per node is independent of the network size. This makes it very desirable for large-scale parallel systems. However, because of its asymmetric nature and global connectivity, it poses a major challenge for VLSI technology. Optics, owing to its three-dimensional and global-connectivity nature, seems very suitable for implementing BdB networks. We present an implementation methodology for optical BdB networks. The distinctive feature of the proposed implementation methodology is the partitionability of the network into a few primitive operations that can be implemented efficiently. We further show the feasibility of the
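
    To make the constant-degree property concrete, the hedged sketch below (not from the paper) builds the directed binary de Bruijn graph on N = 2^n nodes, where node i links to (2i) mod N and (2i+1) mod N, and contrasts its fixed out-degree of 2 with the hypercube's log2(N) links per node.

      # Hedged sketch: binary de Bruijn graph BdB(n) on N = 2**n nodes.
      # Node i connects to (2i) mod N and (2i + 1) mod N: out-degree is always 2,
      # whereas an N-node hypercube needs log2(N) links per node.
      import math

      def de_bruijn_edges(n):
          N = 2 ** n
          return {i: [(2 * i) % N, (2 * i + 1) % N] for i in range(N)}

      for n in (3, 10, 20):
          N = 2 ** n
          deg_bdb = 2                            # independent of N
          deg_hypercube = int(math.log2(N))      # grows with network size
          print(f"N={N:>8}: de Bruijn degree {deg_bdb}, hypercube degree {deg_hypercube}")

      # Routing in BdB(n) shifts in the destination's bits one at a time,
      # giving a path of at most n hops (logarithmic diameter).
      edges = de_bruijn_edges(3)
      print(edges[5])                            # node 5 -> nodes 2 and 3 (mod 8)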

  12. Application of the MIAS methodology in design of the data acquisition system for wastewater treatment plant

    NASA Astrophysics Data System (ADS)

    Ćwikła, G.; Krenczyk, D.; Kampa, A.; Gołda, G.

    2015-11-01

    This paper presents the application of the MIAS (Manufacturing Information Acquisition System) methodology to develop a customized data acquisition system supporting management of the Central Wastewater Treatment Plant (CWWTP) in Gliwice, Poland, an example of a production system running continuous-flow, automated production processes. Access to current data on the state of the production system is key to the efficient management of a company, allowing fast reaction to, or even anticipation of, equipment problems and reduction of waste. An overview of both the analysis and the synthesis of organisational solutions, data sources, data pre-processing and communication interfaces, realised according to the proposed MIAS methodology, is presented. The analysis stage covered, among others: the organisational structure of the company, the IT systems used in the company, the specifics of the technological processes, machines and equipment, the structure of control systems, the assignments of crew members, and the materials used in the technological processes. This paper also presents the results of the synthesis stage of the technical and organisational solutions of MIAS for CWWTP, including proposed solutions covering the MIAS architecture and connections with other IT systems, currently available and newly created data sources in the production system, data pre-processing procedures, and the necessary communication interfaces.

  13. SEISMIC DESIGN REQUIREMENTS SELECTION METHODOLOGY FOR THE SLUDGE TREATMENT & M-91 SOLID WASTE PROCESSING FACILITIES PROJECTS

    SciTech Connect

    RYAN GW

    2008-04-25

    In complying with direction from the U.S. Department of Energy (DOE), Richland Operations Office (RL) (07-KBC-0055, 'Direction Associated with Implementation of DOE-STD-1189 for the Sludge Treatment Project,' and 08-SED-0063, 'RL Action on the Safety Design Strategy (SDS) for Obtaining Additional Solid Waste Processing Capabilities (M-91 Project) and Use of Draft DOE-STD-1189-YR'), it has been determined that the seismic design requirements currently in the Project Hanford Management Contract (PHMC) will be modified by DOE-STD-1189, Integration of Safety into the Design Process (March 2007 draft), for these two key PHMC projects. Seismic design requirements for other PHMC facilities and projects will remain unchanged. Considering the current early Critical Decision (CD) phases of both the Sludge Treatment Project (STP) and the Solid Waste Processing Facilities (M-91) Project and a strong intent to avoid potentially costly re-work of both engineering and nuclear safety analyses, this document describes how Fluor Hanford, Inc. (FH) will maintain compliance with the PHMC by considering both the current seismic standards referenced by DOE O 420.1B, Facility Safety, and draft DOE-STD-1189 (i.e., ASCE/SEI 43-05, Seismic Design Criteria for Structures, Systems, and Components in Nuclear Facilities, and ANSI/ANS 2.26-2004, Categorization of Nuclear Facility Structures, Systems and Components for Seismic Design, as modified by draft DOE-STD-1189) to choose the criteria that will result in the most conservative seismic design categorization and engineering design. Following the process described in this document will result in a conservative seismic design categorization and design products. This approach is expected to resolve discrepancies between the existing and new requirements and reduce the risk that project designs and analyses will require revision when the draft DOE-STD-1189 is finalized.
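
    The selection rule described above (evaluate both criteria sets, then adopt whichever yields the more conservative seismic design categorization) reduces to a small comparison, sketched below with an illustrative category scale and example values that are not taken from the document.

      # Hedged sketch: pick the more conservative seismic design category (SDC)
      # from two criteria sets. The ordering and sample values are illustrative.
      SDC_ORDER = ["SDC-1", "SDC-2", "SDC-3", "SDC-4", "SDC-5"]  # least -> most severe

      def most_conservative(*categories):
          # A higher index on the ordered scale means a more conservative design basis.
          return max(categories, key=SDC_ORDER.index)

      cat_doe_420 = "SDC-2"       # categorization per the current DOE O 420.1B path (example)
      cat_std_1189 = "SDC-3"      # categorization per the draft DOE-STD-1189 path (example)
      print(most_conservative(cat_doe_420, cat_std_1189))   # -> SDC-3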

  14. MODeLeR: A Virtual Constructivist Learning Environment and Methodology for Object-Oriented Design

    ERIC Educational Resources Information Center

    Coffey, John W.; Koonce, Robert

    2008-01-01

    This article contains a description of the organization and method of use of an active learning environment named MODeLeR (Multimedia Object Design Learning Resource), a tool designed to facilitate the learning of concepts pertaining to object modeling with the Unified Modeling Language (UML). MODeLeR was created to provide an authentic,…

  15. Theories and Research Methodologies for Design-Based Implementation Research: Examples from Four Cases

    ERIC Educational Resources Information Center

    Russell, Jennifer Lin; Jackson, Kara; Krumm, Andrew E.; Frank, Kenneth A.

    2013-01-01

    Design-Based Implementation Research is the process of engaging "learning scientists, policy researchers, and practitioners in a model of collaborative, iterative, and systematic research and development" designed to address persistent problems of teaching and learning. Addressing persistent problems of teaching and learning requires…

  16. Experimental validation of optimization-based integrated controls-structures design methodology for flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Joshi, Suresh M.; Walz, Joseph E.

    1993-01-01

    An optimization-based integrated design approach for flexible space structures is experimentally validated using three types of dissipative controllers: static, dynamic, and LQG dissipative controllers. The nominal phase-0 controls-structures-interaction evolutionary model (CEM) structure is redesigned to minimize the average control power required to maintain a specified root-mean-square line-of-sight pointing error under persistent disturbances. The redesigned structure, the phase-1 CEM, was assembled and tested against the phase-0 CEM. It is analytically and experimentally demonstrated that integrated controls-structures design is substantially superior to that obtained through the traditional sequential approach. The capability of a software design tool based on an automated design procedure in a unified environment for structural and control designs is demonstrated.

  17. The conceptual development of a methodology for solving multi-objective hierarchical thermal design problems

    SciTech Connect

    Bascaran, E.; Bannerot, R.; Mistree, F.

    1987-01-01

    The design of thermal systems is complicated by changing operating conditions, the large number of alternatives, the strong dependence of thermal properties on temperature and pressure, and sometimes the lack of a good understanding of the basic phenomena involved. A conceptual development is presented for organizing multi-objective hierarchical thermal design problems into a series of decision support problems which are compatible and solvable with DSIDES, a software system under development in the Systems Design Laboratory in the Department of Mechanical Engineering at the University of Houston. The software is currently being used to support a variety of mechanical design problems, including ships and airplanes. In this paper, a hierarchical coupled thermal problem is presented and solved by way of example.

  18. Challenges in designing, conducting, and reporting oral health behavioral intervention studies in primary school age children: methodological issues

    PubMed Central

    Cooper, Anna Mary; Coffey, Margaret; Dugdill, Lindsey

    2014-01-01

    Often within oral health, clinical outcome measures dominate trial design rather than behavioral outcome measures, and often there is a reliance on proxy self-reporting of children’s behavior with no corroboration through triangulation of measures. The complexity of the interventions involved in oral health intervention is often overlooked in trial design, and more flexible pragmatic designs that take account of the research context may be more appropriate. Some of the limitations in oral health behavioral intervention studies (trials) in primary school age children were reported in a recently published Cochrane review. This paper aims to critically discuss the findings of a recent Cochrane review in terms of the methodological implications that arise for future design, development, measurement, and reporting of oral health trials in primary school age children. Key components of the UK Medical Research Council’s framework for the design and evaluation of complex interventions are discussed in relation to using taxonomies of behavior change. This paper is not designed to be a definitive guide but aims to bring learning from other areas of public health and health promotion into dental public health. Ultimately, the aim is to aid the design of more successful interventions that produce long-term behavioral changes in children in relation to toothbrushing and nighttime sugar snacking. PMID:27774028

  19. The inclusion of ergonomic tools in the informational, conceptual and preliminary phases of the product design methodology.

    PubMed

    Medeiros, Ivan Luiz de; Batiz, Eduardo Concepción

    2012-01-01

    The process of product development has received special attention as it is recognized as a source of competitive gain. Through its systematic use, companies reduce costs, increase quality and decrease development time. However, one can find products being launched on the market that cause dissatisfaction to their users, and in consequence, if customers feel harmed or injured they will no longer purchase a product from the same brand. This concerns only the commercial aspect; usually the danger of an accident or injury is not even considered. This paper, which forms the basis of a master's dissertation, used literature research to build its repertoire, analyzing the methodologies applied by product design engineers, designers and ergonomists. The results of the analysis demonstrate the inefficiency of the design methodologies in addressing ergonomic issues. The contribution of this work lies in the suggestion to include ergonomic tools in all phases of product development and in the presentation of a table of tools that points out the most suitable time of application and the expected results.

  20. Mixing design for enzymatic hydrolysis of sugarcane bagasse: methodology for selection of impeller configuration.

    PubMed

    Corrêa, Luciano Jacob; Badino, Alberto Colli; Cruz, Antonio José Gonçalves

    2016-02-01

    One of the major process bottlenecks for viable industrial production of second-generation ethanol is related to technical-economic difficulties in the hydrolysis step. The development of a methodology to choose the best configuration of impellers, towards improving mass transfer and hydrolysis yield together with low power consumption, is important to make the process cost-effective. In this work, four dual impeller configurations (DICs) were evaluated during hydrolysis of sugarcane bagasse (SCB) experiments in a stirred tank reactor (3 L). The systems tested were dual Rushton turbine impellers (DIC1), Rushton and elephant ear (down-pumping) turbines (DIC2), Rushton and elephant ear (up-pumping) turbines (DIC3), and down-pumping and up-pumping elephant ear turbines (DIC4). The experiments were conducted during 96 h, using 10 % (m/v) SCB, pH 4.8, 50 °C, 10 FPU/g biomass, 470 rpm. The mixing time was successfully used as the characteristic parameter to select the best impeller configuration. Rheological parameters were determined using a rotational rheometer, and the power consumptions of the four DICs were measured on-line with a dynamometer. The values obtained for the energetic efficiency (the ratio between the cellulose-to-glucose conversion and the total energy) showed that the proposed methodology was successful in choosing a suitable configuration of impellers, with DIC4 achieving approximately three times higher energetic efficiency than DIC1. Furthermore, a scale-up protocol (scale-up factor of 1000) for the enzymatic hydrolysis reactor was proposed. PMID:26650719
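
    The selection metric above, energetic efficiency, is the ratio of cellulose-to-glucose conversion to total stirring energy. A hedged back-of-envelope sketch (the power draws and conversions below are invented placeholders, not the paper's data) shows how the comparison between impeller configurations is computed from measured power and conversion.

      # Hedged sketch: energetic efficiency = conversion / total stirring energy.
      # Power values and conversions below are invented placeholders.
      HOURS = 96.0                       # hydrolysis duration, h

      configs = {
          "DIC1 (Rushton + Rushton)": {"power_W": 9.0, "conversion": 0.60},
          "DIC4 (EE down + EE up)":   {"power_W": 3.2, "conversion": 0.64},
      }

      for name, d in configs.items():
          energy_kWh = d["power_W"] * HOURS / 1000.0       # total energy, kWh
          eff = d["conversion"] / energy_kWh               # conversion per kWh
          print(f"{name}: E={energy_kWh:.2f} kWh, efficiency={eff:.2f} /kWh")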

  2. IODC98 optical design problem: method of progressing from an achromatic to an apochromatic design

    SciTech Connect

    Seppala, L.G.

    1998-07-20

    A general method of designing an apochromatic lens using a triplet of special glasses, based on the buried-surfaces concept, can be outlined. First, one chooses a starting point which is already achromatic. Second, a thick plate or shell is added to the design, where the plate or shell has an index of refraction of 1.62, similar to the average index of the special glass triplet (for example, PSK53A, KZFS1, and TIF6). Third, the lens is reoptimized to an achromatic design. Fourth, the single element is replaced by the special glass triplet. Fifth, only the internal surfaces of the triplet are varied to correct all three wavelengths. Although this step produces little improvement, it serves to stabilize further optimization. Sixth and finally, all potential variables are used to fully optimize the apochromatic lens. Microscope objectives, for example, could be designed using this technique. The important concept is the use of multiple buried surfaces, each interface involving a special glass, once an achromatic design has been achieved. This extension relieves the restriction that all special glasses have a common index of refraction and allows a wider variety of special glasses to be used. However, it is still desirable to use glasses which form a large triangle on the P versus V diagram.
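
    The closing criterion, preferring glass triplets that span a large triangle on the P versus V (relative partial dispersion versus Abbe number) diagram, can be scored directly. A minimal sketch follows; the (V, P) values are placeholders, not catalog data.

```python
# Score candidate special-glass triplets by the area of the triangle they
# form on the P (partial dispersion) versus V (Abbe number) diagram.
# The (V, P) pairs below are assumed placeholders, not catalog values.

def triangle_area(g1, g2, g3):
    (x1, y1), (x2, y2), (x3, y3) = g1, g2, g3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

glasses = {
    "PSK53A": (63.5, 0.541),
    "KZFS1": (44.3, 0.554),
    "TIF6": (30.9, 0.596),
}
print(f"P-V triangle area: {triangle_area(*glasses.values()):.3f}")
```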

  3. Methodology for the structural design of single spoke accelerating cavities at Fermilab

    NASA Astrophysics Data System (ADS)

    Passarelli, Donato; Wands, Robert H.; Merio, Margherita; Ristori, Leonardo

    2016-10-01

    Fermilab is planning to upgrade its accelerator complex to deliver a more powerful and intense proton-beam for neutrino experiments. In the framework of the so-called Proton Improvement Plan-II (PIP-II), we are designing and developing a cryomodule containing superconducting accelerating cavities, the Single Spoke Resonators of type 1 (SSR1). In this paper, we present the sequence of analysis and calculations performed for the structural design of these cavities, using the rules of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (BPVC). The lack of an accepted procedure for addressing the design, fabrication, and inspection of such unique pressure vessels makes the task demanding and challenging every time. Several factors such as exotic materials, unqualified brazing procedures, limited nondestructive examination, and the general R&D nature of these early generations of cavity design, conspire to make it impractical to obtain full compliance with all ASME BPVC requirements. However, the presented approach allowed us to validate the design of this new generation of single spoke cavities with values of maximum allowable working pressure that exceed the safety requirements. This set of rules could be used as a starting point for the structural design and development of similar objects.
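
    For orientation only, the sketch below evaluates the thin-cylindrical-shell rule of ASME BPVC Section VIII, Division 1 (UG-27), representative of the kind of code formula such an analysis builds on; the shell dimensions and allowable stress are hypothetical, and the actual cavity analysis described above is far more involved.

```python
# Minimal illustration of an ASME BPVC Section VIII, Div. 1 style check
# (UG-27, thin cylindrical shell under internal pressure). Values are
# hypothetical, not the SSR1 analysis.

def mawp_cylinder(S, E, t, R):
    """Max allowable working pressure of a thin cylindrical shell.

    S: allowable stress, E: weld joint efficiency,
    t: wall thickness, R: inside radius (consistent units)."""
    assert t < 0.5 * R, "thin-shell formula is valid for t < R/2"
    return S * E * t / (R + 0.6 * t)

# Hypothetical shell: S = 50 MPa allowable, E = 0.7, 3 mm wall, 100 mm radius.
print(f"MAWP = {mawp_cylinder(50e6, 0.7, 0.003, 0.100):.3e} Pa")
```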

  4. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information we obtain about the target and limiting the cost of the experiment, within a wide range of constraints. We present a statistical algorithm for experiment design that combines linearized inverse theory and a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration, which motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on the target. In order to improve these results, we show how combining two OFs in a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments, exploring the influence of noise and specific site characteristics as well as its potential for reservoir monitoring.
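
    A toy version of the design-quality step, scoring one candidate survey through linearized inverse theory via the damped model resolution matrix, might look as follows; the Jacobian here is random, standing in for a real CSEM sensitivity kernel.

```python
import numpy as np

# Toy design-quality evaluation: score one survey layout by how well the
# linearized problem resolves the target parameters. G is an assumed
# Jacobian (rows = observations of one candidate design, columns = model
# parameters); a real CSEM forward-modeling kernel would replace it.

def design_quality(G, target_idx, damping=1e-3):
    GtG = G.T @ G
    # Damped model resolution matrix R = (G'G + eps*I)^-1 G'G.
    R = np.linalg.solve(GtG + damping * np.eye(G.shape[1]), GtG)
    # Objective: mean diagonal resolution over the target (reservoir) layers.
    return R[target_idx, target_idx].mean()

rng = np.random.default_rng(0)
G = rng.normal(size=(40, 10))              # hypothetical 40-observation design
print(design_quality(G, np.arange(4, 7)))  # focus on parameters 4-6
```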

  5. Crossing Active Faults on the Sakhalin II Onshore Pipeline Route: Analysis Methodology and Basic Design

    SciTech Connect

    Vitali, Luigino; Mattiozzi, Pierpaolo

    2008-07-08

    Twin oil (20 and 24 inch) and gas (20 and 48 inch) pipeline systems stretching 800 km are being constructed to connect offshore hydrocarbon deposits from the Sakhalin II concession in the North to an LNG plant and oil export terminal in the South of Sakhalin island. The onshore pipeline route follows a regional fault zone and crosses individual active faults at 19 locations. Sakhalin Energy, Design and Construction companies took significant care to ensure the integrity of the pipelines, should large seismic induced ground movements occur during the Operational life of the facilities. Complex investigations including the identification of the active faults, their precise location, their particular displacement values and assessment of the fault kinematics were carried out to provide input data for unique design solutions. Lateral and reverse offset displacements of 5.5 and 4.5 m respectively were determined as the single-event values for the design level earthquake (DLE)--the 1000-year return period event. Within the constraints of a pipeline route largely fixed, the underground pipeline fault crossing design was developed to define the optimum routing which would minimize stresses and strain using linepipe materials which had been ordered prior to the completion of detailed design, and to specify requirements for pipe trenching shape, materials, drainage system, etc. This Paper describes the steps followed to formulate the concept of the special trenches and the analytical characteristics of the Model.

  6. Methodology for the structural design of single spoke accelerating cavities at Fermilab

    DOE PAGES Beta

    Passarelli, Donato; Wands, Robert H.; Merio, Margherita; Ristori, Leonardo

    2016-10-01

    Fermilab is planning to upgrade its accelerator complex to deliver a more powerful and intense proton-beam for neutrino experiments. In the framework of the so-called Proton Improvement Plan-II (PIP-II), we are designing and developing a cryomodule containing superconducting accelerating cavities, the Single Spoke Resonators of type 1 (SSR1). In this paper, we present the sequence of analysis and calculations performed for the structural design of these cavities, using the rules of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (BPVC). The lack of an accepted procedure for addressing the design, fabrication, and inspection of such unique pressure vessels makes the task demanding and challenging every time. Several factors such as exotic materials, unqualified brazing procedures, limited nondestructive examination, and the general R&D nature of these early generations of cavity design, conspire to make it impractical to obtain full compliance with all ASME BPVC requirements. However, the presented approach allowed us to validate the design of this new generation of single spoke cavities with values of maximum allowable working pressure that exceed the safety requirements. This set of rules could be used as a starting point for the structural design and development of similar objects.

  7. Crossing Active Faults on the Sakhalin II Onshore Pipeline Route: Analysis Methodology and Basic Design

    NASA Astrophysics Data System (ADS)

    Vitali, Luigino; Mattiozzi, Pierpaolo

    2008-07-01

    Twin oil (20 & 24 inch) and gas (20 & 48 inch) pipeline systems stretching 800 km are being constructed to connect offshore hydrocarbon deposits from the Sakhalin II concession in the North to an LNG plant and oil export terminal in the South of Sakhalin island. The onshore pipeline route follows a regional fault zone and crosses individual active faults at 19 locations. Sakhalin Energy, Design and Construction companies took significant care to ensure the integrity of the pipelines, should large seismic induced ground movements occur during the Operational life of the facilities. Complex investigations including the identification of the active faults, their precise location, their particular displacement values and assessment of the fault kinematics were carried out to provide input data for unique design solutions. Lateral and reverse offset displacements of 5.5 and 4.5 m respectively were determined as the single-event values for the design level earthquake (DLE)—the 1000-year return period event. Within the constraints of a pipeline route largely fixed, the underground pipeline fault crossing design was developed to define the optimum routing which would minimize stresses and strain using linepipe materials which had been ordered prior to the completion of detailed design, and to specify requirements for pipe trenching shape, materials, drainage system, etc. This Paper describes the steps followed to formulate the concept of the special trenches and the analytical characteristics of the Model.

  8. Modeling Methodologies for Design and Control of Solid Oxide Fuel Cell APUs

    NASA Astrophysics Data System (ADS)

    Pianese, C.; Sorrentino, M.

    2009-08-01

    Among the existing fuel cell technologies, Solid Oxide Fuel Cells (SOFC) are particularly suitable for both stationary and mobile applications, due to their high energy conversion efficiencies, modularity, high fuel flexibility, low emissions and low noise. Moreover, the high working temperatures enable their use for efficient cogeneration applications. SOFCs are entering a pre-industrial era, and interest in design tools has grown strongly in recent years. Optimal system configuration, component sizing, and control and diagnostic system design require computational tools that meet the conflicting needs of accuracy, affordable computational time, limited experimental effort and flexibility. The paper gives an overview of control-oriented modeling of SOFCs at both the single-cell and stack level. Such an approach provides useful simulation tools for designing and controlling SOFC APUs destined for a wide application area, ranging from automotive to marine and airplane APUs.
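
    A control-oriented, zero-dimensional cell-voltage model of the kind surveyed here can be sketched as a Nernst potential minus lumped ohmic and activation losses; all parameter values below are illustrative assumptions, not a validated stack model.

```python
import numpy as np

# Minimal zero-dimensional SOFC voltage model: Nernst potential minus
# lumped ohmic and activation losses. Parameters are illustrative only.

F, Rgas = 96485.0, 8.314  # Faraday constant (C/mol), gas constant (J/mol/K)

def cell_voltage(i, T=1073.0, pH2=0.95, pO2=0.21, pH2O=0.05,
                 E0=1.0, r_ohm=3e-5, i0=4000.0):
    """i: current density (A/m^2); r_ohm: area-specific resistance (ohm m^2)."""
    nernst = E0 + (Rgas * T / (2 * F)) * np.log(pH2 * pO2**0.5 / pH2O)
    act = (Rgas * T / F) * np.arcsinh(i / (2 * i0))  # Butler-Volmer approx.
    return nernst - r_ohm * i - act

print(f"V at 2000 A/m^2: {cell_voltage(2000.0):.3f} V")
```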

  9. New market, new challenge, new opportunity (1)--overview of China rural healthcare & design methodology.

    PubMed

    Jiehui, Jiang; Kandachar, Prabhu

    2008-01-01

    China has the largest population in the world (1.3 billion), of which 0.9 billion live in rural areas. Most rural people earn less than US$3/day; they constitute the 'base of the economic pyramid' (BoP). Compared with higher-level markets, the BoP is a new market characterized by low individual profit but a large population. This paper discusses healthcare issues in rural China (BoP) and studies rural healthcare needs through field studies and case studies. This research is carried out within the framework of the 'Design for Sustainability at the Base-of-the-Pyramid (BoP)' programme of the School of Industrial Design Engineering at Delft University of Technology. The aim of this research is to provide a low-cost advanced healthcare product design that meets the needs of the poor and creates a business case for commercial partners.

  10. Verification of SMART Neutronics Design Methodology by the MCNAP Monte Carlo Code

    SciTech Connect

    Jong Sung Chung; Kyung Jin Shim; Chang Hyo Kim; Chungchan Lee; Sung Quun Zee

    2000-11-12

    SMART is a small advanced integral pressurized water reactor (PWR) of 330 MW(thermal) designed for both electricity generation and seawater desalinization. The CASMO-3/MASTER nuclear analysis system, a design basis of Korean PWR plants, has been employed for the SMART core nuclear design and analysis because the fuel assembly (FA) characteristics and reactor operating conditions in temperature and pressure are similar to those of PWR plants. However, the SMART FAs are highly poisoned, with more than 20 Al2O3-B4C BPRs plus additional Gd2O3/UO2 BPRs in each FA. The reactor is operated with control rods inserted. Therefore, the flux and power distribution may become more distorted than those of commercial PWR plants. In addition, SMART should produce power from room temperature to the hot-power operating condition because it employs nuclear heating from room temperature. This demands reliable predictions of core criticality, shutdown margin, control rod worth, power distributions, and reactivity coefficients at both room temperature and hot operating condition, yet no such data are available to verify the CASMO-3/MASTER (hereafter MASTER) code system. In the absence of experimental verification data for the SMART neutronics design, the Monte Carlo depletion analysis program MCNAP is adopted as a near-term alternative for qualifying MASTER neutronics design calculations. The MCNAP is a personal computer-based continuous energy Monte Carlo neutronics analysis program written in C++. We established its qualification by presenting its prediction accuracy on measurements of the VENUS critical facilities, core neutronics analysis of a PWR plant in operation, and depletion characteristics of integral burnable absorber FAs of the current PWR. Here, we present a comparison of MASTER and MCNAP neutronics design calculations for SMART and establish the qualification of the MASTER system.

  11. Theoretical and experimental investigation of design for multioptical-axis freeform progressive addition lenses

    NASA Astrophysics Data System (ADS)

    Xiang, HuaZhong; Chen, JiaBi; Zhu, TianFen; Wei, YeFei; Fu, DongXiang

    2015-11-01

    A freeform progressive addition lens (PAL) provides a good solution for correcting presbyopia and preventing juvenile myopia by distributing the optical powers of the distance, near, and intermediate zones across the pupil, and it is increasingly adopted in current optometric research. However, PAL design has so far been limited to a single-optical-axis system. This paper focuses on an approach to designing a freeform PAL. A multioptical-axis system based on real viewing conditions of the eye is employed for the representation of the freeform surface. Small pupils in the intermediate zone were filled in as a progressive corridor, and the distance- and near-vision portions were defined as standard spherical surfaces delimited by quadratic curves. Three freeform PALs with a spherical surface as the front side and a freeform surface as the back side were designed. We demonstrate the fabrication and measurement technologies for the PAL surface using computer numerical control machine tools from Schneider Smart and a Visionix VM-2000 Lens Power Mapper. Surface power and astigmatism values were obtained. Preliminary results showed that the design and fabrication approach is helpful for optimizing the design procedure and for the mass production of PALs in optometry.

  12. Rational Design of Methodology-Independent Metal Parameters Using a Nonbonded Dummy Model.

    PubMed

    Jiang, Yang; Zhang, Haiyang; Tan, Tianwei

    2016-07-12

    A nonbonded dummy model for metal ions is highly imperative for the computation of complex biological systems with for instance multiple metal centers. Here we present nonbonded dummy parameters of 11 divalent metallic cations, namely, Mg(2+), V(2+), Cr(2+), Mn(2+), Fe(2+), Co(2+), Ni(2+), Zn(2+), Cd(2+), Sn(2+), and Hg(2+), that are optimized to be compatible with three widely used water models (TIP3P, SPC/E, and TIP4P-EW). The three sets of metal parameters reproduce simultaneously the solvation free energies (ΔGsol), the ion-oxygen distance in the first solvation shell (IOD), and coordination numbers (CN) in explicit water with a relative error less than 1%. The main sources of errors to ΔGsol that arise from the boundary conditions and treatment of electrostatic interactions are corrected rationally, which ensures the independence of the proposed parameters on the methodology used in the calculation. This work will be of great value for the computational study of metal-containing biological systems. PMID:27182744

  13. Materials by design: methodological developments in the calculation of excited-state properties

    NASA Astrophysics Data System (ADS)

    Govoni, Marco

    Density functional theory (DFT) is one of the main tools used in first principle simulations of materials; however several of the current approximations of exchange and correlation functionals do not provide the level of accuracy required for predictive calculations of excited state properties. The application to heterogeneous systems of more accurate post-DFT approaches such as Many-Body Perturbation Theory (MBPT) - for example to nanostructured, disordered, and defective materials - has been hindered by high computational costs. In this talk recent methodological developments in MBPT calculations will be discussed, as recently implemented in the open source code WEST, which efficiently exploits HPC architectures. Results using a formulation that does not require the explicit calculation of virtual states, nor the storage and inversion of large dielectric matrices will be presented; these results include quasi particle energies for systems with thousands of electrons and encompass the electronic structure of aqueous solutions, spin defects in insulators, and benchmarks for molecules and solids containing heavy elements. Simplifications of MBPT calculations based on the use of static response properties, such as dielectric-dependent hybrid functionals, will also be discussed. Work done in collaboration with Hosung Seo, Peter Scherpelz, Ikutaro Hamada, Jonathan Skone, Alex Gaiduk, T. Anh Pham, and Giulia Galli. Supported by DOE-BES.

  14. A Methodology for the Design and Verification of Globally Asynchronous/Locally Synchronous Architectures

    NASA Technical Reports Server (NTRS)

    Miller, Steven P.; Whalen, Mike W.; O'Brien, Dan; Heimdahl, Mats P.; Joshi, Anjali

    2005-01-01

    Recent advances in model checking have made it practical to formally verify the correctness of many complex synchronous systems (i.e., systems driven by a single clock). However, many computer systems are implemented by asynchronously composing several synchronous components, where each component has its own clock and these clocks are not synchronized. Formal verification of such Globally Asynchronous/Locally Synchronous (GA/LS) architectures is a much more difficult task. In this report, we describe a methodology for developing and reasoning about such systems. This approach allows a developer to start from an ideal system specification and refine it along two axes. Along one axis, the system can be refined one component at a time towards an implementation. Along the other axis, the behavior of the system can be relaxed to produce a more cost-effective but still acceptable solution. We illustrate this process by applying it to the synchronization logic of a Dual Flight Guidance System, evolving the system from an ideal case in which the components do not fail and communicate synchronously to one in which the components can fail and communicate asynchronously. For each step, we show how the system requirements have to change if the system is to be implemented, and we prove that each implementation meets the revised system requirements through model checking.

  15. Design of measurement methodology for the evaluation of human exposure to vibration in residential environments.

    PubMed

    Sica, G; Peris, E; Woodcock, J S; Moorhouse, A T; Waddington, D C

    2014-06-01

    Exposure-response relationships are important tools for policy makers to assess the impact of an environmental stressor on the populace. Their validity lies partly in their statistical strength, which is greatly influenced by the size of the sample from which the relationship is derived. As such, the derivation of meaningful exposure-response relationships requires estimates of vibration exposure at a large number of receiver locations. In the United Kingdom a socio-vibrational survey has been conducted with the aim of deriving exposure-response relationships for annoyance due to vibration from (a) railway traffic and (b) the construction of a new light rail system. Response to vibration was measured via a questionnaire conducted face-to-face with residents in their own homes, and vibration exposure was estimated using data from a novel measurement methodology. In total, 1281 questionnaires were conducted: 931 for vibration from railway traffic and 350 for vibration from construction sources. Considering the interdisciplinary nature of this work and the volume of experimental data required, a number of significant technical and logistical challenges needed to be overcome in the planning and implementation of the fieldwork. Four of these challenges are considered in this paper: identifying sites that provide a robust sample of the affected residents, the strategy used for measuring exposure, the strategy used for measuring response, and the coordination between the teams carrying out the social survey and the vibration measurements.
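
    The exposure-response derivation itself reduces to fitting a curve for the probability of annoyance against estimated exposure. A minimal sketch with synthetic data (not the survey's results); a real analysis would use proper logistic regression rather than this crude least-squares fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a logistic exposure-response curve: probability of being "annoyed"
# versus vibration exposure. Data are synthetic placeholders.

def logistic(x, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * x)))

rng = np.random.default_rng(1)
exposure = rng.uniform(0.01, 1.0, 300)     # e.g. weighted velocity, mm/s
p_true = logistic(exposure, -4.0, 5.0)     # assumed "true" relationship
annoyed = rng.random(300) < p_true         # simulated 0/1 questionnaire outcome

# Crude least-squares fit of the binary outcomes to the logistic curve.
params, _ = curve_fit(logistic, exposure, annoyed.astype(float), p0=[-1.0, 1.0])
print("fitted intercept/slope:", params)
```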

  17. A Methodological Approach for Designating Management Zones in Mount Spil National Park, Turkey.

    PubMed

    Hepcan

    2000-09-01

    This study was undertaken to (1) determine the suitability of ecosystems within Mount Spil National Park (Turkey) for human activities through a systematic zoning procedure, and (2) provide the basis for developing sound management strategies based on the natural-cultural resource attributes of the park. After assessing natural-cultural resources and human activity requirements, the suitability of three zones (Strict Protection Zone, SPZ; Restricted Use Zone, RUZ; and Recreation and Administration Zone, RAZ) for proposed human activities/land uses was determined in order to maintain ecological sustainability and integrity through a weighting-ranking methodology, based on a grid cell resolution of 1 km x 1 km. Results showed that, of the three management zones, the RUZ, in which recreational activities that do not require physical development are allowed, constituted 82% of the park area as the first-priority management zone. The proposed zoning procedure is believed to be a key step toward improved management for both the study area and other national parks with similar landscape features.
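
    The weighting-ranking step can be illustrated with a toy weighted overlay: each grid cell receives a weighted score from criterion layers and is binned into one of the three zones. The layers, weights, and thresholds below are assumptions, not the study's values.

```python
import numpy as np

# Toy weighting-ranking overlay on a grid of 1 km x 1 km cells: weighted
# suitability score per cell, then binning into RAZ / RUZ / SPZ.
# Criterion rasters, weights, and thresholds are all assumed.

rng = np.random.default_rng(2)
habitat = rng.random((30, 30))   # normalized criterion layers in [0, 1]
slope = rng.random((30, 30))
access = rng.random((30, 30))

score = 0.5 * habitat + 0.3 * slope + 0.2 * access  # assumed weights
zones = np.digitize(score, [0.4, 0.7])              # 0=RAZ, 1=RUZ, 2=SPZ
for z, name in enumerate(["RAZ", "RUZ", "SPZ"]):
    print(name, f"{(zones == z).mean():.0%} of park area")
```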

  18. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
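
    A minimal sketch of the incremental (delta) form follows: rather than iterating on Ax = b directly, one solves M·Δx = b − Ax with an approximate operator M and updates x. Here M is a crude diagonal approximation, standing in for the spatially-split approximate factorization used in the study.

```python
import numpy as np

# Incremental (delta/correction) form of an iterative linear solve:
# repeatedly solve M dx = b - A x and update x += dx. The diagonal M used
# here is a stand-in for an approximately factored flow operator.

def incremental_solve(A, b, n_iter=200):
    M_inv = 1.0 / np.diag(A)        # crude approximate inverse of A
    x = np.zeros_like(b)
    for _ in range(n_iter):
        residual = b - A @ x        # right-hand side of the delta form
        x += M_inv * residual       # correction step
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 50)) * 0.02 + np.diag(np.full(50, 2.0))  # well conditioned
b = rng.normal(size=50)
print("final residual norm:", np.linalg.norm(A @ incremental_solve(A, b) - b))
```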

  19. The Telehealth Enhancement of Adherence to Medication in Pediatric IBD (TEAM) Trial: Design and Methodology

    PubMed Central

    Hommel, Kevin A.; Gray, Wendy N.; Hente, Elizabeth; Loreaux, Katherine; Ittenbach, Richard F.; Maddux, Michele; Baldassano, Robert; Sylvester, Francisco; Crandall, Wallace; Doarn, Charles; Heyman, Melvin B.; Keljo, David; Denson, Lee A.

    2015-01-01

    Medication nonadherence is a significant health care issue requiring regular behavioral treatment. Lack of sufficient health care resources and patient/family time commitment for weekly treatment are primary barriers to receiving appropriate self-management support. We describe the methodology of the Telehealth Enhancement of Adherence to Medication (TEAM) trial for medication nonadherence in pediatric inflammatory bowel disease (IBD). For this trial, participants 11–18 years of age will be recruited from seven pediatric hospitals and will complete an initial 4-week run-in to assess adherence to a daily medication. Those who take less than 90% of their prescribed medication will be randomized. A total of 194 patients with IBD will be randomized to either a telehealth behavioral treatment (TBT) arm or an education only (EO) arm. All treatment will be delivered via telehealth video conferencing. Patients will be assessed at baseline, post-treatment, and at 3-, 6-, and 12-month follow-ups. We anticipate that participants in the TBT arm will demonstrate a statistically significant improvement at post-treatment and at 3-, 6-, and 12-month follow-up compared to participants in the EO arm for both medication adherence and secondary outcomes (i.e., disease severity, patient quality of life, and health care utilization). If efficacious, the TEAM intervention could be disseminated broadly and reduce health care access barriers so that patients could receive much needed self-management intervention. PMID:26003436

  20. French investigations of high burnup effect on LOCA thermomechanical behavior: Part 1. Experimental programmes in support of LOCA design methodologies

    SciTech Connect

    Waeckel, N.; Cauvin, R.; Lebuffe, C.

    1997-01-01

    Within the framework of the burn-up extension request, EDF, FRAMATOME, CEA and IPSN have carried out experimental programmes in order to provide the design of fuel rods under LOCA conditions with relevant data. The design methods used in France for LOCA are based on the standard Appendix K methodology, updated to take into account some penalties related to the actual conditions of the nuclear power plant. Best-estimate assessments are used as well. The experimental programmes concern the plastic deformation and burst behavior of advanced claddings (EDGAR) and the thermal-shock quenching behavior of highly irradiated claddings (TAGCIR). The former reveals the important role played by the α/β transformation kinetics of advanced alloys (niobium alloys), and the latter the significant impact of hydrogen charged during in-reactor corrosion on oxidation kinetics and failure behavior in terms of cooling rates.

  1. Progress in cavity and cryomodule design for the Project X linac

    SciTech Connect

    Champion, M.; Barbanotti, S.; Foley, M.; Ginsburg, S.; Gonin, I.; Grimm, C.; Kerby, J.; Nagaitsev, S.; Nicol, T.; Peterson, T.; Ristori, L.

    2011-03-01

    The continuous wave 3 GeV Project X Linac requires the development of two families of cavities and cryomodules at 325 and 650 MHz. The baseline design calls for three types of superconducting single-spoke resonators at 325 MHz having betas of 0.11, 0.22, and 0.42 and two types of superconducting five-cell elliptical cavities having betas of 0.61 and 0.9. These cavities shall accelerate a 1 mA H- beam initially and must support eventual operation at 4 mA. The electromagnetic and mechanical designs of the cavities are in progress and acquisition of prototypes is planned. The heat load to the cryogenic system is up to 25 W per cavity in the 650 MHz section, thus segmentation of the cryogenic system is a major issue in the cryomodule design. Designs for the two families of cryomodules are underway.

  2. Applications of different design methodologies in navigation systems and development at JPL

    NASA Technical Reports Server (NTRS)

    Thurman, S. W.

    1990-01-01

    The NASA/JPL deep space navigation system consists of a complex array of measurement systems, data processing systems, and support facilities, with components located both on the ground and on board interplanetary spacecraft. From its beginnings nearly 30 years ago, this system has steadily evolved and grown to meet the demands for ever-increasing navigation accuracy placed on it by a succession of unmanned planetary missions. Principal characteristics of this system are its capabilities and great complexity. Three examples in the design and development of interplanetary space navigation systems are examined in order to make a brief assessment of the usefulness of three basic design theories, known as normative, rational, and heuristic. Evaluation of the examples indicates that a heuristic approach, coupled with rational-based mathematical and computational analysis methods, is used most often in problems such as orbit determination strategy development and mission navigation system design, while normative methods have seen only limited use in such applications as the development of large software systems and the design of certain operational navigation subsystems.

  3. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    ERIC Educational Resources Information Center

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  4. Designing Tasks with Interactive Geometry Applets for Use in Research: Some Methodological Issues

    ERIC Educational Resources Information Center

    Sinclair, Margaret

    2006-01-01

    This paper discusses some of the results of a study carried out with two classes of grade 7 students (11-12 years old); the aim of the project was to design, develop, and test interactive geometry tasks for use in future research into how (or whether) interactive applets help students learn mathematics. The study tasks were developed around the…

  5. Investigating an Open Methodology for Designing Domain-Specific Language Collections

    ERIC Educational Resources Information Center

    Fitzgerald, Alannah; Wu, Shaoqun; Barge, Martin

    2014-01-01

    With this research and design paper, we are proposing that Open Educational Resources (OERs) and Open Access (OA) publications give increasing access to high quality online educational and research content for the development of powerful domain-specific language collections that can be further enhanced linguistically with the Flexible Language…

  6. Middle School Mathematics PD Study: Study Design and Methodology. Paper #1

    ERIC Educational Resources Information Center

    Stancavage, Fran; Garet, Michael; Wayne, Andrew

    2010-01-01

    The PD program evaluated in this study is designed to address the problem of low student achievement in topics in rational numbers. The study focuses on seventh grade, the culminating year for teaching those topics. The study randomly assigned 77 mid- and high-poverty schools from 12 districts to treatment and control conditions and collected…

  7. Developing a User Oriented Design Methodology for Learning Activities Using Boundary Objects

    ERIC Educational Resources Information Center

    Fragou, Olga; Kameas, Achilles

    2013-01-01

    International Standards in High and Open and Distance Education are used for developing Open Educational Resources (OERs). Current issues in e-learning community are the specification of learning chunks and the definition of describing designs for different units of learning (activities, units, courses) in a generic though expandable format.…

  8. Single Group, Pre- and Post-Test Research Designs: Some Methodological Concerns

    ERIC Educational Resources Information Center

    Marsden, Emma; Torgerson, Carole J.

    2012-01-01

    This article provides two illustrations of some of the factors that can influence findings from pre- and post-test research designs in evaluation studies, including regression to the mean (RTM), maturation, history and test effects. The first illustration involves a re-analysis of data from a study by Marsden (2004), in which pre-test scores are…

  9. Optimal groundwater remediation design: Methodologies and software for contaminated aquifers. Final report

    SciTech Connect

    Dougherty, D.E.

    1994-10-31

    This document comprises the final report of work performed under subcontract B-239648 between the Lawrence Livermore National Laboratory (LLNL) and the University of Vermont (UVM). This contract was subsidiary to one between LLNL and the U.S. Department of Energy (DOE). This project had the goal of developing tools and strategies regarding how, where, and when to apply the environmental restoration (ER) technologies that are under development. The development of decision support software for advanced environmental remediation technologies is tentative; many of the ER technologies are poorly understood, the applicability of methods to new untested sites is questionable, the ability to predict the effects of alternative remediation designs is very limited, and there are a large number of uncertainties associated with processes and parameters (physical, chemical, and biological), contaminants (distribution and type), and the sociopolitical environment. Nevertheless, the potential for significant savings from using optimal design methods and the need to make decisions regardless of uncertainties have made this project worthwhile. A stop-work order was received in September 1994. An additional upper limit of $15,000 was provided for project termination activities, including report preparation. One of four deliverables was completed and provided to LLNL. MODLP is a computational tool for use in groundwater remediation design. It is a FORTRAN program that incorporates the well-known and widely used MODFLOW simulator to represent the flow of water in a saturated natural porous medium. MODLP is designed to allow the user to create and solve optimization problems for hydraulic control in groundwater systems. Inasmuch as environmental restoration costs are very large, savings on the order of ten percent represent significant amounts, and optimal design has been demonstrated to produce savings larger than ten percent, these activities have an important role to play within DOE.
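
    The hydraulic-control optimization that MODLP addresses can be caricatured as a linear program: minimize total pumping subject to drawdown constraints expressed through a linear response matrix. The response matrix and bounds below are invented for illustration; MODLP itself couples MODFLOW to the optimization rather than using toy coefficients.

```python
import numpy as np
from scipy.optimize import linprog

# Toy hydraulic-control LP: minimize total pumping at 3 wells subject to
# minimum drawdown at 3 control points, using superposition (a linear
# response matrix). All coefficients are invented for illustration.

A = np.array([[0.8, 0.2, 0.1],      # drawdown at each control point per
              [0.3, 0.7, 0.2],      # unit pumping rate at each well
              [0.1, 0.3, 0.9]])
required = np.array([1.0, 1.2, 0.8])   # minimum drawdown for capture (m)

# linprog minimizes c @ q subject to A_ub @ q <= b_ub; flip signs for >=.
res = linprog(c=[1.0, 1.0, 1.0], A_ub=-A, b_ub=-required,
              bounds=[(0, 5.0)] * 3, method="highs")
print("pumping rates:", res.x, "total:", res.fun)
```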

  10. Theoretical and methodological elements for integrating ethics as a foundation into the education of professional and design disciplines.

    PubMed

    d'Anjou, Philippe

    2004-04-01

    The paper addresses the integration of ethics into professional education related to the disciplines responsible for the conception and creation of the artificial (artefactual or technological). The ontological-epistemological paradigm of those disciplines is understood within the frame of the sciences of the artificial as established by Herbert Simon (1969). According to that paradigm, those sciences include disciplines not only related to the production of artefacts (technology), such as engineering, architecture, industrial design, etc., but also disciplines related to devising courses of action aimed at changing existing situations into preferred ones, like medicine, law, education, etc. They are centered on intentional action, and at their core is the activity of design, which is their common foundation and attitude, or their common culture. The science of design becomes the broader foundational discipline for any profession engaged in the intentional transformation of the world. The main distinction between design disciplines and scientific ones rests on the object-project dichotomy. Indeed, contrary to science, which sees the world as an object to be observed, design sees the world as a project and acts upon the world through projects, which are grounded in intentions, ends, and values. Design disciplines are meant to transform the world, or part of it, and are teleological. Being so, they are embodied in an act that is ethical, and their ontology-epistemology must also be addressed through practical reason to resituate all professional disciplines according to their involved nature. The paper introduces theoretical, methodological, and ethical elements to establish a model that integrates ethics into the education of the professional, design-based disciplines responsible for the creation of the artificial (artefactual or technological) world. The model is articulated around the notions of ethical engagement and responsibility through the act of design.

  11. Design of integrated pitch axis for autopilot/autothrottle and integrated lateral axis for autopilot/yaw damper for NASA TSRV airplane using integral LQG methodology

    NASA Technical Reports Server (NTRS)

    Kaminer, Isaac; Benson, Russell A.; Coleman, Edward E.; Ebrahimi, Yaghoob S.

    1990-01-01

    Two designs are presented for control systems for the NASA Transport System Research Vehicle (TSRV) using integral Linear Quadratic Gaussian (LQG) methodology. The first is an integrated longitudinal autopilot/autothrottle design and the second design is an integrated lateral autopilot/yaw damper/sideslip controller design. It is shown that a systematic top-down approach to a complex design problem combined with proper application of modern control synthesis techniques yields a satisfactory solution in a reasonable period of time.
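
    The core synthesis step in an LQG design of this kind is solving an algebraic Riccati equation for a feedback gain. A minimal sketch on a generic two-state model follows; the dynamics and weights are assumptions, not the TSRV airframe.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR state-feedback synthesis: solve the continuous algebraic Riccati
# equation and form the gain K with u = -K x. The model is a generic toy.

A = np.array([[0.0, 1.0],
              [-0.5, -0.2]])        # assumed short-period-like dynamics
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])            # state weights (assumed)
R = np.array([[1.0]])               # control weight (assumed)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # optimal gain
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```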

  12. Agent-based Cyber Control Strategy Design for Resilient Control Systems: Concepts, Architecture and Methodologies

    SciTech Connect

    Craig Rieger; Milos Manic; Miles McQueen

    2012-08-01

    The implementation of automated regulatory control has been around since the middle of the last century through analog means. It has allowed engineers to operate the plant more consistently by focusing on overall operations and settings instead of individual monitoring of local instruments (inside and outside of a control room). A similar approach is proposed for cyber security, where current border-protection designs have been inherited from information technology developments that lack consideration of the high-reliability, high consequence nature of industrial control systems. Instead of an independent development, however, an integrated approach is taken to develop a holistic understanding of performance. This performance takes shape inside a multiagent design, which provides a notional context to model highly decentralized and complex industrial process control systems, the nervous system of critical infrastructure. The resulting strategy will provide a framework for researching solutions to security and unrecognized interdependency concerns with industrial control systems.

  13. A logical approach to optimize the nanostructured lipid carrier system of irinotecan: efficient hybrid design methodology

    NASA Astrophysics Data System (ADS)

    Mohan Negi, Lalit; Jaggi, Manu; Talegaonkar, Sushama

    2013-01-01

    Development of an effective formulation involves careful optimization of a number of excipient and process variables. Sometimes the number of variables is so large that even the most efficient optimization designs require a very large number of trials, which puts stress on both cost and time. A creative combination of several design methods leads to a smaller number of trials. This study was aimed at the development of nanostructured lipid carriers (NLCs) using a combination of different optimization methods. A total of 11 variables were first screened using the Plackett-Burman design for their effects on formulation characteristics like size and entrapment efficiency. Four of the 11 variables were found to have insignificant effects on the formulation parameters and hence were screened out. Of the remaining seven variables, four (concentrations of tween-80, lecithin, sodium taurocholate, and total lipid) were found to have significant effects on the size of the particles, while the other three (phase ratio, drug-to-lipid ratio, and sonication time) had a greater influence on the entrapment efficiency. The first four variables were optimized for their effect on size using the Taguchi L9 orthogonal array. The optimized values of the surfactants and lipids were kept constant for the next stage, where the sonication time, phase ratio, and drug:lipid ratio were varied using the Box-Behnken design response surface method to optimize the entrapment efficiency. Finally, by performing only 38 trials, we optimized 11 variables for the development of NLCs with a size of 143.52 ± 1.2 nm, zeta potential of -32.6 ± 0.54 mV, and 98.22 ± 2.06% entrapment efficiency.
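
    The screening stage can be made concrete: a 12-run Plackett-Burman design for 11 two-level factors, built from the classic cyclic generator, with main effects estimated by contrasts. The responses below are placeholders, not the study's data.

```python
import numpy as np

# 12-run Plackett-Burman screening design for 11 factors: cyclic shifts of
# the classic generator row plus a final all-minus run. Main effects are
# estimated as contrast / (N/2). Response values are placeholders.

gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])  # classic PB-12 generator
rows = [np.roll(gen, k) for k in range(11)]
X = np.vstack(rows + [-np.ones(11, dtype=int)])         # 12 runs x 11 factors

y = np.random.default_rng(4).normal(50, 2, 12)          # placeholder responses
effects = X.T @ y / 6.0                                 # contrast / (N/2), N = 12
for i, e in enumerate(effects, 1):
    print(f"factor {i:2d}: main effect {e:+.2f}")
```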

  14. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
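
    The global strategy's basic operation, fitting a single second-order response surface by least squares, can be sketched as follows; the "true" function here is a stand-in for the buckling-load analysis, not the report's model.

```python
import numpy as np
from itertools import combinations

# Fit a full quadratic (second-order) response surface to sampled responses
# by least squares. The underlying function is an assumed stand-in.

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(60, 3))   # 3 coded design variables, 60 samples
y = 4 + X @ [1.0, -2.0, 0.5] + 0.8 * X[:, 0] * X[:, 1] + (X**2) @ [0.3, 0.1, 0.2]

def quad_features(X):
    """Columns: intercept, linear, two-way interaction, and squared terms."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
print(np.round(coef, 3))   # recovers the assumed coefficients
```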

  15. Methodology for design of active controls for V/STOL aircraft

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Cicolani, L.

    1976-01-01

    An effort to develop techniques for the design of integrated, fully automatic flight control systems for powered lift STOL and VTOL aircraft is described. The structure is discussed of the control system which has been developed to deal with the strong nonlinearities inherent in this class of aircraft, to admit automatic coupling with the advanced ATC requiring accurate execution of complex trajectories, and to admit a variety of active control tasks. The specific case considered is the Augmentor Wing Research Aircraft.

  16. Optimization of arsenic extraction in rice samples by Plackett-Burman design and response surface methodology.

    PubMed

    Ma, Li; Wang, Lin; Tang, Jie; Yang, Zhaoguang

    2016-08-01

    Statistical experimental designs were employed to optimize the extraction conditions for arsenic species (As(III), As(V), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA)) in paddy rice using a simple solvent extraction with water as the extraction reagent. The effects of the variables were estimated by a two-level Plackett-Burman factorial design. A five-level central composite design was subsequently employed to optimize the significant factors. The desirability-based optima of the significant factors were confirmed as 60 min of shaking time and 85 °C of extraction temperature, compromising between the experimental period and the extraction efficiency. The analytical performance, including linearity, method detection limits, relative standard deviation and recovery, was examined; the data exhibited a broad linear range, high sensitivity and good precision. The proposed method was applied to real rice samples. The species As(III), As(V) and DMA were detected in all the rice samples, mostly in the order As(III)>As(V)>DMA. PMID:26988503
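
    The five-level central composite design used for the two significant factors can be generated in a few lines (coded units; a rotatable axial distance is assumed):

```python
import numpy as np
from itertools import product

# Central composite design in coded units: 2^k factorial core, 2k axial
# points at +/- alpha, and replicated center points. For k = 2 the five
# levels are -alpha, -1, 0, +1, +alpha.

def central_composite(k, alpha=None, n_center=5):
    alpha = alpha if alpha is not None else (2**k) ** 0.25  # rotatable alpha
    factorial = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.vstack([sign * alpha * row
                       for row in np.eye(k) for sign in (-1.0, 1.0)])
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])

design = central_composite(2)   # e.g. coded (shaking time, temperature)
print(design)
```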

  17. Friction in modern total hip arthroplasty bearings: Effect of material, design, and test methodology.

    PubMed

    Scholl, Laura; Longaray, Jason; Raja, Lokesh; Lee, Reginald; Faizan, Ahmad; Herrera, Lizeth; Thakore, Mayur; Nevelos, Jim

    2016-01-01

    The purpose of this study was to characterize the effect of a group of variables on the frictional torque generated by acetabular components, as well as to understand the influence of the test model. Three separate test models, which had been previously used in the literature, were used to understand the effect of polyethylene material, bearing design, head size, and material combinations. Each test model differed in the way it simulated rotation of the head, the type of frictional torque value it reported (static vs. dynamic), and the type of motion simulated (oscillating motion vs. continuous motion). It was determined not only that the test model may affect how products rank in generated frictional torque, but also that static frictional torque may be significantly larger than dynamic frictional torque. In addition to test model differences, it was discovered that the frictional torque values for conventional and highly cross-linked polyethylenes were not statistically significantly different in the more physiologically relevant test models. With respect to bearing design, the frictional torque values for mobile bearing designs were similar to those of the 28-mm diameter inner bearing rather than the large diameter outer liner. Testing with a more physiologically relevant rotation showed that frictional torque increased with bearing diameter for the metal-on-polyethylene and ceramic-on-polyethylene bearings but remained constant for ceramic-on-ceramic bearings. Finally, ceramic-on-ceramic bearings produced smaller frictional torque values when compared to the metal-on-polyethylene and ceramic-on-polyethylene groups.

  18. Improved quality-by-design compliant methodology for method development in reversed-phase liquid chromatography.

    PubMed

    Debrus, Benjamin; Guillarme, Davy; Rudaz, Serge

    2013-10-01

    A complete strategy dedicated to quality-by-design (QbD) compliant method development, using design of experiments (DOE), multiple linear regression response modelling, and Monte Carlo simulations for error propagation, was evaluated for liquid chromatography (LC). The proposed approach includes four main steps: (i) the initial screening of column chemistry, mobile phase pH and organic modifier, (ii) the selectivity optimization through changes in gradient time and mobile phase temperature, (iii) the adaptation of column geometry to reach sufficient resolution, and (iv) the robust resolution optimization and identification of the method design space. This procedure was employed to obtain a complex chromatographic separation of 15 widely prescribed basic antipsychotic drugs. To fully automate and expedite the QbD method development procedure, short columns packed with sub-2 μm particles were employed, together with a UHPLC system possessing column and solvent selection valves. Through this example, the possibilities of the proposed QbD method development workflow were exposed and the different steps of the automated strategy were critically discussed. A baseline separation of the mixture of antipsychotic drugs was achieved with an analysis time of less than 15 min, and the robustness of the method was demonstrated simultaneously with the method development phase.
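
    The Monte Carlo error-propagation step can be sketched by drawing regression coefficients from their fitted distribution and mapping where the predicted critical resolution meets Rs ≥ 1.5 with high probability. The linear response model and its coefficient statistics below are invented for illustration.

```python
import numpy as np

# Monte Carlo propagation of regression-coefficient uncertainty to the
# predicted critical resolution, to delimit a high-probability design
# space over gradient time. The model and statistics are assumed.

rng = np.random.default_rng(6)
grad_times = np.linspace(5, 30, 26)   # candidate gradient times (min)

def critical_resolution(tg, coefs):
    a, b = coefs
    return a + b * np.log(tg)         # assumed fitted response model

# Assumed mean vector and covariance of the fitted coefficients.
draws = rng.multivariate_normal([0.2, 0.55], [[0.01, 0], [0, 0.004]], 5000)
prob = np.array([(critical_resolution(tg, draws.T) >= 1.5).mean()
                 for tg in grad_times])
ok = prob > 0.95                      # design-space criterion
print("gradient times meeting Rs >= 1.5 with P > 0.95:", grad_times[ok])
```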

  19. Comparative effectiveness research for the clinician researcher: a framework for making a methodological design choice.

    PubMed

    Williams, Cylie M; Skinner, Elizabeth H; James, Alicia M; Cook, Jill L; McPhail, Steven M; Haines, Terry P

    2016-01-01

    Comparative effectiveness research compares two active forms of treatment, or usual care with usual care plus an additional intervention element. These types of study are commonly conducted following a placebo or no-active-treatment trial. Research designs with a placebo or non-active treatment arm can be challenging for the clinician researcher when conducted within the healthcare environment with patients attending for treatment. A framework for conducting comparative effectiveness research is needed, particularly for interventions for which there are no strong regulatory requirements that must be met prior to their introduction into usual care. We argue for a broader use of comparative effectiveness research to achieve translatable real-world clinical research. These types of research design also affect the rapid uptake of evidence-based clinical practice within the healthcare setting. This framework includes questions to guide the clinician researcher to the most appropriate trial design for measuring treatment effect. These questions include consideration given to current treatment provision during usual care, known treatment effectiveness, side effects of treatments, economic impact, and the setting in which the research is being undertaken.

  20. PROLIFERATION RESISTANCE AND PHYSICAL PROTECTION WORKING GROUP: METHODOLOGY AND APPLICATIONS

    SciTech Connect

    Bari R. A.; Whitlock, J.; Therios, I.U.; Peterson, P.F.

    2012-11-14

    We summarize the technical progress and accomplishments on the evaluation methodology for proliferation resistance and physical protection (PR and PP) of Generation IV nuclear energy systems. We intend the results of the evaluations performed with the methodology for three types of users: system designers, program policy makers, and external stakeholders. The PR and PP Working Group developed the methodology through a series of demonstration and case studies. Over the past few years various national and international groups have applied the methodology to nuclear energy system designs as well as to developing approaches to advanced safeguards.

  1. Strain-Based Design Methodology of Large Diameter Grade X80 Linepipe

    SciTech Connect

    Lower, Mark D.

    2014-04-01

    Continuous growth in energy demand is driving oil and natural gas production to areas that are often located far from major markets, where the terrain is prone to earthquakes, landslides, and other types of ground motion. Transmission pipelines that cross this type of terrain can experience large longitudinal strains and plastic circumferential elongation as the pipeline undergoes alignment changes resulting from differential ground movement. Such displacements can potentially impact pipeline safety by adversely affecting the structural capacity and leak-tight integrity of the linepipe steel. Planning for new long-distance transmission pipelines usually involves consideration of higher strength linepipe steels because their use allows pipeline operators to reduce the overall cost of pipeline construction and increase pipeline throughput by increasing the operating pressure. The design trend for new pipelines in areas prone to ground movement has evolved over the last 10 years from a stress-based design approach to a strain-based design (SBD) approach to further realize the cost benefits of higher strength linepipe steels. This report presents an overview of SBD for pipelines subjected to large longitudinal strain and high internal pressure, with emphasis on the tensile strain capacity of high-strength microalloyed linepipe steel. The technical basis for this report involved engineering analysis and examination of the mechanical behavior of Grade X80 linepipe steel in both the longitudinal and circumferential directions. Testing was conducted to assess the effects of material processing, including as-rolled, expanded, and heat-treated conditions intended to simulate coating application. Elastic-plastic and low-cycle fatigue analyses were also performed at varying internal pressures. The proposed SBD models discussed in this report are based on classical plasticity theory and account for material anisotropy, triaxial strain, and microstructural damage effects.
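
    Classical-plasticity SBD models of this kind typically start from a Ramberg-Osgood stress-strain relation. A sketch with assumed X80-like parameters (not the report's calibrated values):

```python
# Ramberg-Osgood total strain as a function of stress, often the starting
# point for strain-based design checks. Hardening parameters are assumed
# X80-like placeholders, not calibrated values.

E = 207e3         # Young's modulus, MPa
sigma_y = 555.0   # nominal X80 yield strength, MPa
alpha, n = 1.29, 15.0   # assumed Ramberg-Osgood hardening parameters

def total_strain(sigma):
    """Elastic plus plastic strain (dimensionless) at stress sigma (MPa)."""
    return sigma / E + alpha * (sigma / E) * (sigma / sigma_y) ** (n - 1)

for s in (400.0, 555.0, 600.0):
    print(f"{s:5.0f} MPa -> {100 * total_strain(s):.2f} % strain")
```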

  2. Study designs for identifying risk compensation behavior among users of biomedical HIV prevention technologies: Balancing methodological rigor and research ethics

    PubMed Central

    Underhill, Kristen

    2014-01-01

    The growing evidence base for biomedical HIV prevention interventions – such as oral pre-exposure prophylaxis, microbicides, male circumcision, treatment as prevention, and eventually prevention vaccines – has given rise to concerns about the ways in which users of these biomedical products may adjust their HIV risk behaviors based on the perception that they are prevented from infection. Known as risk compensation, this behavioral adjustment draws on the theory of “risk homeostasis,” which has previously been applied to phenomena as diverse as Lyme disease vaccination, insurance mandates, and automobile safety. Little rigorous evidence exists to answer risk compensation concerns in the biomedical HIV prevention literature, in part because the field has not systematically evaluated the study designs available for testing these behaviors. The goals of this Commentary are to explain the origins of risk compensation behavior in risk homeostasis theory, to reframe risk compensation as a testable response to the perception of reduced risk, and to assess the methodological rigor and ethical justification of study designs aiming to isolate risk compensation responses. Although the most rigorous methodological designs for assessing risk compensation behavior may be unavailable due to ethical flaws, several strategies can help investigators identify potential risk compensation behavior during Phase II, Phase III, and Phase IV testing of new technologies. Where concerns arise regarding risk compensation behavior, empirical evidence about the incidence, types, and extent of these behavioral changes can illuminate opportunities to better support the users of new HIV prevention strategies. This Commentary concludes by suggesting a new way to conceptualize risk compensation behavior in the HIV prevention context. PMID:23597916

  3. A design methodology using signal-to-noise ratio for plastic scintillation detectors design and performance optimization

    PubMed Central

    Lacroix, Frédéric; Beddar, A. Sam; Guillot, Mathieu; Beaulieu, Luc; Gingras, Luc

    2009-01-01

    Purpose: The design of novel plastic scintillation detectors (PSDs) is impeded by the lack of a suitable framework to simulate and predict their performance. The authors propose to use the signal-to-noise ratio (SNR) to model the performance of PSDs that use charge-coupled devices (CCDs) as photodetectors. Methods: In PSDs using CCDs, the SNR is inversely related to the normalized standard deviation of the dose measurement. Thus, optimizing the SNR directly optimizes the system’s precision. In this work, a model of SNR as a function of the system parameters is derived for optical fiber-based PSD systems. Furthermore, this proposed model is validated using experimental results. A formula for the efficiency of fiber coupling to CCDs is derived and used to simulate the performance of a PSD under varying magnifications. Results: The proposed model is shown to simulate the experimental performance of an actual PSD to a suitable degree of accuracy under various conditions. Conclusions: The SNR constitutes a useful tool to simulate the dosimetric precision of PSDs. Using the SNR model, recommendations for the design and optimization of PSDs are provided. Using the same framework, recommendations for non-fiber-based PSDs are also provided. PMID:19994531
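
    The framework's central link, that dosimetric precision tracks 1/SNR, can be illustrated with a generic CCD noise budget. This is a minimal sketch with hypothetical counts; the paper's full model also folds in scintillator yield and the fiber-coupling efficiency for which it derives a dedicated formula.

    ```python
    import numpy as np

    # Generic CCD noise budget: shot noise on the signal plus dark-current
    # and read-noise contributions.  All counts are hypothetical; the paper's
    # full model also includes scintillator yield and fiber-coupling terms.
    def ccd_snr(photons, qe=0.5, dark_e=50.0, read_noise_e=10.0):
        signal = qe * photons                           # photoelectrons
        noise = np.sqrt(signal + dark_e + read_noise_e ** 2)
        return signal / noise

    snr = ccd_snr(1e5)
    # Per the abstract, SNR is inversely related to the normalized standard
    # deviation of the dose reading, so 1/SNR estimates relative precision.
    print(f"SNR = {snr:.1f} -> relative dose precision ~ {100.0 / snr:.2f} %")
    ```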

  4. A straightforward methodology for designing continuous monoclonal antibody capture multi-column chromatography processes.

    PubMed

    Gjoka, Xhorxhi; Rogler, Karl; Martino, Richard Alexander; Gantier, Rene; Schofield, Mark

    2015-10-16

    A simple process development strategy for continuous capture multi-column chromatography (MCC) is described. The approach involves a few single-column breakthrough experiments, based on several simplifying observations that enable users to rapidly convert batch processes into well-designed multi-column processes. The method was validated using a BioSMB® (Pall Life Sciences) lab-scale multi-column system and a mAb capture process employing Protein A resin. The approach enables users to optimize MCC processes based on their internal preferences and constraints without requiring any mathematical modeling expertise.
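
    The flavor of breakthrough-based sizing can be sketched generically: interpolate a single-column breakthrough curve at two breakthrough levels and derive a column-switch interval. The curve, the 1%/60% thresholds, and the flow rate below are all hypothetical, not the published procedure.

    ```python
    import numpy as np

    # Hypothetical single-column breakthrough curve: loaded volume in column
    # volumes (CV) versus effluent-to-feed concentration ratio C/C0.
    cv   = np.array([0.0, 10, 20, 30, 40, 50, 60, 70, 80])
    c_c0 = np.array([0.0, 0.0005, 0.001, 0.01, 0.05, 0.15, 0.35, 0.60, 0.80])

    def cv_at(bt):
        """Loaded volume (CV) at which C/C0 first reaches bt (interpolated)."""
        return float(np.interp(bt, c_c0, cv))

    cv_lo, cv_hi = cv_at(0.01), cv_at(0.60)   # thresholds chosen arbitrarily
    flow = 12.0                                # hypothetical load rate, CV/h
    print(f"1% breakthrough at {cv_lo:.1f} CV, 60% at {cv_hi:.1f} CV")
    print(f"column switch interval ~ {(cv_hi - cv_lo) / flow:.2f} h")
    ```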

  5. Hi-alpha forebody design. Part 1: Methodology base and initial parametrics

    NASA Technical Reports Server (NTRS)

    Mason, William H.; Ravi, R.

    1992-01-01

    The use of Computational Fluid Dynamics (CFD) has been investigated for the analysis and design of aircraft forebodies at high angle of attack combined with sideslip. The results of the investigation show that CFD has reached a level of development where computational methods can be used for high angle of attack aerodynamic design. The classic wind tunnel experiment for the F-5A forebody directional stability has been reproduced computationally over an angle of attack range from 10 degrees to 45 degrees, and good agreement with experimental data was obtained. Computations have also been made at combined angle of attack and sideslip over a chine forebody, demonstrating the qualitative features of the flow, although not producing good agreement with measured experimental pressure distributions. The computations were performed using the code known as cfl3D for both the Euler equations and the Reynolds equations using a form of the Baldwin-Lomax turbulence model. To study the relation between forebody shape and directional stability characteristics, a generic parametric forebody model has been defined which provides a simple analytic math model with flexibility to capture the key shape characteristics of the entire range of forebodies of interest, including chines.

  6. Sweetener blend optimization by using mixture design methodology and the electronic tongue.

    PubMed

    Waldrop, Megan E; Ross, Carolyn F

    2014-09-01

    Utilizing more than one sweetener has been shown to be an effective way to substitute sucrose in food products. The objective of this study was to apply the augmented simplex-centroid mixture design for the optimization of acceptable sweetener blends using coconut sugar, agave, and stevia. Sweetener blends were evaluated in aqueous solutions and gluten-free granola bars by a trained panel and consumers (n = 60). Significant differences were found between sweetener mixtures in solutions by both panelists and consumers (P < 0.05). Taste profiles for the sweetener solutions were also generated using the electronic tongue. Most consumer and trained intensity ratings were highly correlated (R² ≥ 0.79) with the electronic tongue taste profile analysis. Granola bars were also found to be significantly different (P < 0.05), with consumers preferring coconut sugar mixtures. Using contour plots and desirability function analysis, an optimal sweetener combination was found for a granola bar formulation of 89.9% coconut sugar, 6.1% agave, and 4% stevia. These results indicate that a mixture design can be a reliable way to develop new sweetener blends for product development. PMID:25155461
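
    A minimal sketch of the design and model family named in the abstract, assuming an augmented simplex-centroid layout and a Scheffé special-cubic fit; the liking scores are invented, and only the optimal blend proportions are taken from the abstract.

    ```python
    import numpy as np

    # Augmented simplex-centroid design for a 3-component blend (coconut
    # sugar, agave, stevia) with a Scheffé special-cubic model.  The liking
    # scores are invented; only the optimal blend is from the abstract.
    X = np.array([(1, 0, 0), (0, 1, 0), (0, 0, 1),          # pure components
                  (.5, .5, 0), (.5, 0, .5), (0, .5, .5),    # binary blends
                  (1/3, 1/3, 1/3),                          # centroid
                  (2/3, 1/6, 1/6), (1/6, 2/3, 1/6), (1/6, 1/6, 2/3)])  # axial
    liking = np.array([6.1, 5.0, 3.2, 6.5, 5.8, 4.4, 6.0, 6.6, 5.4, 4.1])

    def scheffe_terms(x):
        x1, x2, x3 = x.T
        return np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*x2*x3])

    coef, *_ = np.linalg.lstsq(scheffe_terms(X), liking, rcond=None)
    blend = np.array([[0.899, 0.061, 0.040]])   # optimum reported above
    pred = (scheffe_terms(blend) @ coef).item()
    print(f"predicted liking at reported optimum: {pred:.2f}")
    ```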

  7. Evaluation of architectures for an ASP MPEG-4 decoder using a system-level design methodology

    NASA Astrophysics Data System (ADS)

    Garcia, Luz; Reyes, Victor; Barreto, Dacil; Marrero, Gustavo; Bautista, Tomas; Nunez, Antonio

    2005-06-01

    Trends in multimedia consumer electronics, digital video and audio, aim to reach users through low-cost mobile devices connected to data broadcasting networks with limited bandwidth. An emergent broadcasting network is the digital audio broadcasting network (DAB) which provides CD quality audio transmission together with robustness and efficiency techniques to allow good quality reception in motion conditions. This paper focuses on the system-level evaluation of different architectural options to allow low bandwidth digital video reception over DAB, based on video compression techniques. Profiling and design space exploration techniques are applied over the ASP MPEG-4 decoder in order to find out the best HW/SW partition given the application and platform constraints. An innovative SystemC-based system-level design tool, called CASSE, is being used for modelling, exploration and evaluation of different ASP MPEG-4 decoder HW/SW partitions. System-level trade-offs and quantitative data derived from this analysis are also presented in this work.

  8. A Design Methodology for Rapid Implementation of Active Control Systems Across Lean Direct Injection Combustor Platforms

    NASA Technical Reports Server (NTRS)

    Baumann, William T.; Saunders, William R.; Vandsburger, Uri; Saus, Joseph (Technical Monitor)

    2003-01-01

    The VACCG team is comprised of engineers at Virginia Tech who specialize in the subject areas of combustion physics, chemical kinetics, dynamics and controls, and signal processing. Currently, the team's work on this NRA research grant is designed to determine key factors that influence combustion control performance through a blend of theoretical and experimental investigations targeting design and demonstration of active control for three different combustors. To validate the accuracy of conclusions about control effectiveness, a sequence of experimental verifications on increasingly complex lean, direct injection combustors is underway. During the work period January 1, 2002 through October 15, 2002, work has focused on two different laboratory-scale combustors that allow access for a wide variety of measurements. As the grant work proceeds, one key goal will be to obtain certain knowledge about a particular combustor process using a minimum of sophisticated measurements, due to the practical limitations of measurements on full-scale combustors. In the second year, results obtained in the first year will be validated on test combustors to be identified in the first quarter of that year. In the third year, it is proposed to validate the results at more realistic pressure and power levels by utilizing the facilities at the Glenn Research Center.

  9. Methodology for the design, production, and test of plastic optical displacement sensors

    NASA Astrophysics Data System (ADS)

    Rahlves, Maik; Kelb, Christian; Reithmeier, Eduard; Roth, Bernhard

    2016-08-01

    Optical displacement sensors made entirely from plastic materials offer various advantages such as biocompatibility and high flexibility compared to their commonly used electrical and glass-based counterparts. In addition, various low-cost and large-scale fabrication techniques can potentially be utilized for their fabrication. In this work we present a toolkit for the design, production, and test of such sensors. Using the introduced methods, we demonstrate the development of a simple all-optical displacement sensor based on multimode plastic waveguides. The system consists of polymethylmethacrylate and cyclic olefin polymer which serve as cladding and core materials, respectively. We discuss several numerical models which are useful for the design and simulation of the displacement sensors as well as two manufacturing methods capable of mass-producing such devices. Prior to fabrication, the sensor layout and performance are evaluated by means of a self-implemented ray-optical simulation which can be extended to various other types of sensor concepts. Furthermore, we discuss optical and mechanical test procedures as well as a high-precision tensile testing machine especially suited for the characterization of the opto-mechanical performance of such plastic optical displacement sensors.

  10. Application of finite element, global polynomial, and kriging response surfaces in Progressive Lattice Sampling designs

    SciTech Connect

    ROMERO,VICENTE J.; SWILER,LAURA PAINTON; GIUNTA,ANTHONY A.

    2000-04-25

    This paper examines the modeling accuracy of finite element interpolation, kriging, and polynomial regression used in conjunction with the Progressive Lattice Sampling (PLS) incremental design-of-experiments approach. PLS is a paradigm for sampling a deterministic hypercubic parameter space by placing and incrementally adding samples in a manner intended to maximally reduce lack of knowledge in the parameter space. When combined with suitable interpolation methods, PLS is a formulation for progressive construction of response surface approximations (RSA) in which the RSA are efficiently upgradable, and upon upgrading, offer convergence information essential in estimating error introduced by the use of RSA in the problem. The three interpolation methods tried here are examined for performance in replicating an analytic test function as measured by several different indicators. The process described here provides a framework for future studies using other interpolation schemes, test functions, and measures of approximation quality.
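
    The progressive-construction idea can be miniaturized as follows: add samples in stages, refit a response surface after each stage, and track the error against the known test function. The staged point placement below is a simplified stand-in for true PLS lattices, and the test function is arbitrary.

    ```python
    import numpy as np

    # Toy stand-in for Progressive Lattice Sampling: add samples in stages,
    # refit a quadratic response surface each time, and track the error
    # against the known test function.  Real PLS point placement differs.
    def test_fn(x, y):
        return np.sin(np.pi * x) * np.exp(y) + 0.5 * x * y

    def quad_terms(x, y):
        return np.column_stack([np.ones_like(x), x, y, x * x, y * y, x * y])

    stages = [
        np.array([(0, 0), (0, 1), (1, 0), (1, 1)], float),           # corners
        np.array([(.5, .5), (0, .5), (1, .5), (.5, 0), (.5, 1)]),    # centroid + faces
        np.array([(p, q) for p in (.25, .75) for q in (.25, .75)]),  # interior
    ]

    gx, gy = np.meshgrid(np.linspace(0, 1, 41), np.linspace(0, 1, 41))
    pts = np.empty((0, 2))
    for i, stage in enumerate(stages, 1):
        pts = np.vstack([pts, stage])
        x, y = pts[:, 0], pts[:, 1]
        # early stages are under-determined; lstsq returns the min-norm fit
        coef, *_ = np.linalg.lstsq(quad_terms(x, y), test_fn(x, y), rcond=None)
        pred = (quad_terms(gx.ravel(), gy.ravel()) @ coef).reshape(gx.shape)
        rms = np.sqrt(((test_fn(gx, gy) - pred) ** 2).mean())
        print(f"stage {i}: {len(pts):2d} samples, RMS error = {rms:.4f}")
    ```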

  11. Design progress of the solar UV-Vis-IR telescope (SUVIT) aboard SOLAR-C

    NASA Astrophysics Data System (ADS)

    Katsukawa, Y.; Ichimoto, K.; Suematsu, Y.; Hara, H.; Kano, R.; Shimizu, T.; Matsuzaki, K.

    2013-09-01

    We present the design progress of the Solar UV-Vis-IR Telescope (SUVIT) aboard the next Japanese solar mission SOLAR-C. SUVIT has an aperture diameter of ~1.4 m for achieving spectro-polarimetric observations with spatial and temporal resolution exceeding the Hinode Solar Optical Telescope (SOT). We have studied structural and thermal designs of the optical telescope as well as the optical interface between the telescope and the focal plane instruments. The focal plane instruments are installed in two packages, the filtergraph and spectrograph packages. The spectropolarimeter is the instrument dedicated to accurate polarimetry in three spectral windows at 525 nm, 854 nm, and 1083 nm for observing magnetic fields at both the photospheric and chromospheric layers. We developed an optical design for the spectrograph that accommodates both a conventional slit spectrograph and an integral field unit (IFU) for two-dimensional coverage. A feasibility study of the IFU, using fiber arrays consisting of rectangular cores, is underway.

  12. Designing and measuring the progress and impact of health research capacity strengthening initiatives

    PubMed Central

    2015-01-01

    Strengthening capacity in poorer countries to generate multi-disciplinary health research and to utilise research findings is one of the most effective ways of advancing the countries' health and development. This paper explores current knowledge about how to design health research capacity strengthening (RCS) programmes and how to measure their progress and impact. It describes a systematic, evidence-based approach for designing such programmes and highlights some of the key challenges that will be faced in the next 10 years. These include designing and implementing common frameworks to facilitate comparisons among capacity strengthening projects, and developing monitoring indicators that can capture their interactions with knowledge users and their impact on changes in health systems.

  13. A methodology for the design and evaluation of user interfaces for interactive information systems. Ph.D. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Farooq, Mohammad U.

    1986-01-01

    The definition of proposed research addressing the development and validation of a methodology for the design and evaluation of user interfaces for interactive information systems is given. The major objectives of this research are: the development of a comprehensive, objective, and generalizable methodology for the design and evaluation of user interfaces for information systems; the development of equations and/or analytical models to characterize user behavior and the performance of a designed interface; the design of a prototype system for the development and administration of user interfaces; and the design and use of controlled experiments to support the research and test/validate the proposed methodology. The proposed design methodology views the user interface as a virtual machine composed of three layers: an interactive layer, a dialogue manager layer, and an application interface layer. A command language model of user system interactions is presented because of its inherent simplicity and structured approach based on interaction events. All interaction events have a common structure based on common generic elements necessary for a successful dialogue. It is shown that, using this model, various types of interfaces could be designed and implemented to accommodate various categories of users. The implementation methodology is discussed in terms of how to store and organize the information.

  14. Estimation of design space for an extrusion-spheronization process using response surface methodology and artificial neural network modelling.

    PubMed

    Sovány, Tamás; Tislér, Zsófia; Kristó, Katalin; Kelemen, András; Regdon, Géza

    2016-09-01

    The application of the Quality by Design principles is one of the key issues of the recent pharmaceutical developments. In the past decade a lot of knowledge was collected about the practical realization of the concept, but there are still a lot of unanswered questions. The key requirement of the concept is the mathematical description of the effect of the critical factors and their interactions on the critical quality attributes (CQAs) of the product. The process design space (PDS) is usually determined by the use of design of experiment (DoE) based response surface methodologies (RSM), but inaccuracies in the applied polynomial models often resulted in the over/underestimation of the real trends and changes making the calculations uncertain, especially in the edge regions of the PDS. The completion of RSM with artificial neural network (ANN) based models is therefore a commonly used method to reduce the uncertainties. Nevertheless, since the different researches are focusing on the use of a given DoE, there is lack of comparative studies on different experimental layouts. Therefore, the aim of present study was to investigate the effect of the different DoE layouts (2 level full factorial, Central Composite, Box-Behnken, 3 level fractional and 3 level full factorial design) on the model predictability and to compare model sensitivities according to the organization of the experimental data set. It was revealed that the size of the design space could differ more than 40% calculated with different polynomial models, which was associated with a considerable shift in its position when higher level layouts were applied. The shift was more considerable when the calculation was based on RSM. The model predictability was also better with ANN based models. Nevertheless, both modelling methods exhibit considerable sensitivity to the organization of the experimental data set, and the use of design layouts is recommended, where the extreme values factors are more represented.
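
    To make the layout differences concrete, this sketch generates the candidate designs for three coded factors and counts how many runs sit at corners of the space, the edge regions where the abstract reports the largest uncertainty. The constructions are textbook ones, not the study's exact matrices.

    ```python
    import numpy as np
    from itertools import product

    # Candidate DoE layouts for three coded factors in [-1, 1].  Counting
    # corner runs shows how differently each layout covers the edge regions
    # of the design space.  Textbook constructions, illustrative only.
    def full_factorial(levels):
        return np.array(list(product(levels, repeat=3)), float)

    def box_behnken():
        runs = []
        for i, j in ((0, 1), (0, 2), (1, 2)):        # pairwise edge midpoints
            for a, b in product((-1.0, 1.0), repeat=2):
                p = [0.0, 0.0, 0.0]
                p[i], p[j] = a, b
                runs.append(p)
        runs.append([0.0, 0.0, 0.0])                 # center point
        return np.array(runs)

    def face_centered_ccd():
        cube = full_factorial((-1, 1))
        stars = np.array([s * np.eye(3)[k] for k in range(3) for s in (1, -1)])
        return np.vstack([cube, stars, np.zeros((1, 3))])

    for name, d in [("2-level full factorial", full_factorial((-1, 1))),
                    ("3-level full factorial", full_factorial((-1, 0, 1))),
                    ("Box-Behnken", box_behnken()),
                    ("face-centered CCD", face_centered_ccd())]:
        corners = int(np.sum(np.all(np.abs(d) == 1.0, axis=1)))
        print(f"{name:23s}: {len(d):2d} runs, {corners} corner runs")
    ```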

  15. Methodology to Improve Design of Accelerated Life Tests in Civil Engineering Projects

    PubMed Central

    Lin, Jing; Yuan, Yongbo; Zhou, Jilai; Gao, Jie

    2014-01-01

    For reliability testing an Energy Expansion Tree (EET) and a companion Energy Function Model (EFM) are proposed and described in this paper. Different from conventional approaches, the EET provides a more comprehensive and objective way to systematically identify external energy factors affecting reliability. The EFM introduces energy loss into a traditional Function Model to identify internal energy sources affecting reliability. The combination creates a sound way to enumerate the energies to which a system may be exposed during its lifetime. We input these energies into planning an accelerated life test, a Multi Environment Over Stress Test. The test objective is to discover weak links and interactions among the system and the energies to which it is exposed, and design them out. As an example, the methods are applied to the pipe in subsea pipeline. However, they can be widely used in other civil engineering industries as well. The proposed method is compared with current methods. PMID:25111800

  16. Basis of human factors methodology applied in the Westinghouse AP600 design

    SciTech Connect

    Carrera, J.P.; Easter, J.R. )

    1992-01-01

    The incident at Three Mile Island Unit 2 brought about an awareness that there is a need for a new perspective on nuclear power plant operator performance. It was discerned that besides executing control actions, the operator needs an additional role, that of systems supervisor: someone who considers plant health at the functional level of how all the plant processes are related and how they perform with regard to the high-level operational goals of the plant. Westinghouse has taken the initiative to apply these ideas in dealing with the operator by studying the work of Rasmussen of Denmark's Risø Laboratory, regarding knowledge-based behavior and the requirements for supporting the cognitive processes required of an operator. This has led to the Westinghouse Man-Machine-Interface System (MMIS) design process.

  17. Designing multidisciplinary longitudinal studies of human development: analyzing past research to inform methodology.

    PubMed

    Shulruf, Boaz; Morton, Susan; Goodyear-Smith, Felicity; O'Loughlin, Claire; Dixon, Robyn

    2007-09-01

    This review identifies key issues associated with the design of future longitudinal studies of human development. Sixteen international studies were compared for initial response and retention rate, sample size, type of data collected, and sampling frames. The studies had little information about the influences of fathers, extended family members, childcare, and educational institutions; the effects of peers; children's use of time; the needs of disabled children; urban versus rural environments; or the influence of genetic factors. A contemporary longitudinal study should include measures of physical and mental health, cognitive capacity, educational attainment, social adjustment, conduct and behavior, resiliency, and risk-taking behaviors. It needs to address genetic and intergenerational factors, cultural identity, and the influences of neighborhood, community, and wider social and political environments and to encompass outcomes at all life stages to systematically determine the role each factor plays in individuals' lives, including interactions within and across variables.

  1. Characterization and optimization of acoustic filter performance by experimental design methodology.

    PubMed

    Gorenflo, Volker M; Ritter, Joachim B; Aeschliman, Dana S; Drouin, Hans; Bowen, Bruce D; Piret, James M

    2005-06-20

    Acoustic cell filters operate at high separation efficiencies with minimal fouling and have provided a practical alternative for up to 200 L/d perfusion cultures. However, the operation of cell retention systems depends on several settings that should be adjusted depending on the cell concentration and perfusion rate. The impact of operating variables on the separation efficiency performance of a 10-L acoustic separator was characterized using a factorial design of experiments. For the recirculation mode of separator operation, bioreactor cell concentration, perfusion rate, power input, stop time and recirculation ratio were studied using a fractional factorial 2⁵⁻¹ design, augmented with axial and center point runs. One complete replicate of the experiment was carried out, consisting of 32 more runs, at 8 runs per day. Separation efficiency was the primary response and it was fitted by a second-order model using restricted maximum likelihood estimation. By backward elimination, the model equation for both experiments was reduced to 14 significant terms. The response surface model for the separation efficiency was tested using additional independent data to check the accuracy of its predictions, to explore robust operation ranges and to optimize separator performance. A recirculation ratio of 1.5 and a stop time of 2 s improved the separator performance over a wide range of separator operation. At a power input of 5 W, the broad range of robust, high separation efficiency (95% or higher) was extended to over 8 L/d. The reproducible model testing results over a total period of 3 months illustrate both the stable separator performance and the applicability of the model developed to long-term perfusion cultures.
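
    A minimal construction of the screening design named above, assuming the standard generator E = ABCD (resolution V); the factor ordering is taken from the abstract, but the design matrix is the textbook construction rather than the authors' run sheet.

    ```python
    import numpy as np
    from itertools import product

    # 2^(5-1) fractional factorial in coded units (-1/+1) with generator
    # E = ABCD (defining relation I = ABCDE, resolution V): 16 runs instead
    # of 32, with main effects clear of two-factor interactions.
    factors = ["cell_conc", "perfusion_rate", "power", "stop_time", "recirc_ratio"]
    base = np.array(list(product((-1, 1), repeat=4)))     # full 2^4 in A..D
    design = np.column_stack([base, base.prod(axis=1)])   # column E = ABCD
    print(f"{len(design)} runs")
    for row in design[:4]:                                # first few runs
        print(dict(zip(factors, map(int, row))))
    ```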

  2. The Carriage Of Multiresistant Bacteria After Travel (COMBAT) prospective cohort study: methodology and design

    PubMed Central

    2014-01-01

    Background Antimicrobial resistance (AMR) is one of the major threats to public health around the world. Besides the intense use and misuse of antimicrobial agents as the major force behind the increase in antimicrobial resistance, the exponential increase of international travel may also substantially contribute to the emergence and spread of AMR. However, knowledge on the extent to which international travel contributes to this is still limited. The Carriage Of Multiresistant Bacteria After Travel (COMBAT) study aims to: (1) determine the acquisition rate of multiresistant Enterobacteriaceae during foreign travel; (2) ascertain the duration of carriage of these micro-organisms; (3) determine the transmission rate within households; and (4) identify risk factors for acquisition, persistence of carriage and transmission of multiresistant Enterobacteriaceae. Methods/design The COMBAT-study is a large-scale multicenter longitudinal cohort study among travellers (n = 2001) and their non-travelling household members (n = 215). Faecal samples are collected before and immediately after travel and 1 month after return from all participants. Follow-up faecal samples are collected 3, 6 and 12 months after return from travellers (and their non-travelling household members) who acquired multiresistant Enterobacteriaceae. Questionnaires are collected from all participants at each time-point. Faecal samples are screened phenotypically for the presence of extended-spectrum beta-lactamase (ESBL) or carbapenemase-producing Enterobacteriaceae. Positive post-travel isolates from travellers with negative pre-travel samples are genotypically analysed for ESBL and carbapenemase genes with microarray and gene sequencing. Discussion The design and scale of the COMBAT-study will enable us to provide much needed detailed insights into the risks and dynamics of introduction and spread of ESBL- and carbapenemase-producing Enterobacteriaceae by healthy travellers and the potential need and

  3. Maintaining Exercise and Healthful Eating in Older Adults: The SENIOR Project II: Study Design and Methodology

    PubMed Central

    Clark, Phillip G.; Blissmer, Bryan J.; Greene, Geoffrey W.; Lees, Faith D.; Riebe, Deborah A.; Stamm, Karen E.

    2015-01-01

    The Study of Exercise and Nutrition in Older Rhode Islanders (SENIOR) Project II is an intervention study to promote the maintenance of both exercise and healthful eating in older adults. It is the second phase of an earlier study, SENIOR Project I, that originally recruited 1,277 community-dwelling older adults to participate in behavior-specific interventions designed to increase exercise and/or fruit and vegetable consumption. The general theoretical framework for this research is the Transtheoretical Model (TTM) of Health Behavior Change. The current intervention occurs over a 48-month period, using a manual, newsletters, and phone coaching calls. Annual assessments collect standardized data on behavioral outcomes (exercise and diet), TTM variables (stage of change and self-efficacy), psychosocial variables (social support, depression, resilience, and life satisfaction), physical activity and functioning (SF-36, Up and Go, Senior Fitness Test, and disability assessment), cognitive functioning (Trail Making Test and Forward and Backward Digit Span), physical measures (height, weight, and waist circumference), and demographics. The SENIOR Project II is designed to answer the following question as its primary objective: (1) Does an individualized active-maintenance intervention with older adults maintain greater levels of healthful exercise and dietary behaviors for four years, compared to a control condition? In addition, there are two secondary objectives: (2) What are the psychosocial factors associated with the maintenance of health-promoting behaviors in the very old? and (3) What are the effects of the maintenance of health-promoting behaviors on reported health outcomes, psychosocial measures, anthropometrics, and cognitive status? PMID:20955821

  4. Designing assisted living technologies ‘in the wild’: preliminary experiences with cultural probe methodology

    PubMed Central

    2012-01-01

    Background There is growing interest in assisted living technologies to support independence at home. Such technologies should ideally be designed ‘in the wild’, i.e. taking account of how real people live in real homes and communities. The ATHENE (Assistive Technologies for Healthy Living in Elders: Needs Assessment by Ethnography) project seeks to illuminate the living needs of older people and facilitate the co-production with older people of technologies and services. This paper describes the development of a cultural probe tool produced as part of the ATHENE project and how it was used to support home visit interviews with elders with a range of ethnic and social backgrounds, family circumstances, health conditions and assisted living needs. Method Thirty-one people aged 60 to 98 were visited in their homes on three occasions. Following an initial interview, participants were given a set of cultural probe materials, including a digital camera and the ‘Home and Life Scrapbook’ to complete in their own time for one week. Activities within the Home and Life Scrapbook included maps (indicating their relationships to people, places and objects), lists (e.g. likes, dislikes, things they were concerned about, things they were comfortable with), wishes (things they wanted to change or improve), body outline (indicating symptoms or impairments), home plan (room layouts of their homes to indicate spaces and objects used) and a diary. After one week, the researcher and participant reviewed any digital photos taken and the content of the Home and Life Scrapbook as part of the home visit interview. Findings The cultural probe facilitated collection of visual, narrative and material data by older people, and appeared to generate high levels of engagement from some participants. However, others used the probe minimally or not at all for various reasons including limited literacy, physical problems (e.g. holding a pen), lack of time or energy, limited emotional or

  5. Design and Implementation of an Evaluation Methodology for the NASA Faculty Fellowship Program (NFFP)

    NASA Astrophysics Data System (ADS)

    Estes, M. G.; Miller, M.; Freeman, M.; Watson, C.; Khalkho, M.; Smith, T.

    2005-12-01

    The NFFP was created in 2002 to accommodate the needs and capabilities of both NASA and the university community. The program combines aspects of two successful former NASA programs, the NASA/ASEE Summer Faculty Fellowship Program and the NASA/USRA JOint VEnture (JOVE) program. The NFFP contributes directly to NASA's strategic goal to "inspire and motivate students to pursue careers in science, technology, engineering, and mathematics", and NASA's Office of Education strategic objective to "strengthen NASA's involvement in higher education to enhance the nation's science and technology capability in NASA related fields to help meet NASA's future personnel needs." The primary goals of the NFFP are to increase the quality and quantity of research collaborations between NASA and the academic community that contribute to Agency research objectives; provide research opportunities for college and university faculty that serve to enrich their knowledge base; involve faculty in cutting-edge science and engineering challenges related to NASA's strategic enterprises, while providing exposure to the methods and practices of real-world research; facilitate interdisciplinary networking; and establish an effective education and outreach activity to foster greater awareness of the program. Participants are required to submit a research report and complete a program evaluation. The NFFP is evaluated using Web-based survey instruments in the NASA Education Evaluation Information System (NEEIS) that have been designed to collect data that measure program activities and accomplishments against program goals and NASA's education programs evaluation criteria. Data are collected from Faculty Fellows, NASA Colleagues, and students who accompanied Faculty Fellows. Participant Feedback Forms gather quantitative and qualitative information on research accomplishments, the benefits and impacts of the program, and overall program evaluation data. Follow-up feedback instruments are designed to

  6. Optimization of critical factors to enhance polyhydroxyalkanoates (PHA) synthesis by mixed culture using Taguchi design of experimental methodology.

    PubMed

    Venkata Mohan, S; Venkateswar Reddy, M

    2013-01-01

    Optimizing different factors is crucial for enhancing mixed-culture bioplastics (polyhydroxyalkanoates, PHA) production. A design of experiments (DOE) methodology using a Taguchi orthogonal array (OA) was applied to evaluate the influence and specific function of eight important factors (iron, glucose concentration, VFA concentration, VFA composition, nitrogen concentration, phosphorous concentration, pH, and microenvironment) on bioplastics production. A mixed-level 2¹ × 3⁷ design was used, arranged in an L18 experimental matrix (18 experimental trials); all factors were assigned three levels except iron concentration, which had two. Among all the factors, microenvironment influenced bioplastics production substantially (contributing 81%), followed by pH (11%) and glucose concentration (2.5%). Validation experiments performed at the resulting optimum conditions yielded improved PHA production. Good substrate degradation (as COD) of 68% was registered during PHA production. Dehydrogenase and phosphatase enzymatic activities were monitored during process operation. PMID:23201522
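
    The percent-contribution figures quoted above come from a sum-of-squares decomposition over the orthogonal array. A generic single-factor version is sketched below with invented yields; the study's L18 array and responses are not reproduced.

    ```python
    import numpy as np

    # Taguchi-style percent contribution: a factor's share of the total
    # sum of squares.  One hypothetical 3-level factor with six runs per
    # level; a real L18 analysis repeats this per factor column.
    def percent_contribution(levels, y):
        y, levels = np.asarray(y, float), np.asarray(levels)
        grand = y.mean()
        ss_total = ((y - grand) ** 2).sum()
        ss_factor = sum((levels == l).sum() * (y[levels == l].mean() - grand) ** 2
                        for l in np.unique(levels))
        return 100.0 * ss_factor / ss_total

    levels = np.repeat([1, 2, 3], 6)                  # hypothetical assignments
    y = np.array([12, 14, 13, 11, 15, 13,             # invented PHA yields
                  22, 25, 24, 23, 21, 24,
                  31, 29, 33, 30, 32, 31], float)
    print(f"factor contribution ~ {percent_contribution(levels, y):.1f} %")
    ```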

  7. Design and Optimization of a Process for Sugarcane Molasses Fermentation by Saccharomyces cerevisiae Using Response Surface Methodology

    PubMed Central

    El-Gendy, Nour Sh.; Madian, Hekmat R.; Amr, Salem S. Abu

    2013-01-01

    A statistical model was developed in this study to describe bioethanol production through a batch fermentation process of sugarcane molasses by locally isolated Saccharomyces cerevisiae Y-39. Response surface methodology (RSM) based on a central composite face-centered design (CCFD) was employed to statistically evaluate and optimize the conditions for maximum bioethanol production and to study the significance and interaction of incubation period, initial pH, incubation temperature, and molasses concentration on bioethanol yield. With the use of the developed quadratic model equation, a maximum ethanol production of 255 g/L was obtained in a batch fermentation process at optimum operating conditions of approximately 71 h, pH 5.6, 38°C, molasses concentration of 18 wt.%, and 100 rpm. PMID:24222769

  8. Some Thoughts on "Using Learning Progressions to Design Vertical Scales That Support Coherent Inferences about Student Growth"

    ERIC Educational Resources Information Center

    Kingston, Neal M.; Broaddus, Angela; Lao, Hongling

    2015-01-01

    Briggs and Peck (2015) have written a thought-provoking article on the use of learning progressions in the design of vertical scales that support inferences about student growth. Organized learning models, including learning trajectories, learning progressions, and learning maps have been the subject of research for many years, but more recently…

  9. Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars James C.; Acikmese, Behcet; Mandic, Milan

    2012-01-01

    A software tool is used to demonstrate the feasibility of Touch and Go (TAG) sampling for Asteroid Sample Return missions. TAG is a concept whereby a spacecraft is in contact with the surface of a small body, such as a comet or asteroid, for a few seconds or less before ascending to a safe location away from the small body. Previous work at JPL developed the G-TAG simulation tool, which provides a software environment for fast, multi-body simulations of the TAG event. G-TAG is described in Multibody Simulation Software Testbed for Small-Body Exploration and Sampling, (NPO-47196) NASA Tech Briefs, Vol. 35, No. 11 (November 2011), p.54. This current innovation adapts this tool to a mission that intends to return a sample from the surface of an asteroid. In order to demonstrate the feasibility of the TAG concept, the new software tool was used to generate extensive simulations that demonstrate the designed spacecraft meets key requirements. These requirements state that contact force and duration must be sufficient to ensure that enough material from the surface is collected in the brushwheel sampler (BWS), and that the spacecraft must survive the contact and must be able to recover and ascend to a safe position, and maintain velocity and orientation after the contact.

  10. Characterizing the Response of Composite Panels to a Pyroshock Induced Environment using Design of Experiments Methodology

    NASA Technical Reports Server (NTRS)

    Parsons, David S.; Ordway, David O.; Johnson, Kenneth L.

    2013-01-01

    This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.

  11. A novel study design for antibiotic trials in acute exacerbations of COPD: MAESTRAL methodology

    PubMed Central

    Wilson, Robert; Anzueto, Antonio; Miravitlles, Marc; Arvis, Pierre; Faragó, Geneviève; Haverstock, Daniel; Trajanovic, Mila; Sethi, Sanjay

    2011-01-01

    Antibiotics, along with oral corticosteroids, are standard treatments for acute exacerbations of chronic obstructive pulmonary disease (AECOPD). The ultimate aims of treatment are to minimize the impact of the current exacerbation, and by ensuring complete resolution, reduce the risk of relapse. In the absence of superiority studies of antibiotics in AECOPD, evidence of the relative efficacy of different drugs is lacking, and so it is difficult for physicians to select the most effective antibiotic. This paper describes the protocol and rationale for MAESTRAL (moxifloxacin in AECBs [acute exacerbation of chronic bronchitis] trial; www.clinicaltrials.gov: NCT00656747), one of the first antibiotic comparator trials designed to show superiority of one antibiotic over another in AECOPD. It is a prospective, multinational, multicenter, randomized, double-blind controlled study of moxifloxacin (400 mg PO [per os] once daily for 5 days) vs amoxicillin/clavulanic acid (875/125 mg PO twice daily for 7 days) in outpatients with COPD and chronic bronchitis suffering from an exacerbation. MAESTRAL uses an innovative primary endpoint of clinical failure: the requirement for additional or alternate treatment for the exacerbation at 8 weeks after the end of antibiotic therapy, powered for superiority. Patients enrolled are those at high-risk of treatment failure, and all are experiencing an Anthonisen type I exacerbation. Patients are stratified according to oral corticosteroid use to control their effect across antibiotic treatment arms. Secondary endpoints include quality of life, symptom assessments and health care resource use. PMID:21760724

  12. Design methodology and performance analysis of a wideband 90° phase switch for radiometer applications

    SciTech Connect

    Villa, Enrique; Aja, Beatriz; Cagigas, Jaime; Fuente, Luisa de la; Artal, Eduardo

    2013-12-15

    This paper presents the analysis, design, and characterization of a wideband 90° phase switch in Ka-band. The phase switch is based on two microstrip bandpass filters in which the commutation is performed by a novel single-pole double-throw (SPDT) switch. The analysis of π-network bandpass filters is provided, obtaining the phase difference and amplitude imbalance between filters and their scattering parameters; test results show an average phase difference of 88.9° ± 5° and an amplitude imbalance of 0.15 dB from 24 to 37 GHz. The new broadband SPDT switch is based on a coplanar waveguide-to-slotline-to-microstrip structure, which enables a full planar integration with shifting branches. PIN diodes are used to perform the switching between outputs. The SPDT shows isolation better than 19 dB, insertion loss of around 1.8 dB, and return loss better than 15 dB. The full integration of the phase switch achieves a return loss better than 11 dB and insertion loss of around 4 dB over the band 26–36 GHz, with an average phase difference of 87.1° ± 4° and an average amplitude imbalance of 0.3 dB. It provides excellent performance for this frequency range, suitable for radio-astronomy receivers.

  13. Rationale, design and methodology for the Navajo Health and Nutrition Survey.

    PubMed

    White, L L; Goldberg, H I; Gilbert, T J; Ballew, C; Mendlein, J M; Peter, D G; Percy, C A; Mokdad, A H

    1997-10-01

    As recently as 1990, there was no reservation-wide, population-based health status information about Navajo Indians. To remedy this shortcoming, the Navajo Health and Nutrition Survey was conducted from 1991 to 1992 to assess the health and nutritional status of Navajo Reservation residents using a population-based sample. Using a three-stage design, a representative sample of reservation households was selected for inclusion. All members of selected households 12 y of age and older were invited to participate. A total of 985 people in 459 households participated in the study. Survey protocols were modeled on those of previous national surveys and included a standard blood chemistry profile, complete blood count, oral glucose tolerance test, blood pressure, anthropometric measurements, a single 24-h dietary recall and a questionnaire on health behaviors. The findings from this survey, reported in the accompanying papers, inform efforts to prevent and control chronic disease among the Navajo. Lessons learned from this survey may be of interest to those conducting similar surveys in other American Indian and Alaska Native populations. PMID:9339173

  14. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than abundance estimates derived from sample scenarios with many sample units of small area. Accuracy and precision of abundance estimates should therefore be weighed during the sample design process, with study goals and objectives fully recognized; in practice, and with consequence, this consideration is often an afterthought that occurs during the data analysis process.
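
    A compact version of the N-mixture machinery referenced above: simulate replicated point counts at R sites and recover abundance and detection probability by maximizing the binomial-Poisson likelihood (Royle-style). All settings are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import binom, poisson

    # Simulate replicated point counts (R sites, T visits) and fit a
    # binomial-Poisson N-mixture model by maximum likelihood.
    rng = np.random.default_rng(1)
    R, T, lam_true, p_true = 50, 4, 6.0, 0.4
    N = rng.poisson(lam_true, R)                      # latent abundances
    y = rng.binomial(N[:, None], p_true, (R, T))      # replicated counts

    def nll(theta, K=60):
        lam, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
        ll = 0.0
        for i in range(R):                            # marginalize latent N_i
            n = np.arange(y[i].max(), K + 1)
            lik = poisson.pmf(n, lam) * np.prod(binom.pmf(y[i][:, None], n, p), axis=0)
            ll += np.log(lik.sum())
        return -ll

    fit = minimize(nll, x0=[np.log(5.0), 0.0], method="Nelder-Mead")
    lam_hat = np.exp(fit.x[0])
    p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
    print(f"lambda_hat = {lam_hat:.2f} (true {lam_true}), p_hat = {p_hat:.2f} (true {p_true})")
    ```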

  15. Experimental Studies of the Heat Transfer to RBCC Rocket Nozzles for CFD Application to Design Methodologies

    NASA Technical Reports Server (NTRS)

    Santoro, Robert J.; Pal, Sibtosh

    1999-01-01

    Rocket thrusters for Rocket Based Combined Cycle (RBCC) engines typically operate with hydrogen/oxygen propellants in a very compact space. Packaging considerations lead to designs with either axisymmetric or two-dimensional throat sections. Nozzles tend to be either two- or three-dimensional. Heat transfer characteristics, particularly in the throat, where the peak heat flux occurs, are not well understood. Heat transfer predictions for these small thrusters have been made with one-dimensional analysis such as the Bartz equation or scaling of test data from much larger thrusters. The current work addresses this issue with an experimental program that examines the heat transfer characteristics of a gaseous oxygen (GO2)/gaseous hydrogen (GH2) two-dimensional compact rocket thruster. The experiments involved measuring the axial wall temperature profile in the nozzle region of a water-cooled gaseous oxygen/gaseous hydrogen rocket thruster at a pressure of 3.45 MPa. The wall temperature measurements in the thruster nozzle in concert with Bartz's correlation are utilized in a one-dimensional model to obtain axial profiles of nozzle wall heat flux.
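
    For reference, the one-dimensional benchmark mentioned in the abstract, the Bartz equation, is compact enough to state in code. The chamber pressure matches the abstract's 3.45 MPa; the geometry and gas properties are rough placeholders rather than the experiment's values.

    ```python
    # Bartz's correlation for the hot-gas-side heat transfer coefficient at
    # a nozzle throat.  Property values are rough placeholders for GO2/GH2
    # combustion gas; sigma (boundary-layer property correction) is set to 1.
    def bartz_h(Dt, pc, cstar, mu, cp, Pr, rc, area_ratio=1.0, sigma=1.0):
        """h [W/m^2-K]; Dt throat diameter [m], pc chamber pressure [Pa],
        cstar characteristic velocity [m/s], mu viscosity [Pa s],
        cp specific heat [J/kg-K], rc throat radius of curvature [m]."""
        return (0.026 / Dt**0.2) * (mu**0.2 * cp / Pr**0.6) \
               * (pc / cstar)**0.8 * (Dt / rc)**0.1 * area_ratio**0.9 * sigma

    h = bartz_h(Dt=0.01, pc=3.45e6, cstar=2300.0, mu=1.0e-4,
                cp=3600.0, Pr=0.5, rc=0.005)
    print(f"throat h ~ {h / 1000.0:.0f} kW/m^2-K")
    ```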

  16. Design Methodology And Qualification Tests Results For A Highly Integrated And Space Qualified Point Of Load Converter

    NASA Astrophysics Data System (ADS)

    Vassal, Marie-Cecile; Dubus, Patrick; Fiant, Nicolas

    2011-10-01

    3D Plus developed a highly miniaturized and Space qualified Point of Load (POL) Converter to power modern fast digital electronics such as ASICs, FPGAs and Memory devices that require low voltages with a high precision regulation and excellent dynamic performance under large load transients. The POL Converter is hardened by design thanks to specific radiation effects mitigation techniques and space design de-rating rules. It is built with a space qualified 3D System-In-Package (SIP) technology and embeds 113 add-on parts spread over 3 stacked layers. Thanks to the unique 3D Plus technology, the device size is limited to 25 x 26.5 x 10 mm. This paper discusses the converter topology trade-offs and highlights some final design solutions implemented to achieve the best compromise between efficiency, dynamic performance, protection/flexibility and radiation hardening level. The product implementation and its electrical test results are presented. The radiation hardening strategy and the Total Ionizing Dose (TID), Single Event Latch-up (SEL) and Single Event Effect (SEE) test methodology and results are also described. A special focus is placed on SEE tests, for which the POL Converter was rebuilt with "decap" add-on parts and exposed under the beam for detailed SEE behavior measurements.

  17. A sizing-design methodology for hybrid fuel cell power systems and its application to an unmanned underwater vehicle

    NASA Astrophysics Data System (ADS)

    Cai, Q.; Brett, D. J. L.; Browning, D.; Brandon, N. P.

    Hybridizing a fuel cell with an energy storage unit (battery or supercapacitor) combines the advantages of each device to deliver a system with high efficiency, low emissions, and extended operation compared to a purely fuel cell or battery/supercapacitor system. However, the benefits of such a system can only be realised if the system is properly designed and sized, based on the technologies available and the application involved. In this work we present a sizing-design methodology for hybridisation of a fuel cell with a battery or supercapacitor for applications with a cyclic load profile with two discrete power levels. As an example of the method's application, the design process for selecting the energy storage technology, sizing it for the application, and determining the fuel load/range limitations, is given for an unmanned underwater vehicle (UUV). A system level mass and energy balance shows that hydrogen and oxygen storage systems dominate the mass and volume of the energy system and consequently dictate the size and maximum mission duration of a UUV.
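
    The two-level sizing logic reduces to a few lines: rate the fuel cell at the cycle-average power and size the energy storage unit to buffer the high-power phase. The load profile below is invented, not the UUV case study's.

    ```python
    # Two-level cyclic load: fuel cell rated at cycle-average power, storage
    # buffers the high-power phase.  Rating at the average makes the cycle
    # energy-neutral for the storage (it recharges fully during the low
    # phase).  All numbers are invented, not the UUV case study's.
    def size_hybrid(p_high, t_high, p_low, t_low, usable_fraction=0.8):
        p_fc = (p_high * t_high + p_low * t_low) / (t_high + t_low)   # W
        buffered = (p_high - p_fc) * t_high                           # J from storage
        e_storage = buffered / usable_fraction                        # J rated
        return p_fc, e_storage

    p_fc, e_st = size_hybrid(p_high=3000.0, t_high=600.0,   # 10 min sprint
                             p_low=300.0, t_low=5400.0)     # 90 min cruise
    print(f"fuel cell ~ {p_fc:.0f} W, storage ~ {e_st / 3600.0:.0f} Wh rated")
    ```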

  18. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.
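
    The sensitivity-derivative idea scales down to a one-equation analogue: differentiate the quasi-one-dimensional area-Mach relation implicitly and verify against a finite difference. This is purely illustrative; the paper works with the thin-layer Navier-Stokes equations and mesh sensitivities.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Sensitivity of supersonic exit Mach number to nozzle area ratio from
    # the quasi-1-D isentropic area-Mach relation: analytic (implicit
    # differentiation) versus finite difference.
    g = 1.4                                   # ratio of specific heats

    def area_ratio(M):
        t = (2.0 / (g + 1.0)) * (1.0 + (g - 1.0) / 2.0 * M**2)
        return t ** ((g + 1.0) / (2.0 * (g - 1.0))) / M

    def dA_dM(M):
        """Closed form: d(A/A*)/dM = (A/A*)(M^2 - 1) / (M (1 + (g-1)/2 M^2))."""
        return area_ratio(M) * (M**2 - 1.0) / (M * (1.0 + (g - 1.0) / 2.0 * M**2))

    AR = 4.0
    M = brentq(lambda m: area_ratio(m) - AR, 1.0001, 10.0)   # supersonic root
    analytic = 1.0 / dA_dM(M)                                # dM/d(A/A*)
    h = 1e-4
    fd = (brentq(lambda m: area_ratio(m) - (AR + h), 1.0001, 10.0) - M) / h
    print(f"M = {M:.4f}, dM/dAR: analytic {analytic:.5f}, finite-diff {fd:.5f}")
    ```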