Science.gov

Sample records for risk-informed design methods

  1. Risk-Informed Monitoring, Verification and Accounting (RI-MVA). An NRAP White Paper Documenting Methods and a Demonstration Model for Risk-Informed MVA System Design and Operations in Geologic Carbon Sequestration

    SciTech Connect

    Unwin, Stephen D.; Sadovsky, Artyom; Sullivan, E. C.; Anderson, Richard M.

    2011-09-30

    This white paper accompanies a demonstration model that implements methods for the risk-informed design of monitoring, verification and accounting (RI-MVA) systems in geologic carbon sequestration projects. The intent is that this model will ultimately be integrated with, or interfaced with, the National Risk Assessment Partnership (NRAP) integrated assessment model (IAM). The RI-MVA methods described here apply optimization techniques in the analytical environment of NRAP risk profiles to allow systematic identification and comparison of the risk and cost attributes of MVA design options.
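
    As a hedged sketch of that risk/cost comparison (the design options and all numbers below are illustrative placeholders, not NRAP results), one can screen MVA design options for Pareto optimality in deployment cost versus residual leakage risk:

        from dataclasses import dataclass

        @dataclass
        class MVADesign:
            name: str
            cost: float           # $M: monitoring deployment and operations (assumed)
            residual_risk: float  # $M: expected loss given this monitoring layout (assumed)

        designs = [
            MVADesign("sparse wells",       cost=2.0, residual_risk=9.0),
            MVADesign("dense wells",        cost=6.0, residual_risk=3.5),
            MVADesign("wells + 4D seismic", cost=8.0, residual_risk=1.0),
        ]

        def pareto_front(options):
            """Keep options not dominated on both cost and residual risk."""
            return [d for d in options
                    if not any(o.cost <= d.cost and o.residual_risk < d.residual_risk
                               for o in options if o is not d)]

        for d in sorted(pareto_front(designs), key=lambda d: d.cost):
            print(f"{d.name}: cost={d.cost}, risk={d.residual_risk}, total={d.cost + d.residual_risk}")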

  2. Risk Informed Design as Part of the Systems Engineering Process

    NASA Technical Reports Server (NTRS)

    Deckert, George

    2010-01-01

    This slide presentation reviews the importance of Risk Informed Design (RID) as an important feature of the systems engineering process. RID is based on the principle that risk is a design commodity such as mass, volume, cost or power. It also reviews Probabilistic Risk Assessment (PRA) as it is used in the product life cycle in the development of NASA's Constellation Program.

  3. Risk Informed Assessment of Regulatory and Design Requirements for Future Nuclear Power Plants - Final Technical Report

    SciTech Connect

    Ritterbusch, Stanley; Golay, Michael; Duran, Felicia; Galyean, William; Gupta, Abhinav; Dimitrijevic, Vesna; Malsch, Marty

    2003-01-29

    OAK B188 Summary of methods proposed for risk informing the design and regulation of future nuclear power plants. All elements of the historical design and regulation process are preserved, but the methods proposed for new plants use probabilistic risk assessment methods as the primary decision making tool.

  4. Risk Informed Design and Analysis Criteria for Nuclear Structures

    SciTech Connect

    Salmon, Michael W.

    2015-06-17

    Target performance can be achieved by defining the design basis earthquake (DBE) ground motion from the results of a probabilistic seismic hazard assessment and by introducing known levels of conservatism in the design above the DBE. ASCE 4, ASCE 43, and DOE-STD-1020 define the DBE at an annual exceedance frequency of 4×10⁻⁴ and introduce only slight levels of conservatism in response; they assume code capacities target roughly a 98% non-exceedance probability (NEP). There is a need for a uniform target (98% NEP) for code developers (ACI, AISC, etc.) to aim for. In considering strengthening options, one must also weigh the cost against the risk reduction achieved.
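
    As a numerical illustration of the risk equation behind such criteria (the technique is standard; all hazard and fragility parameter values below are invented, not taken from the slides), the annual failure frequency convolves the seismic hazard curve with a lognormal component fragility:

        import numpy as np
        from scipy.stats import lognorm

        K0, kH = 1e-4, 2.5           # power-law hazard H(a) = K0*(a/a_dbe)**(-kH)  (assumed)
        a_dbe = 1.0                  # anchor: H(1.0 g) = 1e-4 per year             (assumed)
        Am, beta = 1.8, 0.4          # median capacity (g) and log-std dev          (assumed)

        a = np.linspace(0.05, 5.0, 2000)            # ground motion grid (g)
        H = K0 * (a / a_dbe) ** (-kH)               # annual exceedance frequency
        frag = lognorm(s=beta, scale=Am).cdf(a)     # P(failure | ground motion a)
        dHda = np.gradient(H, a)                    # hazard density (negative slope)
        p_fail = np.trapz(-dHda * frag, a)          # annual failure frequency
        # Compare to a target performance goal, e.g. 1e-4 to 1e-5 per year
        # depending on the seismic design category.
        print(f"annual P_f ~ {p_fail:.2e}")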

  5. Risk-Informed Safety Margin Characterization Methods Development Work

    SciTech Connect

    Smith, Curtis L; Ma, Zhegang; Tom Riley; Mandelli, Diego; Nielsen, Joseph W; Alfonsi, Andrea; Rabiti, Cristian

    2014-09-01

    This report summarizes the research activity developed during fiscal year 2014 within the Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability (LWRS) campaign. This research activity is complementary to the one presented in the INL/EXT-??? report, which shows advances in probabilistic risk assessment analysis using RAVEN and RELAP-7 in conjunction with novel flooding simulation tools. Here we present several analyses that demonstrate the value of the RISMC approach for assessing the risk associated with nuclear power plants (NPPs). We focus on simulation-based PRA which, in contrast to classical PRA, heavily employs system simulator codes. First, we compare these two types of analyses, classical and RISMC, for a boiling water reactor (BWR) station blackout (SBO) initiating event. Second, we present an extended BWR SBO analysis using RAVEN and RELAP-5 which addresses the comments and suggestions received about the original analysis presented in INL/EXT-???. This time we focus more on the stochastic analysis, such as the probability of core damage, and on the determination of the most risk-relevant factors. We also show some preliminary results regarding the comparison between RELAP5-3D and the new code RELAP-7 for a simplified pressurized water reactor system. Lastly, we present some conceptual ideas regarding the possibility of extending the RISMC capabilities from an off-line tool (i.e., a PRA analysis tool) to an online tool. In this new configuration, RISMC capabilities could be used to assist and inform reactor operators during real accident scenarios.
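
    A minimal sketch of the simulation-based PRA idea, with all distributions assumed for illustration rather than taken from the INL analyses: sample uncertain SBO timing variables, push each sample through a (here, deliberately trivial) plant response rule, and count core damage outcomes:

        import random

        def sbo_trial(rng):
            battery_h  = rng.lognormvariate(1.4, 0.3)   # DC battery life, ~4 h median (assumed)
            recovery_h = rng.expovariate(1.0 / 6.0)     # offsite power recovery, 6 h mean (assumed)
            heatup_h   = 1.5                            # time from DC loss to core damage (assumed)
            return recovery_h > battery_h + heatup_h    # True -> core damage in this sample

        rng = random.Random(42)
        n = 100_000
        cd = sum(sbo_trial(rng) for _ in range(n))
        print(f"conditional core damage probability ~ {cd / n:.4f}")

    In the real analyses the response rule is replaced by a full thermal-hydraulic simulator run (RELAP), which is exactly what distinguishes the RISMC approach from classical event-tree/fault-tree quantification.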

  6. Risk-informed assessment of regulatory and design requirements for future nuclear power plants. Annual report

    SciTech Connect

    2000-08-01

    OAK B188 Risk-informed assessment of regulatory and design requirements for future nuclear power plants. Annual report. The overall goal of this research project is to support innovation in new nuclear power plant designs. This project is examining the implications, for future reactors and future safety regulation, of utilizing a new risk-informed regulatory system as a replacement for the current system. This innovation will be made possible through development of a scientific, highly risk-informed approach for the design and regulation of nuclear power plants. This approach will include the development and/or confirmation of corresponding regulatory requirements and industry standards. The major impediment to long term competitiveness of new nuclear plants in the U.S. is the capital cost component--which may need to be reduced on the order of 35% to 40% for Advanced Light Water Reactors (ALWRs) such as System 80+ and the Advanced Boiling Water Reactor (ABWR). The required cost reduction for an ALWR such as AP600 or AP1000 would be expected to be less. Such reductions in capital cost will require a fundamental reevaluation of the industry standards and regulatory bases under which nuclear plants are designed and licensed. Fortunately, there is now an increasing awareness that many of the existing regulatory requirements and industry standards are not significantly contributing to safety and reliability and, therefore, are unnecessarily adding to nuclear plant costs. Not only does this degrade the economic competitiveness of nuclear energy, it results in unnecessary costs to the American electricity consumer. While addressing these concerns, this research project will be coordinated with current efforts of industry and NRC to develop risk-informed, performance-based regulations that affect the operation of the existing nuclear plants; however, this project will go further by focusing on the design of new plants.

  7. Integrating Safety Assessment Methods using the Risk Informed Safety Margins Characterization (RISMC) Approach

    SciTech Connect

    Curtis Smith; Diego Mandelli

    2013-03-01

    Safety is central to the design, licensing, operation, and economics of nuclear power plants (NPPs). As the current light water reactor (LWR) NPPs age beyond 60 years, there are possibilities for increased frequency of systems, structures, and components (SSC) degradations or failures that initiate safety-significant events, reduce existing accident mitigation capabilities, or create new failure modes. Plant designers commonly “over-design” portions of NPPs and provide robustness in the form of redundant and diverse engineered safety features to ensure that, even in the case of well-beyond design basis scenarios, public health and safety will be protected with a very high degree of assurance. This form of defense-in-depth is a reasoned response to uncertainties and is often referred to generically as “safety margin.” Historically, specific safety margin provisions have been formulated primarily based on engineering judgment backed by a set of conservative engineering calculations. The ability to better characterize and quantify safety margin is important to improved decision making about LWR design, operation, and plant life extension. A systematic approach to characterization of safety margins and the subsequent margin management options represents a vital input to the licensee and regulatory analysis and decision making that will be involved. In addition, as research and development (R&D) in the LWR Sustainability (LWRS) Program and other collaborative efforts yield new data, sensors, and improved scientific understanding of the physical processes that govern the aging and degradation of plant SSCs, needs and opportunities to better optimize plant safety and performance will become known. To support decision making related to economics, reliability, and safety, the RISMC Pathway provides methods and tools that enable mitigation options known as margins management strategies. The purpose of the RISMC Pathway R&D is to support plant decisions for risk-informed

  8. Nuclear Energy Research Initiative. Risk Informed Assessment of Regulatory and Design Requirements for Future Nuclear Power Plants. Annual Report

    SciTech Connect

    Ritterbusch, S.E.

    2000-08-01

    The overall goal of this research project is to support innovation in new nuclear power plant designs. This project is examining the implications, for future reactors and future safety regulation, of utilizing a new risk-informed regulatory system as a replacement for the current system. This innovation will be made possible through development of a scientific, highly risk-informed approach for the design and regulation of nuclear power plants. This approach will include the development and/or confirmation of corresponding regulatory requirements and industry standards. The major impediment to long term competitiveness of new nuclear plants in the U.S. is the capital cost component--which may need to be reduced on the order of 35% to 40% for Advanced Light Water Reactors (ALWRs) such as System 80+ and the Advanced Boiling Water Reactor (ABWR). The required cost reduction for an ALWR such as AP600 or AP1000 would be expected to be less. Such reductions in capital cost will require a fundamental reevaluation of the industry standards and regulatory bases under which nuclear plants are designed and licensed. Fortunately, there is now an increasing awareness that many of the existing regulatory requirements and industry standards are not significantly contributing to safety and reliability and, therefore, are unnecessarily adding to nuclear plant costs. Not only does this degrade the economic competitiveness of nuclear energy, it results in unnecessary costs to the American electricity consumer. While addressing these concerns, this research project will be coordinated with current efforts of industry and NRC to develop risk-informed, performance-based regulations that affect the operation of the existing nuclear plants; however, this project will go farther by focusing on the design of new plants.

  9. Cyber-Informed Engineering: The Need for a New Risk Informed and Design Methodology

    SciTech Connect

    Price, Joseph Daniel; Anderson, Robert Stephen

    2015-06-01

    Current engineering and risk management methodologies do not contain the foundational assumptions required to address the intelligent adversary’s capabilities in malevolent cyber attacks. Current methodologies focus on equipment failures or human error as initiating events for a hazard, while cyber attacks use the functionality of a trusted system to perform operations outside of the intended design and without the operator’s knowledge. These threats can bypass or manipulate traditionally engineered safety barriers and present false information, invalidating the fundamental basis of a safety analysis. Cyber threats must be fundamentally analyzed from a completely new perspective where neither equipment nor human operation can be fully trusted. A new risk analysis and design methodology needs to be developed to address this rapidly evolving threatscape.

  10. Nine steps to risk-informed wellhead protection and management: Methods and application to the Burgberg Catchment

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Enzenhoefer, R.; Bunk, T.

    2013-12-01

    Wellhead protection zones are commonly delineated via advective travel time analysis without considering any aspects of model uncertainty. In the past decade, research efforts produced quantifiable risk-based safety margins for protection zones. They are based on well vulnerability criteria (e.g., travel times, exposure times, peak concentrations) cast into a probabilistic setting, i.e., they consider model and parameter uncertainty. Practitioners still refrain from applying these new techniques for mainly three reasons: (1) they fear the possibly cost-intensive additional areal demand of probabilistic safety margins, (2) probabilistic approaches are allegedly complex, not readily available, and consume huge computing resources, and (3) uncertainty bounds are fuzzy, whereas final decisions are binary. The primary goal of this study is to show that these reservations are unjustified. We present a straightforward and computationally affordable framework based on a novel combination of well-known tools (e.g., MODFLOW, PEST, Monte Carlo). This framework provides risk-informed decision support for robust and transparent wellhead delineation under uncertainty. Thus, probabilistic risk-informed wellhead protection is possible with methods readily available to practitioners. As vivid proof of concept, we illustrate our key points on a pumped karstic well catchment located in Germany. In the case study, we show that reliability levels can be increased at no increase in delineated area by re-allocating the existing delineated area: delineated low-risk areas are simply swapped for previously non-delineated high-risk areas. We also show that further improvements may often be available at only a small additional delineated area. Depending on the context, increases or reductions of delineated area directly translate to costs and benefits, if the land is priced, or if land owners need to be compensated for land use restrictions.
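
    A toy version of the delineation logic, with synthetic capture probabilities standing in for the MODFLOW/PEST/Monte Carlo chain used in the study: given a fixed delineation budget, protecting the highest-capture-probability cells first maximizes the reliability achieved at no increase in area:

        import numpy as np

        rng = np.random.default_rng(0)
        p_capture = rng.random((50, 50))   # stand-in for Monte Carlo capture probabilities
        budget_cells = 600                 # fixed delineated area, in cells (assumed)

        flat = p_capture.ravel()
        order = np.argsort(flat)[::-1]     # highest-risk cells first
        protected = np.zeros(flat.size, dtype=bool)
        protected[order[:budget_cells]] = True

        reliability = flat[protected].sum() / flat.sum()
        print(f"share of capture risk covered at fixed area: {reliability:.1%}")

    Swapping a delineated low-probability cell for a non-delineated high-probability cell can only raise the covered share, which is the paper's re-allocation argument in miniature.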

  11. A pilot application of risk-informed methods to establish inservice inspection priorities for nuclear components at Surry Unit 1 Nuclear Power Station. Revision 1

    SciTech Connect

    Vo, T.V.; Phan, H.K.; Gore, B.F.; Simonen, F.A.; Doctor, S.R.

    1997-02-01

    As part of the Nondestructive Evaluation Reliability Program sponsored by the US Nuclear Regulatory Commission, the Pacific Northwest National Laboratory has developed risk-informed approaches for inservice inspection plans of nuclear power plants. This method uses probabilistic risk assessment (PRA) results to identify and prioritize the most risk-important components for inspection. The Surry Nuclear Power Station Unit 1 was selected for pilot application of this methodology. This report, which incorporates more recent plant-specific information and improved risk-informed methodology and tools, is Revision 1 of the earlier report (NUREG/CR-6181). The methodology discussed in the original report is no longer current and a preferred methodology is presented in this Revision. This report, NUREG/CR-6181, Rev. 1, therefore supersedes the earlier NUREG/CR-6181 published in August 1994. The specific systems addressed in this report are the auxiliary feedwater, the low-pressure injection, and the reactor coolant systems. The results provide a risk-informed ranking of components within these systems.
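
    The ranking step can be sketched with a miniature PRA model; the minimal cut sets and basic-event probabilities below are invented for illustration, and components are ranked by Fussell-Vesely importance (the fraction of total risk involving each component):

        # Basic-event probabilities (invented) and minimal cut sets of a toy model.
        p = {"AFW_pump": 1e-3, "LPI_valve": 5e-4, "RCS_weld": 1e-5, "DG": 2e-2}
        cut_sets = [("AFW_pump", "DG"), ("LPI_valve",), ("RCS_weld", "DG")]

        def cs_prob(cs):
            out = 1.0
            for event in cs:
                out *= p[event]
            return out

        total = sum(cs_prob(cs) for cs in cut_sets)  # rare-event approximation
        fv = {e: sum(cs_prob(cs) for cs in cut_sets if e in cs) / total for e in p}
        for event, value in sorted(fv.items(), key=lambda kv: -kv[1]):
            print(f"{event}: FV = {value:.4f}")

    Components with the highest importance values would receive the most frequent or most capable inspections, which is the essence of the risk-informed inservice inspection prioritization described above.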

  12. Risk Informed Margins Management as part of Risk Informed Safety Margin Characterization

    SciTech Connect

    Curtis Smith

    2014-06-01

    The ability to better characterize and quantify safety margin is important to improved decision making about Light Water Reactor (LWR) design, operation, and plant life extension. A systematic approach to characterization of safety margins and the subsequent margin management options represents a vital input to the licensee and regulatory analysis and decision making that will be involved. In addition, as research and development in the LWR Sustainability (LWRS) Program and other collaborative efforts yield new data, sensors, and improved scientific understanding of the physical processes that govern the aging and degradation of plant SSCs, needs and opportunities to better optimize plant safety and performance will become known. To support decision making related to economics, reliability, and safety, the Risk Informed Safety Margin Characterization (RISMC) Pathway provides methods and tools that enable mitigation options known as risk informed margins management (RIMM) strategies.

  13. Designing ROW Methods

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.

    1996-01-01

    There are many aspects to consider when designing a Rosenbrock-Wanner-Wolfbrandt (ROW) method for the numerical integration of ordinary differential equations (ODE's) solving initial value problems (IVP's). The process can be simplified by constructing ROW methods around good Runge-Kutta (RK) methods. The formulation of a new, simple, embedded, third-order, ROW method demonstrates this design approach.
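
    For reference, an s-stage ROW method for a stiff IVP y' = f(y) has the standard textbook form below (generic coefficients, not the specific embedded third-order method of the paper):

        (I - h\gamma J)\,k_i = h\,f\!\Big(y_n + \sum_{j=1}^{i-1} \alpha_{ij} k_j\Big)
            + h J \sum_{j=1}^{i-1} \gamma_{ij} k_j,
        \qquad J = \frac{\partial f}{\partial y}(y_n),
        \qquad y_{n+1} = y_n + \sum_{i=1}^{s} b_i k_i

    Each stage costs only one linear solve with the fixed matrix (I - h·γ·J), which is what makes ROW methods cheaper per step than fully implicit Runge-Kutta methods on stiff problems; an embedded lower-order solution sharing the same stages (different weights b̂_i) supplies the local error estimate.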

  14. Progress toward risk informed regulation

    SciTech Connect

    Rogers, K.C.

    1997-01-01

    For the last several years, the NRC, with encouragement from the industry, has been moving in the direction of risk informed regulation. This is consistent with the regulatory principle of efficiency, formally adopted by the Nuclear Regulatory Commission in 1991, which requires that regulatory activities be consistent with the degree of risk reduction they achieve. Probabilistic risk analysis has become the tool of choice for selecting the best of several alternatives. Closely related to risk informed regulation is the development of performance based rules. Such rules focus on the end result to be achieved. They do not specify the process, but instead establish the goals to be reached and how the achievement of those goals is to be judged. The inspection and enforcement activity is based on whether or not the goals have been met. The author goes on to offer comments on the history of the development of this process and its probable development in the future. He also addresses some issues which must be resolved or at least acknowledged. The success of risk informed regulation ultimately depends on having sufficiently reliable data to allow quantification of regulatory alternatives in terms of relative risk. Perhaps the area of human reliability and organizational performance has the greatest potential for improvement in reactor safety. The ability to model human performance is significantly less developed than the ability to model mechanical or electrical systems. The move toward risk informed, performance based regulation provides an unusual, perhaps unique, opportunity to establish a more rational, more effective basis for regulation.

  15. Communicating risk information and warnings

    USGS Publications Warehouse

    Mileti, D. S.

    1990-01-01

    Major advances have occurred over the last 20 years in how to effectively communicate risk information and warnings to the public. These lessons have been hard won. Knowledge has mounted from the findings of social scientific studies of risk communication failures, successes, and those which fell somewhere in between. Moreover, the last two decades have borne witness to the birth, cultivation, and blossoming of information sharing between those physical scientists who discover new information about risk and those communication scientists who trace its diffusion and then measure public reaction.

  16. Air Risk Information Support Center

    SciTech Connect

    Shoaf, C.R.; Guth, D.J.

    1990-12-31

    The Air Risk Information Support Center (Air RISC) was initiated in early 1988 by the US Environmental Protection Agency's (EPA) Office of Health and Environmental Assessment (OHEA) and the Office of Air Quality Planning and Standards (OAQPS) as a technology transfer effort that would focus on providing information to state and local environmental agencies and to EPA Regional Offices in the areas of health, risk, and exposure assessment for toxic air pollutants. Technical information is fostered and disseminated by Air RISC's three primary activities: (1) a "hotline", (2) quick turn-around technical assistance projects, and (3) general technical guidance projects. 1 ref., 2 figs.

  17. Control system design method

    DOEpatents

    Wilson, David G.; Robinett, III, Rush D.

    2012-02-21

    A control system design method and concomitant control system comprising representing a physical apparatus to be controlled as a Hamiltonian system, determining elements of the Hamiltonian system representation which are power generators, power dissipators, and power storage devices, analyzing stability and performance of the Hamiltonian system based on the results of the determining step and determining necessary and sufficient conditions for stability of the Hamiltonian system, creating a stable control system based on the results of the analyzing step, and employing the resulting control system to control the physical apparatus.
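
    In generic energy/Lyapunov terms (our reading of the decomposition, not the patent's claim language), sorting the system into power generators, dissipators, and storage supports a stability test of the following form, with the Hamiltonian H as the stored energy:

        \dot{H}(x) = P_{\text{gen}}(x) - P_{\text{diss}}(x),
        \qquad H(x) > 0 \ \text{for}\ x \neq 0

        \text{stability (Lyapunov):}\quad \dot{H} \le 0
        \iff P_{\text{diss}} \ge P_{\text{gen}} \ \text{along trajectories}

    The controller is then shaped so that, with the control's power contribution included, dissipated power dominates generated power wherever the stored energy would otherwise grow.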

  18. Integrated risk information system (IRIS)

    SciTech Connect

    Tuxen, L.

    1990-12-31

    The Integrated Risk Information System (IRIS) is an electronic information system developed by the US Environmental Protection Agency (EPA) containing information related to health risk assessment. IRIS is the Agency's primary vehicle for communication of chronic health hazard information that represents Agency consensus following comprehensive review by intra-Agency work groups. The original purpose for developing IRIS was to provide guidance to EPA personnel in making risk management decisions. This role has expanded and evolved with wider access and use of the system. IRIS contains chemical-specific information in summary format for approximately 500 chemicals. IRIS is available to the general public on the National Library of Medicine's Toxicology Data Network (TOXNET) and on diskettes through the National Technical Information Service (NTIS).

  19. NASA Risk-Informed Decision Making Handbook

    NASA Technical Reports Server (NTRS)

    Dezfuli, Homayoon; Stamatelatos, Michael; Maggio, Gaspare; Everett, Christopher; Youngblood, Robert; Rutledge, Peter; Benjamin, Allan; Williams, Rodney; Smith, Curtis; Guarro, Sergio

    2010-01-01

    This handbook provides guidance for conducting risk-informed decision making in the context of NASA risk management (RM), with a focus on the types of direction-setting key decisions that are characteristic of the NASA program and project life cycles, and which produce derived requirements in accordance with existing systems engineering practices that flow down through the NASA organizational hierarchy. The guidance in this handbook is not meant to be prescriptive. Instead, it is meant to be general enough, and contain a sufficient diversity of examples, to enable the reader to adapt the methods as needed to the particular decision problems that he or she faces. The handbook highlights major issues to consider when making decisions in the presence of potentially significant uncertainty, so that the user is better able to recognize and avoid pitfalls that might otherwise be experienced.

  20. Aircraft digital control design methods

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Parsons, E.; Tashker, M. G.

    1976-01-01

    Variations in design methods for aircraft digital flight control are evaluated and compared. The methods fall into two categories; those where the design is done in the continuous domain (or s plane) and those where the design is done in the discrete domain (or z plane). Design method fidelity is evaluated by examining closed loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps except the uncompensated s plane design method which was acceptable above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
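
    One standard bridge between the two design camps compared above is to do the design in the s plane and then discretize with the bilinear (Tustin) map s = (2/T)(z-1)/(z+1). A sketch for a first-order lag 1/(s+a), with the pole value and sample periods assumed for illustration:

        import numpy as np
        from scipy.signal import cont2discrete

        a = 2.0                       # rad/s, continuous-time pole (assumed)
        for T in (0.1, 0.05, 0.01):   # sample periods: 10, 20, 100 samples/s
            num_d, den_d, _ = cont2discrete(([1.0], [1.0, a]), T, method="bilinear")
            print(f"T={T}: num={np.round(num_d.ravel(), 4)}, den={np.round(den_d, 4)}")

    As the sample rate drops, the discretized poles drift from the continuous design intent, which is the fidelity degradation the paper measures across methods.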

  21. RISK-INFORMED SAFETY MARGIN CHARACTERIZATION

    SciTech Connect

    Nam Dinh; Ronaldo Szilard

    2009-07-01

    The concept of safety margins has served as a fundamental principle in the design and operation of commercial nuclear power plants (NPPs). With safety margin defined as the minimum distance between a system’s “loading” and its “capacity”, plant design and operation are predicated on ensuring that an adequate safety margin for safety-significant parameters (e.g., fuel cladding temperature, containment pressure) is provided over the spectrum of anticipated plant operating, transient, and accident conditions. To meet the anticipated challenges associated with extending the operational lifetimes of the current fleet of operating NPPs, the United States Department of Energy (USDOE), the Idaho National Laboratory (INL) and the Electric Power Research Institute (EPRI) have developed a collaboration to conduct coordinated research to identify and address the technological challenges and opportunities that would likely affect the safe and economic operation of the existing NPP fleet over the postulated long-term time horizons. In this paper we describe a framework for developing and implementing a Risk-Informed Safety Margin Characterization (RISMC) approach to evaluate and manage changes in plant safety margins over long time horizons.
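
    In standard load-capacity terms (a generic formulation consistent with the definition above; the notation is ours), the margin and its probabilistic characterization are:

        M = C - L,
        \qquad P_f = \Pr(L > C) = \int_0^{\infty} F_C(x)\, f_L(x)\, dx

    where f_L is the probability density of the loading and F_C the cumulative distribution of the capacity; the RISMC approach replaces single conservative values of L and C with distributions propagated from simulation, so the margin itself becomes a quantified, uncertain quantity.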

  22. Risk-Informed Assessment Methodology Development and Application

    SciTech Connect

    Sung Goo Chi; Seok Jeong Park; Chul Jin Choi; Ritterbusch, S.E.; Jacob, M.C.

    2002-07-01

    Westinghouse Electric Company (WEC) has been working with Korea Power Engineering Company (KOPEC) on a US Department of Energy (DOE) sponsored Nuclear Energy Research Initiative (NERI) project through a collaborative agreement established for the domestic NERI program. The project deals with Risk-Informed Assessment (RIA) of regulatory and design requirements of future nuclear power plants. An objective of the RIA project is to develop a risk-informed design process, which focuses on identifying and incorporating advanced features into future nuclear power plants (NPPs) that would meet risk goals in a cost-effective manner. The RIA design methodology is proposed to accomplish this objective. This paper discusses the development of this methodology and demonstrates its application in the design of plant systems for future NPPs. Advanced conceptual plant systems consisting of an advanced Emergency Core Cooling System (ECCS) and Emergency Feedwater System (EFWS) for a NPP were developed and the risk-informed design process was exercised to demonstrate the viability and feasibility of the RIA design methodology. Best estimate Loss-of-Coolant Accident (LOCA) analyses were performed to validate the PSA success criteria for the NPP. The results of the analyses show that the PSA success criteria can be met using the advanced conceptual systems and that the RIA design methodology is a viable and appropriate means of designing key features of risk-significant NPP systems. (authors)

  23. PRISM: a planned risk information seeking model.

    PubMed

    Kahlor, LeeAnn

    2010-06-01

    Recent attention on health-related information seeking has focused primarily on information seeking within specific health and health risk contexts. This study attempts to shift some of that focus to individual-level variables that may impact health risk information seeking across contexts. To locate these variables, the researcher posits an integrated model, the Planned Risk Information Seeking Model (PRISM). The model, which treats risk information seeking as a deliberate (planned) behavior, maps variables found in the Theory of Planned Behavior (TPB; Ajzen, 1991) and the Risk Information Seeking and Processing Model (RISP; Griffin, Dunwoody, & Neuwirth, 1999), and posits linkages among those variables. This effort is further informed by Kahlor's (2007) Augmented RISP, the Theory of Motivated Information Management (Afifi & Weiner, 2004), the Comprehensive Model of Information Seeking (Johnson & Meischke, 1993), the Health Information Acquisition Model (Freimuth, Stein, & Kean, 1989), and the Extended Parallel Processing Model (Witte, 1998). The resulting integrated model accounted for 59% of the variance in health risk information-seeking intent and performed better than the TPB or the RISP alone. PMID:20512716

  24. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

    The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed to reach an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, technology has been successfully transferred from this academic/industrial project; this method is the method of choice for optimization problems at Northrop Grumman.
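
    A bare-bones version of the accept/reject core of SA, run on a deliberately multimodal one-dimensional stand-in objective (the aircraft cases used expensive CFD/MDO evaluations, which is why the evaluation count matters):

        import math, random

        def f(x):                        # rough, multimodal stand-in objective
            return x * x + 3.0 * math.sin(5.0 * x)

        rng = random.Random(1)
        x, fx = 4.0, f(4.0)
        best_x, best_f = x, fx
        T = 2.0
        for _ in range(100):             # order-100 evaluations, as reported above
            cand = x + rng.gauss(0.0, 0.5)
            fc = f(cand)
            if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
                x, fx = cand, fc         # always accept downhill; uphill with prob e^(-dF/T)
            if fx < best_f:
                best_x, best_f = x, fx
            T *= 0.97                    # geometric cooling schedule
        print(f"best design x ~ {best_x:.3f}, objective ~ {best_f:.3f}")

    The occasional uphill acceptance is what lets the search escape local minima that would trap a gradient method.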

  25. Risk-Informed Safety Assurance and Probabilistic Assessment of Mission-Critical Software-Intensive Systems

    NASA Technical Reports Server (NTRS)

    Guarro, Sergio B.

    2010-01-01

    This report validates and documents the detailed features and practical application of the framework for software intensive digital systems risk assessment and risk-informed safety assurance presented in the NASA PRA Procedures Guide for Managers and Practitioners. This framework, called herein the "Context-based Software Risk Model" (CSRM), enables the assessment of the contribution of software and software-intensive digital systems to overall system risk, in a manner which is entirely compatible and integrated with the format of a "standard" Probabilistic Risk Assessment (PRA), as currently documented and applied for NASA missions and applications. The CSRM also provides a risk-informed path and criteria for conducting organized and systematic digital system and software testing so that, within this risk-informed paradigm, the achievement of a quantitatively defined level of safety and mission success assurance may be targeted and demonstrated. The framework is based on the concept of context-dependent software risk scenarios and on the modeling of such scenarios via the use of traditional PRA techniques - i.e., event trees and fault trees - in combination with more advanced modeling devices such as the Dynamic Flowgraph Methodology (DFM) or other dynamic logic-modeling representations. The scenarios can be synthesized and quantified in a conditional logic and probabilistic formulation. The application of the CSRM method documented in this report refers to the MiniAERCam system designed and developed by the NASA Johnson Space Center.

  26. Design method of supercavitating pumps

    NASA Astrophysics Data System (ADS)

    Kulagin, V.; Likhachev, D.; Li, F. C.

    2016-05-01

    The problem of designing an effective supercavitating (SC) pump is solved, and the optimum load distribution along the radius of the blade is found, taking into account clearance, degree of cavitation development, influence of the finite number of blades, and centrifugal forces. Sufficient accuracy can be obtained using the equivalent flat SC-grid for the design of any SC-mechanisms, applying the “grid effect” coefficient and substituting the skewed flow calculated for grids of flat plates with infinite attached cavitation caverns. This article gives the universal design method and provides an example of SC-pump design.

  27. Risk-informed Maintenance for Non-coherent Systems

    NASA Astrophysics Data System (ADS)

    Tao, Ye

    Probabilistic Safety Assessment (PSA) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity. The information provided by PSA has been increasingly implemented for regulatory purposes but rarely used in providing information for operation and maintenance activities. As one of the key parts in PSA, Fault Tree Analysis (FTA) attempts to model and analyze failure processes of engineering and biological systems. The fault trees are composed of logic diagrams that display the state of the system and are constructed using graphical design techniques. Risk Importance Measures (RIMs) are information that can be obtained from both qualitative and quantitative aspects of FTA. Components within a system can be ranked with respect to each specific criterion defined by each RIM. Through a RIM, a ranking of the components or basic events can be obtained and provide valuable information for risk-informed decision making. Various RIMs have been applied in various applications. In order to provide a thorough understanding of RIMs and interpret the results, they are categorized with respect to risk significance (RS) and safety significance (SS) in this thesis. This has also tied them into different maintenance activities. When RIMs are used for maintenance purposes, it is called risk-informed maintenance. On the other hand, the majority of work produced on the FTA method has been concentrated on failure logic diagrams restricted to the direct or implied use of AND and OR operators. Such systems are considered as coherent systems. However, the NOT logic can also contribute to the information produced by PSA. The importance analysis of non-coherent systems is rather limited, even though the field has received more and more attention over the years. The non-coherent systems introduce difficulties in both qualitative and quantitative assessment of the fault tree compared with the coherent systems. In this thesis, a set

  28. Risk-Informed Decisions Optimization in Inspection and Maintenance

    SciTech Connect

    Robertas Alzbutas

    2002-07-01

    The Risk-Informed Approach (RIA) used to support decisions related to inspection and maintenance programs is considered. The use of risk-informed methods can help focus adequate in-service inspections and control on the more important locations of complex dynamic systems. The focus is set on the highest risk, measured as conditional core damage frequency, which is produced by the frequencies of degradation and final failure at different locations combined with the conditional failure consequence probability. The probabilities of different degradation states per year and their consequences are estimated quantitatively. The investigation of the inspection and maintenance process is presented as a combination of deterministic and probabilistic analysis based on a general risk-informed model which includes the inspection and maintenance program features. Such an RIA allows optimization of the inspection program while maintaining probabilistic and fundamental deterministic safety requirements. Failure statistics analysis is used, as well as evaluation of the reliability of inspections. The assumptions regarding the effectiveness of the inspection methods are based on a classification of the accessibility of the welds during inspection and on the different techniques used for inspection. The probability of defect detection is assumed to depend on the parameters through either a logarithmic or a logit transformation. As an example, the modeling of the pipe system inspection process is analyzed. The means to reduce the number of inspection sites and the cumulative radiation exposure to the NPP inspection personnel with a reduction of overall risk are presented, together with the software used and developed. The developed software can perform and administrate all the risk evaluations and ensure the possibility to compare different options and perform sensitivity analysis. The approaches to define an acceptable level of risk are discussed. These approaches with appropriate software in
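
    The detection model mentioned above can be sketched as a logit probability-of-detection (POD) curve in defect size; the coefficients below are assumed for illustration, not taken from the paper:

        import math

        def pod_logit(size_mm, b0=-4.0, b1=0.9):
            """POD(a) = 1 / (1 + exp(-(b0 + b1*a))); b0 and b1 are assumed values."""
            return 1.0 / (1.0 + math.exp(-(b0 + b1 * size_mm)))

        for a in (2, 5, 8, 12):
            print(f"defect {a} mm: POD = {pod_logit(a):.2f}")

    Fitting b0 and b1 to inspection qualification data, per inspection technique and weld accessibility class, is what links the reliability-of-inspection evidence to the risk model.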

  29. Consumer interpretation of ramipril and clopidogrel medication risk information – implications for risk communication strategies

    PubMed Central

    Tong, Vivien; Raynor, David K; Blalock, Susan J; Aslani, Parisa

    2015-01-01

    Purpose: Side effects and side-effect risk information can be provided using written medicine information. However, challenges exist in effectively communicating this information to consumers. This study aimed to explore broad consumer profiles relevant to ramipril and clopidogrel side-effect risk information interpretation. Methods: Three focus groups were conducted (n=18 consumers) exploring consumer perspectives, understanding and treatment decision making in response to ramipril and clopidogrel written medicine information leaflets containing side effects and side-effect risk information. All discussions were audio recorded, transcribed verbatim, and analyzed to explore consumer profiles pertaining to side-effect risk appraisal. Results: Three consumer profiles emerged: glass half-empty, glass half-full, and middle-of-the-road consumers, highlighting the influence of perceived individual susceptibility, interpretation of side-effect risk information, and interindividual differences, on consumers’ understanding of side-effect risk information. All profiles emphasized the importance of gaining an understanding of individual side-effect risk when taking medicines. Conclusion: Written side-effect risk information is not interpreted uniformly by consumers. Consumers formulated their own construct of individual susceptibility to side effects. Health care professionals should consider how consumers interpret side-effect risk information and its impact on medication use. Existing risk communication strategies should be evaluated in light of these profiles to determine their effectiveness in conveying information. PMID:26185427

  30. A historical perspective of risk-informed regulation

    SciTech Connect

    Campbell, P.L.

    1996-12-01

    In Federal studies, the process of using risk information is described as having two general components: (1) risk assessment - the application of credible scientific principles and statistical methods to develop estimates of the likely effects of natural phenomena and human factors, and the characterization of these estimates in a form appropriate for the intended audience (e.g., agency decisionmakers, the public); and (2) risk management - the process of weighing policy alternatives and selecting the most appropriate regulatory action, integrating the results of risk assessment and engineering data with social, economic, and political concerns to reach a decision. This paper discusses largely the second component.

  31. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
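
    As a small concrete instance, the first design in the list, a two-level full factorial in three factors (2^3 = 8 runs, coded -1/+1), can be generated as follows; the factor names are arbitrary examples, and the same run table is the starting point for fractional factorial and Plackett-Burman subsets:

        from itertools import product

        factors = ["temperature", "pH", "agitation"]
        runs = list(product((-1, +1), repeat=len(factors)))

        print(" run  " + "  ".join(f"{f:>11}" for f in factors))
        for i, levels in enumerate(runs, 1):
            print(f"{i:>4}  " + "  ".join(f"{v:>11d}" for v in levels))

    Fitting a linear model to the measured responses over these coded levels gives the main effects and interactions that the screening designs are built to estimate.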

  32. RISK-INFORMED BALANCING OF SAFETY, NONPROLIFERATION, AND ECONOMICS FOR THE SFR

    SciTech Connect

    Apostolakis, George; Driscoll, Michael; Golay, Michael; Kadak, Andrew; Todreas, Neil; Aldmir, Tunc; Denning, Richard; Lineberry, Michael

    2011-10-20

    A substantial barrier to the implementation of Sodium-cooled Fast Reactor (SFR) technology in the short term is the perception that SFRs would not be economically competitive with advanced light water reactors. With increased acceptance of risk-informed regulation, the opportunity exists to reduce the costs of a nuclear power plant at the design stage without applying excessive conservatism that is not needed in treating low-risk events. In the report NUREG-1860, the U.S. Nuclear Regulatory Commission describes developmental activities associated with a risk-informed, scenario-based technology neutral framework (TNF) for regulation. It provides quantitative yardsticks against which the adequacy of safety risks can be judged. We extend these concepts to the treatment of proliferation risks. The objective of our project is to develop a risk-informed design process for minimizing the cost of electricity generation within constraints of adequate safety and proliferation risks. This report describes the design and use of this design optimization process within the context of reducing the capital cost and levelized cost of electricity production for a small (possibly modular) SFR. Our project provides not only an evaluation of the feasibility of a risk-informed design process but also a practical test of the applicability of the TNF to an actual advanced, non-LWR design. The report provides results of five safety-related and one proliferation-related case studies of innovative design alternatives applied to previously proposed SFR nuclear energy system concepts. We find that the TNF provides a feasible initial basis for licensing new reactors. However, it is incomplete. We recommend improvements in terms of requiring acceptance standards for total safety risks, and we propose a framework for regulation of proliferation risks. We also demonstrate methods for evaluation of proliferation risks. We also suggest revisions to scenario-specific safety risk acceptance standards

  33. Computational methods for stealth design

    SciTech Connect

    Cable, V.P.

    1992-08-01

    A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.

  34. Spacesuit Radiation Shield Design Methods

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Anderson, Brooke M.; Cucinotta, Francis A.; Ware, J.; Zeitlin, Cary J.

    2006-01-01

    Meeting radiation protection requirements during EVA is predominantly an operational issue with some potential considerations for temporary shelter. The issue of spacesuit shielding is mainly guided by the potential of accidental exposure when operational and temporary shelter considerations fail to maintain exposures within operational limits. In this case, very high exposure levels are possible which could result in observable health effects and even be life threatening. Under these assumptions, potential spacesuit radiation exposures have been studied using known historical solar particle events to gain insight on the usefulness of modification of spacesuit design in which the control of skin exposure is a critical design issue and reduction of blood forming organ exposure is desirable. Transition to a new spacesuit design including soft upper-torso and reconfigured life support hardware gives an opportunity to optimize the next generation spacesuit for reduced potential health effects during an accidental exposure.

  35. Communicating Cancer Risk Information: The Challenges of Uncertainty.

    ERIC Educational Resources Information Center

    Bottorff, Joan L.; Ratner, Pamela A.; Johnson, Joy L.; Lovato, Chris Y.; Joab, S. Amanda

    1998-01-01

    Accurate and sensitive communication of cancer-risk information is important. Based on a literature review of 75 research reports, expert opinion papers, and clinical protocols, a synthesis of what is known about the communication of cancer-risk information is presented. Relevance of information to those not tested is discussed. (Author/EMK)

  36. Fatalistic responses to different types of genetic risk information: exploring the role of self-malleability.

    PubMed

    Claassen, Liesbeth; Henneman, Lidewij; De Vet, Riekie; Knol, Dirk; Marteau, Theresa; Timmermans, Danielle

    2010-02-01

    Providing people with genetic risk information may induce a sense of fatalism, the belief that little can be done to reduce the risk. We postulated that fatalism is a function of health risk information and individual differences in self-perception. DNA-based risk information was hypothesised to generate more fatalism than risk information based on family history or non-genetic risk information. Moreover, people who view themselves as more rather than less able to change self-attributes were hypothesised to respond least fatalistically. Factor analyses in separate samples were used to construct a five-item 'Malleability of self' measure. Predictive validity of the measure was tested using a within-subjects analogue design. Participants responded to three scenario vignettes in which they were informed of an increased risk of cardiovascular disease (CVD). In Scenario 1, risk was ascertained by DNA testing, family history and cholesterol testing; in Scenario 2, it was ascertained by family history and cholesterol testing; in Scenario 3, risk was ascertained by cholesterol testing alone. Scenario 1 was associated with least perceived control over cholesterol level and CVD risk. People who viewed themselves as more able to change self-attributes experienced more control in all three scenarios.

  37. Design Methods for Clinical Systems

    PubMed Central

    Blum, B.I.

    1986-01-01

    This paper presents a brief introduction to the techniques, methods and tools used to implement clinical systems. It begins with a taxonomy of software systems, describes the classic approach to development, provides some guidelines for the planning and management of software projects, and finishes with a guide to further reading. The conclusions are that there is no single right way to develop software, that most decisions are based upon judgment built from experience, and that there are tools that can automate some of the better understood tasks.

  38. Analyses to support development of risk-informed separation distances for hydrogen codes and standards.

    SciTech Connect

    LaChance, Jeffrey L.; Houf, William G.; Fluer, Inc., Paso Robels, CA; Fluer, Larry; Middleton, Bobby

    2009-03-01

    The development of a set of safety codes and standards for hydrogen facilities is necessary to ensure they are designed and operated safely. To help ensure that a hydrogen facility meets an acceptable level of risk, code and standard development organizations are utilizing risk-informed concepts in developing hydrogen codes and standards.

  39. Should the model for risk-informed regulation be game theory rather than decision theory?

    PubMed

    Bier, Vicki M; Lin, Shi-Woei

    2013-02-01

    Risk analysts frequently view the regulation of risks as being largely a matter of decision theory. According to this view, risk analysis methods provide information on the likelihood and severity of various possible outcomes; this information should then be assessed using a decision-theoretic approach (such as cost/benefit analysis) to determine whether the risks are acceptable, and whether additional regulation is warranted. However, this view ignores the fact that in many industries (particularly industries that are technologically sophisticated and employ specialized risk and safety experts), risk analyses may be done by regulated firms, not by the regulator. Moreover, those firms may have more knowledge about the levels of safety at their own facilities than the regulator does. This creates a situation in which the regulated firm has both the opportunity-and often also the motive-to provide inaccurate (in particular, favorably biased) risk information to the regulator, and hence the regulator has reason to doubt the accuracy of the risk information provided by regulated parties. Researchers have argued that decision theory is capable of dealing with many such strategic interactions as well as game theory can. This is especially true in two-player, two-stage games in which the follower has a unique best strategy in response to the leader's strategy, as appears to be the case in the situation analyzed in this article. However, even in such cases, we agree with Cox that game-theoretic methods and concepts can still be useful. In particular, the tools of mechanism design, and especially the revelation principle, can simplify the analysis of such games because the revelation principle provides rigorous assurance that it is sufficient to analyze only games in which licensees truthfully report their risk levels, making the problem more manageable. Without that, it would generally be necessary to consider much more complicated forms of strategic behavior (including

  40. Mixed Method Designs in Implementation Research

    PubMed Central

    Aarons, Gregory A.; Horwitz, Sarah; Chamberlain, Patricia; Hurlburt, Michael; Landsverk, John

    2010-01-01

    This paper describes the application of mixed method designs in implementation research in 22 mental health services research studies published in peer-reviewed journals over the last 5 years. Our analyses revealed 7 different structural arrangements of qualitative and quantitative methods, 5 different functions of mixed methods, and 3 different ways of linking quantitative and qualitative data together. Complexity of design was associated with number of aims or objectives, study context, and phase of implementation examined. The findings provide suggestions for the use of mixed method designs in implementation research. PMID:20967495

  41. Culture, Interface Design, and Design Methods for Mobile Devices

    NASA Astrophysics Data System (ADS)

    Lee, Kun-Pyo

    Aesthetic differences and similarities among cultures are obviously one of the very important issues in cultural design. However, ever since products became knowledge-supporting tools, the visible elements of products have become more universal, so the invisible parts of products, such as interface and interaction, are becoming more important. Therefore, cultural design should be extended to the invisible elements of culture, like people's conceptual models, beyond material and phenomenal culture. This chapter aims to explain how we address the invisible cultural elements in interface design and design methods by exploring users' cognitive styles and communication patterns in different cultures. Regarding cultural interface design, we examined users' conceptual models while interacting with mobile phone and website interfaces, and observed cultural differences in task performance and viewing patterns, which appeared to agree with the cultural cognitive styles known as Holistic versus Analytic thought. Regarding design methods for culture, we explored how to localize design methods such as the focus group interview and the generative session for specific cultural groups, and the results of comparative experiments revealed cultural differences in participants' behaviors and performance in each design method, leading us to suggest how to conduct them in East Asian cultures. Mobile Observation Analyzer and Wi-Pro, user research tools we invented to capture user behaviors and needs especially in their mobile context, are also introduced.

  42. Climate Risk Informed Decision Analysis (CRIDA): A novel practical guidance for Climate Resilient Investments and Planning

    NASA Astrophysics Data System (ADS)

    Jeuken, Ad; Mendoza, Guillermo; Matthews, John; Ray, Patrick; Haasnoot, Marjolijn; Gilroy, Kristin; Olsen, Rolf; Kucharski, John; Stakhiv, Gene; Cushing, Janet; Brown, Casey

    2016-04-01

    over time. They are part of the Dutch adaptive planning approach Adaptive Delta Management, developed and executed by the Dutch Delta Programme. Both decision scaling and adaptation pathways have been piloted in studies worldwide. The objective of CRIDA is to mainstream effective climate adaptation for professional water managers. The CRIDA publication, due in April 2016, follows the generic water planning and design cycle. At each step, CRIDA provides stepwise guidance for incorporating climate robustness: problem definition, stress test, alternatives formulation and recommendation, evaluation and selection. In the presentation the origin, goal, steps and practical tools available at each step of CRIDA will be explained. In two other abstracts ("Climate Risk Informed Decision Analysis: A Hypothetical Application to the Waas Region" by Gilroy et al., and "The Application of Climate Risk Informed Decision Analysis to the Ioland Water Treatment Plant in Lusaka, Zambia" by Kucharski et al.), the application of CRIDA to case studies is explained.

  43. Model reduction methods for control design

    NASA Technical Reports Server (NTRS)

    Dunipace, K. R.

    1988-01-01

    Several different model reduction methods are developed and detailed implementation information is provided for those methods. Command files to implement the model reduction methods in a proprietary control law analysis and design package are presented. A comparison and discussion of the various reduction techniques is included.

  44. Mixed Methods Research Designs in Counseling Psychology

    ERIC Educational Resources Information Center

    Hanson, William E.; Creswell, John W.; Clark, Vicki L. Plano; Petska, Kelly S.; Creswell, David J.

    2005-01-01

    With the increased popularity of qualitative research, researchers in counseling psychology are expanding their methodologies to include mixed methods designs. These designs involve the collection, analysis, and integration of quantitative and qualitative data in a single or multiphase study. This article presents an overview of mixed methods…

  45. Airbreathing hypersonic vehicle design and analysis methods

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.

    1996-01-01

    The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.

  46. Micarta propellers IV : technical methods of design

    NASA Technical Reports Server (NTRS)

    Caldwell, F W; Clay, N S

    1924-01-01

    A description is given of the methods used in the design of Micarta propellers. The most direct method for working out the design of a Micarta propeller is to start with the diameter and blade angles of a wooden propeller suited to a particular installation and then to apply one of the plan forms suitable for Micarta propellers. This allows one to obtain the corresponding blade widths and then to use these angles and blade widths for an aerodynamic analysis.

  7. Risk-informed inservice test activities at the NRC

    SciTech Connect

    Fischer, D.; Cheok, M.; Hsia, A.

    1996-12-01

    The operational readiness of certain safety-related components is vital to the safe operation of nuclear power plants. Inservice testing (IST) is one of the mechanisms used by licensees to ensure this readiness. In the past, the type and frequency of IST have been based on the collective best judgment of the NRC and industry in an ASME Code consensus process and NRC rulemaking process. Furthermore, IST requirements have not explicitly considered unique component and system designs or their contribution to overall plant risk. Because of the general nature of ASME Code test requirements and their non-reliance on risk estimates, current IST requirements may not adequately emphasize testing those components that are most important to safety and may overly emphasize testing of less safety-significant components. Nuclear power plant licensees are currently interested in optimizing testing by applying resources in more safety-significant areas and, where appropriate, reducing measures in less safety-significant areas. They are interested in maintaining system availability and reducing overall maintenance costs in ways that do not adversely affect safety. The NRC has been interested in using probabilistic techniques, as an adjunct to deterministic ones, to help define the scope, type, and frequency of IST. The development of risk-informed IST programs has the potential to optimize the use of NRC and industry resources without adverse effect on safety.
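
    As a rough, hypothetical illustration of the kind of risk ranking that can inform IST scope and frequency, the sketch below computes Fussell-Vesely importance measures for a toy fault-tree model; the component names, failure probabilities, and cut sets are invented, not drawn from the abstract.

      # A minimal sketch (hypothetical data): rank components by Fussell-Vesely
      # importance in a small fault-tree model under the rare-event approximation.

      p = {"pump_A": 1e-3, "pump_B": 1e-3, "valve_C": 5e-4, "valve_D": 2e-2}

      # Hypothetical minimal cut sets for the top event "loss of injection".
      cut_sets = [{"pump_A", "pump_B"}, {"valve_C"}, {"pump_A", "valve_D"}]

      def cut_set_prob(cs):
          prob = 1.0
          for comp in cs:
              prob *= p[comp]
          return prob

      top = sum(cut_set_prob(cs) for cs in cut_sets)  # rare-event approximation

      # Fussell-Vesely: fraction of top-event risk involving each component.
      for comp in sorted(p):
          contrib = sum(cut_set_prob(cs) for cs in cut_sets if comp in cs)
          print(f"{comp}: FV = {contrib / top:.3f}")

    In such a ranking, high-importance components would be candidates for more frequent testing, while low-importance ones could justify relaxed intervals.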

  8. Development of a hydraulic turbine design method

    NASA Astrophysics Data System (ADS)

    Kassanos, Ioannis; Anagnostopoulos, John; Papantonis, Dimitris

    2013-10-01

    In this paper a hydraulic turbine parametric design method is presented, based on the combination of traditional methods and parametric surface modeling techniques. The blade of the turbine runner is described using Bezier surfaces for the definition of the meridional plane as well as the blade angle distribution, with a thickness distribution applied normal to the mean blade surface. In this way, it is possible to define the whole runner parametrically using a relatively small number of design parameters, compared to conventional methods. The above definition is then combined with commercial CFD software and a stochastic optimization algorithm to develop an automated design optimization procedure. The process is demonstrated with the design of a Francis turbine runner.
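
    The abstract includes no code; as a sketch of the parametric idea it describes, the snippet below evaluates a blade-angle distribution defined by a Bezier curve whose control points would serve as the design parameters handed to an optimizer. The control-point values are hypothetical.

      # A minimal sketch, not the authors' code: a blade angle distribution
      # parametrized as a Bezier curve over normalized meridional length.

      import numpy as np

      def bezier(control_points, t):
          """Evaluate a Bezier curve at parameter t by De Casteljau's algorithm."""
          pts = np.asarray(control_points, dtype=float)
          while len(pts) > 1:
              pts = (1.0 - t) * pts[:-1] + t * pts[1:]
          return pts[0]

      # Hypothetical design parameters: blade angle (deg) at four control
      # points from leading edge (t = 0) to trailing edge (t = 1).
      control_angles = [65.0, 55.0, 30.0, 17.0]

      for t in np.linspace(0.0, 1.0, 5):
          print(f"s = {t:.2f}  beta = {bezier(control_angles, t):.1f} deg")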

  9. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture that involves a tight coupling between optimization and analysis is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  10. ESD protection device design using statistical methods

    NASA Astrophysics Data System (ADS)

    Shigyo, N.; Kawashima, H.; Yasuda, S.

    2002-12-01

    This paper describes a design of an electrostatic discharge (ESD) protection device that minimizes its area A_p while maintaining the breakdown voltage V_ESD. Hypothesis tests using measured data were performed to find the most severe applied surge condition and to select control factors for the design-of-experiments (DOE). Also, technology CAD (TCAD) was used to estimate V_ESD. An optimum device structure, in which a salicide block was employed, was found using statistical methods and TCAD.

  11. Analysis Method for Quantifying Vehicle Design Goals

    NASA Technical Reports Server (NTRS)

    Fimognari, Peter; Eskridge, Richard; Martin, Adam; Lee, Michael

    2007-01-01

    A document discusses a method for using Design Structure Matrices (DSM), coupled with high-level tools representing important life-cycle parameters, to comprehensively conceptualize a flight/ground space transportation system design by dealing with such variables as performance, up-front costs, downstream operations costs, and reliability. The approach also weighs operational approaches based on their effect on upstream design variables, so that linkages between operations and these upstream variables can be established readily, yet defensibly. To avoid the range of problems that have defeated previous methods of dealing with the complexity of transportation design, and to cut down on the inefficient use of resources, the method identifies those areas that are of sufficient promise, provides a higher grade of analysis for those issues, and traces the linkages between operations and other factors. Ultimately, the system is designed to save resources and time, and allows for the evolution of operable space transportation system technology, design, and conceptual system approach targets.
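
    A minimal sketch of the DSM idea referenced above, with invented task names and dependencies: sequence design and operations tasks from their dependency structure and flag the coupled block that forces iteration.

      # A minimal sketch (hypothetical tasks): order tasks so each runs after
      # its inputs are available, and report any coupled block (a dependency
      # cycle), which in DSM terms marks where design iteration is required.

      # deps[task] lists the tasks whose outputs that task consumes.
      deps = {
          "requirements": [],
          "geometry":     ["requirements"],
          "aero":         ["geometry", "structures"],  # aero <-> structures couple
          "structures":   ["aero"],
          "ops_cost":     ["geometry"],
      }

      def order_tasks(deps):
          ordered, resolved = [], set()
          pending = dict(deps)
          while pending:
              ready = [t for t, d in pending.items() if all(x in resolved for x in d)]
              if not ready:               # remaining tasks form a coupled block
                  return ordered, set(pending)
              for t in sorted(ready):
                  ordered.append(t)
                  resolved.add(t)
                  del pending[t]
          return ordered, set()

      sequence, coupled = order_tasks(deps)
      print("sequential:", sequence)
      print("coupled block (needs iteration):", coupled)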

  12. Axisymmetric inlet minimum weight design method

    NASA Technical Reports Server (NTRS)

    Nadell, Shari-Beth

    1995-01-01

    An analytical method for determining the minimum weight design of an axisymmetric supersonic inlet has been developed. The goal of this method development project was to improve the ability to predict the weight of high-speed inlets in conceptual and preliminary design. The initial model was developed using information that was available from inlet conceptual design tools (e.g., the inlet internal and external geometries and pressure distributions). Stiffened shell construction was assumed. Mass properties were computed by analyzing a parametric cubic curve representation of the inlet geometry. Design loads and stresses were developed at analysis stations along the length of the inlet. The equivalent minimum structural thicknesses for both shell and frame structures required to support the maximum loads produced by various load conditions were then determined. Preliminary results indicated that inlet hammershock pressures produced the critical design load condition for a significant portion of the inlet. By improving the accuracy of inlet weight predictions, the method will improve the fidelity of propulsion and vehicle design studies and increase the accuracy of weight versus cost studies.

  13. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating, and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost, and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
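
    As an illustration of the study's setup, not its actual models, the sketch below minimizes a made-up life-cycle-cost surrogate over the three design parameters named in the abstract, with a smooth power-demand constraint of the kind the fourth and fifth conclusions recommend.

      # A minimal sketch with invented cost and performance models: optimize
      # battery weight, heat engine rating, and power split under a power
      # demand constraint. All coefficients are hypothetical placeholders.

      import numpy as np
      from scipy.optimize import minimize

      P_DEMAND = 60.0  # kW, hypothetical peak power requirement

      def life_cycle_cost(x):
          battery_kg, engine_kw, split = x
          # Smooth, hypothetical cost terms: battery, engine, usage penalty.
          return 12.0 * battery_kg + 300.0 * engine_kw + 8000.0 * split**2

      def power_margin(x):  # must be >= 0 for a feasible design
          battery_kg, engine_kw, split = x
          battery_kw = 0.3 * battery_kg  # crude kW-per-kg assumption
          return split * battery_kw + (1 - split) * engine_kw - P_DEMAND

      res = minimize(
          life_cycle_cost,
          x0=np.array([200.0, 50.0, 0.5]),
          bounds=[(50, 500), (10, 100), (0.0, 1.0)],
          constraints=[{"type": "ineq", "fun": power_margin}],
          method="SLSQP",
      )
      print("battery kg, engine kW, split:", np.round(res.x, 2), "cost:", round(res.fun, 1))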

  14. Colorectal cancer risk information presented by a nonphysician assistant does not increase screening rates

    PubMed Central

    Wilkins, Thad; Gillies, Ralph A.; Panchal, Pina; Patel, Mittal; Warren, Peter; Schade, Robert R.

    2014-01-01

    Objective To determine the effectiveness of presenting individualized colorectal cancer (CRC) risk information for increasing CRC screening rates in primary care patients at above-average risk of CRC. Design Randomized controlled trial. Setting Georgia Regents University in Augusta—an academic family medicine clinic in the southeastern United States. Participants Outpatients (50 to 70 years of age) scheduled for routine visits in the family medicine clinic who were determined to be at above-average risk of CRC. Interventions Individualized CRC risk information calculated from the Your Disease Risk tool compared with a standard CRC screening handout. Main outcome measures Intention to complete CRC screening. Secondary measures included the proportions of subjects completing fecal occult blood tests, flexible sigmoidoscopy, and colonoscopy. Results A total of 1147 consecutive records were reviewed to determine eligibility. Overall, 210 (37.7%) of 557 eligible participants were randomized to receive either individualized CRC risk information (prepared by a research assistant) or a standard CRC screening handout. The intervention group had a mean (SD) age of 55.7 (4.8) years and the control group had a mean (SD) age of 55.6 (4.6) years. Two-thirds of the participants in each group were female. The intervention group and the control group were matched by race (P = .40). There was no significant difference between groups for intention to complete CRC screening (P = .58). Overall, 26.7% of the intervention participants and 27.7% of the control participants completed 1 or more CRC screening tests (P = .66). Conclusion Presentation of individualized CRC risk information by a nonphysician assistant as a decision aid did not result in higher CRC screening rates in primary care patients compared with presentation of general CRC screening information. Future research is needed to determine if physician presentation of CRC risk information would result in increased…

  15. Effects of baseline risk information on social and individual choices.

    PubMed

    Gyrd-Hansen, Dorte; Kristiansen, Ivar Sønbø; Nexøe, Jørgen; Nielsen, Jesper Bo

    2002-01-01

    This article analyzes preferences for risk reductions in the context of individual and societal decision making. The effect of information on baseline risk is analyzed in both contexts. The results indicate that if individuals are to imagine that they suffer from 1 low-risk and 1 high-risk ailment, and are offered a specified identical absolute risk reduction, a majority will ceteris paribus opt for treatment of the low-risk ailment. A different preference structure is elicited when priority questions are framed as social choices. Here, a majority will prefer to treat the high-risk group of patients. The preference reversal demonstrates the extent to which baseline risk information can influence preferences in different choice settings. It is argued that presentation of baseline risk information may induce framing effects that lead to nonoptimal resource allocations. A solution to this problem may be to not present group-specific baseline risk information when eliciting preferences. PMID:11833667
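
    A small worked example (numbers invented) makes the framing effect concrete: an identical absolute risk reduction is a much larger relative reduction against a low baseline than against a high one, which is one way the elicited preferences can diverge.

      # Invented numbers only: the same 1-point absolute risk reduction is a
      # 50% relative reduction for a low-risk ailment but 5% for a high-risk one.

      for name, baseline in [("low-risk ailment", 0.02), ("high-risk ailment", 0.20)]:
          arr = 0.01            # identical absolute risk reduction
          rrr = arr / baseline  # relative risk reduction
          print(f"{name}: baseline {baseline:.0%}, ARR {arr:.0%}, RRR {rrr:.0%}")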

  16. Standardized Radiation Shield Design Methods: 2005 HZETRN

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.

    2006-01-01

    Research completed by the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, and new methods of verification and validation are being implemented to capture a well-defined algorithm for engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code available for these early design processes. In this paper, we review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.

  17. MAST Propellant and Delivery System Design Methods

    NASA Technical Reports Server (NTRS)

    Nadeem, Uzair; Mc Cleskey, Carey M.

    2015-01-01

    A Mars Aerospace Taxi (MAST) concept and propellant storage and delivery case study is undergoing investigation by NASA's Element Design and Architectural Impact (EDAI) design and analysis forum. The MAST lander concept envisions landing with its ascent propellant storage tanks empty and supplying these reusable Mars landers with propellant that is generated and transferred while on the Mars surface. The report provides an overview of the data derived from modeling different methods of propellant line routing (or "lining") and differentiates the resulting design and operations complexity of fluid and gaseous paths for a given set of fluid sources and destinations. The EDAI team desires a rough-order-of-magnitude algorithm for estimating the lining characteristics (i.e., the plumbing mass and complexity) associated with different numbers of vehicle propellant sources and destinations. This paper explores the feasibility of preparing a mathematically sound algorithm for this purpose, and offers a method for the EDAI team to implement.

  18. New method of designing CCD driver

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Yu, Daoyin; Zhang, Yimo

    1993-04-01

    A new method of designing CCD driver circuits is introduced in this paper. Some kinds of programmable logic device (PLD) chips including generic array logic (GAL) and EPROM are used to drive a CCD sensor. The driver runs stably and reliably. It is widely applied in many fields with its good interchangeability, small size, and low cost.

  19. Approaches to cancer assessment in EPA's Integrated Risk Information System

    SciTech Connect

    Gehlhaus, Martin W.; Gift, Jeffrey S.; Hogan, Karen A.; Kopylev, Leonid; Schlosser, Paul M.; Kadry, Abdel-Razak

    2011-07-15

    The U.S. Environmental Protection Agency's (EPA) Integrated Risk Information System (IRIS) Program develops assessments of health effects that may result from chronic exposure to chemicals in the environment. The IRIS database contains more than 540 assessments. When supported by available data, IRIS assessments provide quantitative analyses of carcinogenic effects. Since publication of EPA's 2005 Guidelines for Carcinogen Risk Assessment, IRIS cancer assessments have implemented new approaches recommended in these guidelines and expanded the use of complex scientific methods to perform quantitative dose-response assessments. Two case studies of the application of the mode of action framework from the 2005 Cancer Guidelines are presented in this paper. The first is a case study of 1,2,3-trichloropropane, as an example of a chemical with a mutagenic mode of carcinogenic action thus warranting the application of age-dependent adjustment factors for early-life exposure; the second is a case study of ethylene glycol monobutyl ether, as an example of a chemical with a carcinogenic action consistent with a nonlinear extrapolation approach. The use of physiologically based pharmacokinetic (PBPK) modeling to quantify interindividual variability and account for human parameter uncertainty as part of a quantitative cancer assessment is illustrated using a case study involving probabilistic PBPK modeling for dichloromethane. We also discuss statistical issues in assessing trends and model fit for tumor dose-response data, analysis of the combined risk from multiple types of tumors, and application of life-table methods for using human data to derive cancer risk estimates. These issues reflect the complexity and challenges faced in assessing the carcinogenic risks from exposure to environmental chemicals, and provide a view of the current trends in IRIS carcinogenicity risk assessment.

  1. Acoustic Treatment Design Scaling Methods. Phase 2

    NASA Technical Reports Server (NTRS)

    Clark, L. (Technical Monitor); Parrott, T. (Technical Monitor); Jones, M. (Technical Monitor); Kraft, R. E.; Yu, J.; Kwan, H. W.; Beer, B.; Seybert, A. F.; Tathavadekar, P.

    2003-01-01

    The ability to design, build and test miniaturized acoustic treatment panels on scale model fan rigs representative of full scale engines provides not only cost savings, but also an opportunity to optimize the treatment by allowing multiple tests. To use scale model treatment as a design tool, the impedance of the sub-scale liner must be known with confidence. This study was aimed at developing impedance measurement methods for high frequencies. A normal incidence impedance tube method that extends the upper frequency range to 25,000 Hz without grazing flow effects was evaluated. The free field method was investigated as a potential high frequency technique. The potential of the two-microphone in-situ impedance measurement method was evaluated in the presence of grazing flow. Difficulties in achieving the high frequency goals were encountered in all methods. Results of developing a time-domain finite difference resonator impedance model indicated that a re-interpretation of the empirical fluid mechanical models used in the frequency domain model for nonlinear resistance and mass reactance may be required. A scale model treatment design that could be tested on the Universal Propulsion Simulator vehicle was proposed.

  2. 3.6 Simplified methods for design

    SciTech Connect

    Nickell, R.E.; Yahr, G.T.

    1981-01-01

    Simplified design analysis methods for elevated temperature construction are classified and reviewed. Because the major impetus for developing elevated temperature design methodology during the past ten years has been the LMFBR program, considerable emphasis is placed upon results from this source. The operating characteristics of the LMFBR are such that cycles of severe transient thermal stresses can be interspersed with normal elevated temperature operational periods of significant duration, leading to a combination of plastic and creep deformation. The various simplified methods are organized into two general categories, depending upon whether it is the material, or constitutive, model that is reduced, or the geometric modeling that is simplified. Because the elastic representation of material behavior is so prevalent, an entire section is devoted to elastic analysis methods. Finally, the validation of the simplified procedures is discussed.

  3. Reliability Methods for Shield Design Process

    NASA Technical Reports Server (NTRS)

    Tripathi, R. K.; Wilson, J. W.

    2002-01-01

    Providing protection against the hazards of space radiation is a major challenge to the exploration and development of space. The great cost of added radiation shielding is a potential limiting factor in deep space operations. In this enabling technology, we have developed methods for optimized shield design over multi-segmented missions involving multiple work and living areas in the transport and duty phases of space missions. The total shield mass over all pieces of equipment and habitats is optimized subject to career dose and dose rate constraints. An important component of this technology is the estimation of the two most commonly identified uncertainties in radiation shield design: the shielding properties of the materials used, and the understanding of the biological response of the astronaut to the radiation leaking through the materials into the living space. The largest uncertainty, of course, is in the biological response, especially to the high charge and energy (HZE) ions of the galactic cosmic rays. These uncertainties are blended with the optimization design procedure to formulate reliability-based methods for shield design processes. The details of the methods will be discussed.
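
    As a hypothetical sketch of the optimization described, the snippet below chooses shield areal densities for three habitat segments to minimize total mass subject to a dose constraint; the attenuation constants, unshielded doses, areas, and limit are invented placeholders, not HZETRN results.

      # A minimal sketch (invented data): minimize total shield mass across
      # segments subject to a total dose constraint, with dose attenuating
      # exponentially in areal density.

      import numpy as np
      from scipy.optimize import minimize

      areas = np.array([20.0, 35.0, 15.0])    # m^2 per segment (hypothetical)
      dose0 = np.array([120.0, 80.0, 150.0])  # unshielded dose per segment
      mu = np.array([0.08, 0.08, 0.08])       # attenuation per kg/m^2 (hypothetical)
      DOSE_LIMIT = 50.0

      def total_mass(t):                      # t = areal density, kg/m^2
          return float(np.dot(areas, t))

      def dose_margin(t):                     # must be >= 0 to meet the limit
          return DOSE_LIMIT - float(np.sum(dose0 * np.exp(-mu * t)))

      res = minimize(total_mass, x0=np.full(3, 10.0),
                     bounds=[(0, 200)] * 3,
                     constraints=[{"type": "ineq", "fun": dose_margin}],
                     method="SLSQP")
      print("areal densities:", np.round(res.x, 1), "total mass:", round(res.fun, 1))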

  4. Optimization methods for alternative energy system design

    NASA Astrophysics Data System (ADS)

    Reinhardt, Michael Henry

    An electric vehicle heating system and a solar thermal coffee dryer are presented as case studies in alternative energy system design optimization. Design optimization tools are compared using these case studies, including linear programming, integer programming, and fuzzy integer programming. Although most decision variables in the designs of alternative energy systems are generally discrete (e.g., numbers of photovoltaic modules, thermal panels, layers of glazing in windows), the literature shows that the optimization methods used historically for design utilize continuous decision variables. Integer programming, used to find the optimal investment in conservation measures as a function of the life cycle cost of an electric vehicle heating system, is compared to linear programming, demonstrating the importance of accounting for the discrete nature of design variables. The electric vehicle study shows that conservation methods similar to those used in building design, which reduce the overall UA of a 22 ft. electric shuttle bus from 488 to 202 (Btu/hr-F), can eliminate the need for fossil fuel heating systems when operating in the northeast United States. Fuzzy integer programming is presented as a means of accounting for imprecise design constraints, such as being environmentally friendly, in the optimization process. The solar thermal coffee dryer study focuses on a deep-bed design using unglazed thermal collectors (UTC). Experimental data from parchment coffee drying are gathered, including drying constants and equilibrium moisture. In this case, fuzzy linear programming is presented as a means of optimizing experimental procedures to produce the most information under imprecise constraints. Graphical optimization is used to show that for every 1 m² of deep-bed dryer area, at 0.4 m depth, a UTC array consisting of five 1.1 m² panels and a photovoltaic array consisting of one 0.25 m² panel produce the most dry coffee per dollar invested in the system. In general this study…
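
    A minimal sketch of the integer-programming flavour of the dryer sizing problem, using brute-force enumeration in place of a solver; the panel prices, yields, and drying model are invented placeholders, not the study's data.

      # Invented numbers only: enumerate integer panel counts and pick the
      # combination that maximizes dry coffee per dollar invested.

      from itertools import product

      best = None
      for n_utc, n_pv in product(range(0, 11), range(0, 5)):
          cost = 180.0 * n_utc + 250.0 * n_pv + 400.0          # panels + fixed dryer
          dry_kg_per_day = min(2.0 * n_utc, 6.0 + 1.5 * n_pv)  # heat- or fan-limited
          if dry_kg_per_day == 0:
              continue
          value = dry_kg_per_day / cost                         # kg dried per dollar
          if best is None or value > best[0]:
              best = (value, n_utc, n_pv)

      print("kg/day per dollar: %.4f with %d UTC and %d PV panels" % best)

    The discrete search space here is tiny; the point is that rounding a continuous optimum would not necessarily land on the best integer combination, which is the study's argument for integer programming.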

  5. Risk-informed radioactive waste classification and reclassification.

    PubMed

    Croff, Allen G

    2006-11-01

    Radioactive waste classification systems have been developed to allow wastes having similar hazards to be grouped for purposes of storage, treatment, packaging, transportation, and/or disposal. As recommended in the National Council on Radiation Protection and Measurements' Report No. 139, Risk-Based Classification of Radioactive and Hazardous Chemical Wastes, a preferred classification system would be based primarily on the health risks to the public that arise from waste disposal and secondarily on other attributes such as the near-term practicalities of managing a waste, i.e., the waste classification system would be risk informed. The current U.S. radioactive waste classification system is not risk informed because key definitions--especially that of high-level waste--are based on the source of the waste instead of its inherent characteristics related to risk. A second important reason for concluding the existing U.S. radioactive waste classification system is not risk informed is there are no general principles or provisions for exempting materials from being classified as radioactive waste which would then allow management without regard to its radioactivity. This paper elaborates the current system for classifying and reclassifying radioactive wastes in the United States, analyzes the extent to which the system is risk informed and the ramifications of its not being so, and provides observations on potential future direction of efforts to address shortcomings in the U.S. radioactive waste classification system as of 2004.

  6. A risk-informed approach to safety margins analysis

    SciTech Connect

    Curtis Smith; Diego Mandelli

    2013-07-01

    The Risk Informed Safety Margins Characterization (RISMC) Pathway is a systematic approach developed to characterize and quantify safety margins of nuclear power plant structures, systems and components. The model has been tested on the Advanced Test Reactor (ATR) at Idaho National Lab.
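
    A minimal sketch of the load/capacity view of safety margin that RISMC quantifies: sample an uncertain load (e.g., a peak temperature from simulation) and an uncertain capacity (a failure limit), then characterize the margin distribution. The distributions and parameters below are illustrative, not plant or ATR data.

      # Invented distributions only: Monte Carlo characterization of margin.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000
      load = rng.normal(loc=1900.0, scale=120.0, size=n)      # deg F, hypothetical
      capacity = rng.normal(loc=2200.0, scale=80.0, size=n)   # deg F, hypothetical

      margin = capacity - load
      print("mean margin: %.1f F" % margin.mean())
      print("P(load exceeds capacity): %.2e" % (margin <= 0).mean())
      print("5th percentile margin: %.1f F" % np.percentile(margin, 5))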

  8. Waterflooding injectate design systems and methods

    SciTech Connect

    Brady, Patrick V.; Krumhansl, James L.

    2014-08-19

    A method of designing an injectate to be used in a waterflooding operation is disclosed. One aspect includes specifying data representative of chemical characteristics of a liquid hydrocarbon, a connate, and a reservoir rock, of a subterranean reservoir. Charged species at an interface of the liquid hydrocarbon are determined based on the specified data by evaluating at least one chemical reaction. Charged species at an interface of the reservoir rock are determined based on the specified data by evaluating at least one chemical reaction. An extent of surface complexation between the charged species at the interfaces of the liquid hydrocarbon and the reservoir rock is determined by evaluating at least one surface complexation reaction. The injectate is designed and is operable to decrease the extent of surface complexation between the charged species at interfaces of the liquid hydrocarbon and the reservoir rock. Other methods, apparatus, and systems are disclosed.

  9. An improved design method for EPC middleware

    NASA Astrophysics Data System (ADS)

    Lou, Guohuan; Xu, Ran; Yang, Chunming

    2014-04-01

    To address the problems and difficulties that small and medium enterprises currently encounter when using the EPC (Electronic Product Code) ALE (Application Level Events) specification to implement middleware, an improved design method for EPC middleware is presented, based on an analysis of the principles of EPC middleware. This method exploits the powerful functionality of the MySQL database, using the database to connect reader-writers with the upper application system, instead of developing the ALE application program interface, to achieve a middleware with general functions. This structure is simple and easy to implement and maintain. Under this structure, different types of reader-writers can be added and configured conveniently, and the expandability of the system is improved.

  10. Design methods of rhombic tensegrity structures

    NASA Astrophysics Data System (ADS)

    Feng, Xi-Qiao; Li, Yue; Cao, Yan-Ping; Yu, Shou-Wen; Gu, Yuan-Tong

    2010-08-01

    As a special type of novel flexible structures, tensegrity holds promise for many potential applications in such fields as materials science, biomechanics, civil and aerospace engineering. Rhombic systems are an important class of tensegrity structures, in which each bar constitutes the longest diagonal of a rhombus of four strings. In this paper, we address the design methods of rhombic structures based on the idea that many tensegrity structures can be constructed by assembling one-bar elementary cells. By analyzing the properties of rhombic cells, we first develop two novel schemes, namely, direct enumeration scheme and cell-substitution scheme. In addition, a facile and efficient method is presented to integrate several rhombic systems into a larger tensegrity structure. To illustrate the applications of these methods, some novel rhombic tensegrity structures are constructed.

  11. Method of designing layered sound absorbing materials

    NASA Astrophysics Data System (ADS)

    Atalla, Youssef; Panneton, Raymond

    2002-11-01

    A widely used model for describing sound propagation in porous materials is the Johnson-Champoux-Allard model. This rigid-frame model is based on five geometrical properties of the porous medium: resistivity, porosity, tortuosity, and the viscous and thermal characteristic lengths. Using this model, and with knowledge of these properties for different absorbing materials, the design of a multiple-layered system can be optimized efficiently and rapidly. The overall impedance of the layered system can be calculated by repeated application of the single-layer impedance equation. Knowledge of the properties of the materials involved in the layered system, and of their physical meaning, makes it possible to perform a systematic computer evaluation of potential layer combinations rather than doing so experimentally, which is time consuming and not always efficient. The final design of layered materials can then be confirmed by suitable measurements. A method of designing the overall acoustic absorption of multiple layered porous materials is presented. Some aspects based on the material properties, for designing a flat layered absorbing system, are considered. Good agreement between measured and computed sound absorption coefficients has been obtained for the studied configurations. [Work supported by N.S.E.R.C. Canada, F.C.A.R. Quebec, and Bombardier Aerospace.]
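
    A sketch of the layered calculation described above. For brevity it substitutes the empirical Delany-Bazley correlation for the five-parameter Johnson-Champoux-Allard model when computing each layer's characteristic impedance and wavenumber; the flow resistivities and thicknesses are invented.

      # A minimal sketch: normal-incidence absorption of a two-layer porous
      # absorber on a rigid wall, via repeated impedance translation.

      import numpy as np

      RHO0, C0 = 1.213, 342.0  # air density (kg/m^3) and sound speed (m/s)

      def delany_bazley(f, sigma):
          """Empirical characteristic impedance and wavenumber of a porous layer."""
          X = RHO0 * f / sigma
          Zc = RHO0 * C0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
          k = 2 * np.pi * f / C0 * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
          return Zc, k

      def surface_impedance(f, layers):
          """Layers ordered from the rigid wall outward: list of (sigma, thickness)."""
          Z = None
          for sigma, d in layers:
              Zc, k = delany_bazley(f, sigma)
              if Z is None:  # first layer sits directly on the rigid wall
                  Z = -1j * Zc / np.tan(k * d)
              else:          # impedance translation through the next layer
                  t = np.tan(k * d)
                  Z = Zc * (Z + 1j * Zc * t) / (Zc + 1j * Z * t)
          return Z

      layers = [(30000.0, 0.02), (10000.0, 0.03)]  # dense at wall, light on top
      for f in (250.0, 500.0, 1000.0, 2000.0):
          Z = surface_impedance(f, layers)
          R = (Z - RHO0 * C0) / (Z + RHO0 * C0)
          print(f"{f:6.0f} Hz  alpha = {1 - abs(R)**2:.2f}")

    Sweeping layer thicknesses or resistivities in a loop around surface_impedance is exactly the kind of systematic computer evaluation the abstract advocates.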

  12. Methods for structural design at elevated temperatures

    NASA Technical Reports Server (NTRS)

    Ellison, A. M.; Jones, W. E., Jr.; Leimbach, K. R.

    1973-01-01

    A procedure which can be used to design elevated temperature structures is discussed. The desired goal is to have the same confidence in the structural integrity at elevated temperature as the factor of safety gives on mechanical loads at room temperature. Methods of design and analysis for creep, creep rupture, and creep buckling are presented. Example problems are included to illustrate the analytical methods. Creep data for some common structural materials are presented. Appendix B is a description, user's manual, and listing for the creep analysis program. The program predicts time to a given creep or to creep rupture for a material subjected to a specified stress-temperature-time spectrum. Fatigue at elevated temperature is discussed. Methods of analysis for high stress-low cycle fatigue, fatigue below the creep range, and fatigue in the creep range are included. The interaction of thermal fatigue and mechanical loads is considered, and a detailed approach to fatigue analysis is given for structures operating below the creep range.
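
    As a hypothetical sketch of the creep bookkeeping such a program performs, the snippet below combines a Larson-Miller rupture correlation with a life-fraction (Robinson's) rule over a stress-temperature-time spectrum; the correlation constants and the spectrum are invented for illustration, not taken from the report.

      # Invented constants only: time-to-rupture from a Larson-Miller fit,
      # accumulated as life fractions over the operating spectrum.

      import math

      def rupture_time_hours(stress_mpa, temp_k, C=20.0):
          # Hypothetical Larson-Miller fit: LMP = 24000 - 2000*log10(stress)
          lmp = 24000.0 - 2000.0 * math.log10(stress_mpa)
          # LMP = T*(C + log10(t_r))  =>  t_r = 10**(LMP/T - C)
          return 10.0 ** (lmp / temp_k - C)

      spectrum = [(120.0, 800.0, 2000.0),   # (stress MPa, temp K, hours held)
                  (80.0, 850.0, 5000.0),
                  (150.0, 780.0, 500.0)]

      damage = sum(h / rupture_time_hours(s, T) for s, T, h in spectrum)
      print(f"accumulated life fraction: {damage:.2f} (rupture predicted at 1.0)")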

  13. Direct optimization method for reentry trajectory design

    NASA Astrophysics Data System (ADS)

    Jallade, S.; Huber, P.; Potti, J.; Dutruel-Lecohier, G.

    The software package called `Reentry and Atmospheric Transfer Trajectory' (RATT) was developed under ESA contract for the design of atmospheric trajectories. It includes four software programs, among them TOP (Trajectory OPtimization), which optimizes reentry and aeroassisted transfer trajectories. 6FD and 3FD (6 and 3 degrees of freedom Flight Dynamics) are devoted to the simulation of the trajectory. SCA (Sensitivity and Covariance Analysis) performs covariance analysis on a given trajectory with respect to different uncertainties and error sources. TOP provides the optimum guidance law for a three-degree-of-freedom reentry or aeroassisted transfer (AAOT) trajectory. Deorbit and reorbit impulses (if necessary) can be taken into account in the optimization. A wide choice of cost functions is available to the user, such as the integrated heat flux, the sum of the velocity impulses, or a linear combination of both, for trajectory and vehicle design. The crossrange and the downrange can be maximized during the reentry trajectory. Path constraints are available on the load factor, the heat flux, and the dynamic pressure. Results for these proposed options are presented. TOPPHY is the part of the TOP software corresponding to the definition and computation of the optimization problem physics. TOPPHY can interface with several optimizers with dynamic solvers: TOPOP and TROPIC, using direct collocation methods, and PROMIS, using a direct multiple shooting method. TOPOP was developed in the frame of this contract; it uses Hermite polynomials for the collocation method and the NPSOL optimizer from the NAG library. Both TROPIC and PROMIS were developed by the DLR (Deutsche Forschungsanstalt fuer Luft und Raumfahrt) and use the SLSQP optimizer. For the dynamic equation resolution, TROPIC uses a collocation method with splines and PROMIS uses a multiple shooting method with finite differences. The three different optimizers including dynamics were tested on the reentry trajectory of the…

  14. Key Attributes of the SAPHIRE Risk and Reliability Analysis Software for Risk-Informed Probabilistic Applications

    SciTech Connect

    Curtis Smith; James Knudsen; Kellie Kvarfordt; Ted Wood

    2008-08-01

    The Idaho National Laboratory is a primary developer of probabilistic risk and reliability analysis (PRRA) tools, dating back over 35 years. Evolving from mainframe-based software, the current state of the practice has led to the creation of the SAPHIRE software. Currently, agencies such as the Nuclear Regulatory Commission, the National Aeronautics and Space Administration, the Department of Energy, and the Department of Defense use version 7 of the SAPHIRE software for many of their risk-informed activities. In order to better understand and appreciate the power of software as part of risk-informed applications, we need to recall that our current analysis methods and solution methods build upon pioneering work done 30 to 40 years ago. We contrast this work with the current capabilities in the SAPHIRE analysis package. As part of this discussion, we provide information on both the typical features and the special analysis capabilities that are available. We also present the applications and results typically found with state-of-the-practice PRRA models. By providing both a high-level and a detailed look at the SAPHIRE software, we give a snapshot in time of the current use of software tools in a risk-informed decision arena.

  15. Design analysis, robust methods, and stress classification

    SciTech Connect

    Bees, W.J.

    1993-01-01

    This special edition publication volume is comprised of papers presented at the 1993 ASME Pressure Vessels and Piping Conference, July 25--29, 1993 in Denver, Colorado. The papers were prepared for presentations in technical sessions developed under the auspices of the PVPD Committees on Computer Technology, Design and Analysis, Operations Applications and Components. The topics included are: Analysis of Pressure Vessels and Components; Expansion Joints; Robust Methods; Stress Classification; and Non-Linear Analysis. Individual papers have been processed separately for inclusion in the appropriate data bases.

  16. Needs for Risk Informing Environmental Cleanup Decision Making - 13613

    SciTech Connect

    Zhu, Ming; Moorer, Richard

    2013-07-01

    This paper discusses the needs for risk informing decision making by the U.S. Department of Energy (DOE) Office of Environmental Management (EM). The mission of the DOE EM is to complete the safe cleanup of the environmental legacy brought about by the nation's five decades of nuclear weapons development and production and nuclear energy research. This work represents some of the most technically challenging and complex cleanup efforts in the world and is projected to require the investment of billions of dollars and several decades to complete. Quantitative assessments of health and environmental risks play an important role in work prioritization and cleanup decisions for these challenging environmental cleanup and closure projects. The risk assessments often involve evaluation of the performance of integrated engineered barriers and natural systems over a period of hundreds to thousands of years, when subject to complex geo-environmental transformation processes resulting from remediation and disposal actions. The required resource investments for the cleanup efforts and the associated technical challenges have subjected the EM program to continuous scrutiny by oversight entities. Recent DOE reviews recommended application of a risk-informed approach throughout the EM complex for improved targeting of resources. The idea behind this recommendation is that by using risk-informed approaches to prioritize work scope, the available resources can be best utilized to reduce environmental and health risks across the EM complex, while maintaining the momentum of the overall EM cleanup program at a sustainable level. In response to these recommendations, EM is re-examining its work portfolio and key decision making with risk insights for the major sites. This paper summarizes the review findings and recommendations from the DOE internal reviews, discusses the needs for risk informing the EM portfolio, and makes an attempt to identify topics for R and D in integrated…

  17. A structural design decomposition method utilizing substructuring

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1994-01-01

    A new method of design decomposition for structural analysis and optimization is described. For this method, the structure is divided into substructures where each substructure has its structural response described by a structural-response subproblem, and its structural sizing determined from a structural-sizing subproblem. The structural responses of substructures that have rigid body modes when separated from the remainder of the structure are further decomposed into displacements that have no rigid body components, and a set of rigid body modes. The structural-response subproblems are linked together through forces determined within a structural-sizing coordination subproblem, which also determines the magnitude of any rigid body displacements. Structural-sizing subproblems having constraints local to the substructures are linked together through penalty terms that are determined by a structural-sizing coordination subproblem. All the substructure structural-response subproblems are totally decoupled from each other, as are all the substructure structural-sizing subproblems; thus there is significant potential for the use of parallel solution methods for these subproblems.

  18. Research and Design of Rootkit Detection Method

    NASA Astrophysics Data System (ADS)

    Liu, Leian; Yin, Zuanxing; Shen, Yuli; Lin, Haitao; Wang, Hongjiang

    Rootkits are one of the most important security issues in network communication systems, bearing on the security and privacy of Internet users. Exploiting back doors in the operating system, a hacker can use a rootkit to attack and invade other people's computers and thus easily capture passwords and message traffic to and from these computers. With the development of rootkit technology, its applications are more and more extensive and it is becoming increasingly difficult to detect. In addition, for various reasons, such as trade secrets and the difficulty of development, rootkit detection technology information and effective tools remain relatively scarce. In this paper, based on an in-depth analysis of rootkit detection technology, a new kind of rootkit detection structure is designed and a new method (software), X-Anti, is proposed. Test results show that software designed based on the proposed structure is much more efficient than other rootkit detection software.

  19. Method for designing gas tag compositions

    DOEpatents

    Gross, K.C.

    1995-04-11

    For use in the manufacture of gas tags such as employed in a nuclear reactor gas tagging failure detection system, a method for designing gas tagging compositions utilizes an analytical approach wherein the final composition of a first canister of tag gas as measured by a mass spectrometer is designated as node No. 1. Lattice locations of tag nodes in multi-dimensional space are then used in calculating the compositions of a node No. 2 and each subsequent node so as to maximize the distance of each node from any combination of tag components which might be indistinguishable from another tag composition in a reactor fuel assembly. Alternatively, the measured compositions of tag gas numbers 1 and 2 may be used to fix the locations of nodes 1 and 2, with the locations of nodes 3-N then calculated for optimum tag gas composition. A single sphere defining the lattice locations of the tag nodes may be used to define approximately 20 tag nodes, while concentric spheres can extend the number of tag nodes to several hundred. 5 figures.
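
    A minimal sketch of the node-placement idea, with random placeholder isotope-ratio vectors rather than real tag recipes: pick each new tag composition by a farthest-point rule that maximizes its minimum distance from the nodes already chosen, a simple stand-in for the patent's multi-dimensional lattice construction.

      # Invented data only: greedy max-min selection of tag-gas compositions.

      import numpy as np

      rng = np.random.default_rng(7)
      candidates = rng.random((500, 3))  # candidate isotope-ratio vectors
      candidates /= candidates.sum(axis=1, keepdims=True)  # normalize to fractions

      nodes = [candidates[0]]  # node No. 1: the measured first tag composition
      while len(nodes) < 8:
          dists = np.min(
              [np.linalg.norm(candidates - n, axis=1) for n in nodes], axis=0
          )
          nodes.append(candidates[int(np.argmax(dists))])  # farthest-point rule

      for i, n in enumerate(nodes, 1):
          print(f"node {i}: " + " ".join(f"{x:.3f}" for x in n))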

  20. Design Process Guide Method for Minimizing Loops and Conflicts

    NASA Astrophysics Data System (ADS)

    Koga, Tsuyoshi; Aoyama, Kazuhiro

    We propose a new guide method for developing an easy-to-execute design process for product development, one that ensures fewer wasteful iterations and fewer multiple conflicts. The design process is modeled as a sequence of design decisions. A design decision is defined as the process of determining product attributes. A design task is represented as a calculation flow that depends on the product constraints between the product attributes. We also propose an automatic planning algorithm for the execution of the design task, in order to minimize design loops and design conflicts. Further, we validate the effectiveness of the proposed guide method by developing a prototype design system and a design example of piping for a power steering system. We find that the proposed method can successfully minimize design loops and design conflicts. This paper addresses (1) a design loop model, (2) a design conflict model, and (3) how to minimize design loops and design conflicts.

  1. Adjoint methods for aerodynamic wing design

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard

    1993-01-01

    A model inverse design problem is used to investigate the effect of flow discontinuities on the optimization process. The optimization involves finding the cross-sectional area distribution of a duct that produces velocities that closely match a targeted velocity distribution. Quasi-one-dimensional flow theory is used, and the target is chosen to have a shock wave in its distribution. The objective function which quantifies the difference between the targeted and calculated velocity distributions may become non-smooth due to the interaction between the shock and the discretization of the flowfield. This paper offers two techniques to resolve the resulting problems for the optimization algorithms. The first, shock-fitting, involves careful integration of the objective function through the shock wave. The second, coordinate straining with shock penalty, uses a coordinate transformation to align the calculated shock with the target and then adds a penalty proportional to the square of the distance between the shocks. The techniques are tested using several popular sensitivity and optimization methods, including finite-differences, and direct and adjoint discrete sensitivity methods. Two optimization strategies, Gauss-Newton and sequential quadratic programming (SQP), are used to drive the objective function to a minimum.
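
    As a sketch of the basic inverse-design loop before any shock-related complications arise, the snippet below drives a duct area distribution toward a target velocity distribution using finite-difference sensitivities and gradient descent; an incompressible continuity relation (u = 1/A in nondimensional form) stands in for the quasi-one-dimensional flow solver, so the objective here stays smooth.

      # A minimal stand-in for the paper's setup: least-squares matching of a
      # target velocity distribution, with finite-difference sensitivities.

      import numpy as np

      x = np.linspace(0.0, 1.0, 41)
      u_target = 1.0 + 0.4 * np.sin(np.pi * x)  # smooth target velocities

      def velocities(A):
          return 1.0 / A                        # incompressible "flow solver"

      def objective(A):
          return float(np.sum((velocities(A) - u_target) ** 2))

      A = np.ones_like(x)                       # initial area distribution
      step, eps = 0.05, 1e-6
      for it in range(200):
          base = objective(A)
          grad = np.empty_like(A)
          for i in range(len(A)):               # finite-difference gradient
              Ap = A.copy()
              Ap[i] += eps
              grad[i] = (objective(Ap) - base) / eps
          A -= step * grad

      print("final objective: %.2e" % objective(A))

    With a shock in the target, this objective becomes non-smooth at the shock location, which is precisely the problem the shock-fitting and coordinate-straining techniques are designed to address.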

  2. Effects of racial and ethnic group and health literacy on responses to genomic risk information in a medically underserved population

    PubMed Central

    Kaphingst, Kimberly A.; Stafford, Jewel D.; McGowan, Lucy D’Agostino; Seo, Joann; Lachance, Christina R.; Goodman, Melody S.

    2015-01-01

    Objective Few studies have examined how individuals respond to genomic risk information for common, chronic diseases. This randomized study examined differences in responses by type of genomic information [genetic test/family history] and disease condition [diabetes/heart disease] and by race/ethnicity in a medically underserved population. Methods 1057 English-speaking adults completed a survey containing one of four vignettes (two-by-two randomized design). Differences in dependent variables (i.e., interest in receiving genomic assessment, discussing with doctor or family, changing health habits) by experimental condition and race/ethnicity were examined using chi-squared tests and multivariable regression analysis. Results No significant differences were found in dependent variables by type of genomic information or disease condition. In multivariable models, Hispanics were more interested in receiving a genomic assessment than Whites (OR=1.93; p<0.0001); respondents with marginal (OR=1.54; p=0.005) or limited (OR=1.85; p=0.009) health literacy had greater interest than those with adequate health literacy. Blacks (OR=1.78; p=0.001) and Hispanics (OR=1.85; p=0.001) had greater interest in discussing information with family than Whites. Non-Hispanic Blacks (OR=1.45; p=0.04) had greater interest in discussing genomic information with a doctor than Whites. Blacks (β= −0.41; p<0.001) and Hispanics (β= −0.25; p=0.033) intended to change fewer health habits than Whites; health literacy was negatively associated with number of health habits participants intended to change. Conclusions Findings suggest that race/ethnicity may affect responses to genomic risk information. Additional research could examine how cognitive representations of this information differ across racial/ethnic groups. Health literacy is also critical to consider in developing approaches to communicating genomic information. PMID:25622080

  3. Game Methodology for Design Methods and Tools Selection

    ERIC Educational Resources Information Center

    Ahmad, Rafiq; Lahonde, Nathalie; Omhover, Jean-françois

    2014-01-01

    Design process optimisation and intelligence are the key words of today's scientific community. A proliferation of methods has made design a convoluted area. Designers are usually afraid of selecting one method/tool over another and even expert designers may not necessarily know which method is the best to use in which circumstances. This…

  4. A rainfall design method for spatial flood risk assessment: considering multiple flood sources

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Tatano, H.

    2015-08-01

    Information about the spatial distribution of flood risk is important for integrated urban flood risk management. Focusing on urban areas, spatial flood risk assessment must reflect all risk information derived from the multiple flood sources (rivers, drainage, coastal flooding, etc.) that may affect the area. However, conventional flood risk assessment deals with each flood source independently, which leads to an underestimation of flood risk in the floodplain. Even in floodplains that have no risk from coastal flooding, flooding from river channels and inundation caused by insufficient drainage capacity should be considered simultaneously. For integrated flood risk management, it is necessary to establish a methodology to estimate the flood risk distribution across a floodplain. In this paper, a rainfall design method for spatial flood risk assessment, which considers the joint effects of multiple flood sources, is proposed. The concept of a critical rainfall duration, determined by the concentration time of flooding, is introduced to connect the response characteristics of different flood sources with rainfall. A copula method is then adopted to capture the correlation of rainfall amounts across different critical rainfall durations. Rainfall events are designed taking advantage of the copula structure of the correlation and the marginal distributions of rainfall amounts within different critical rainfall durations. A case study in the Otsu River Basin, Osaka Prefecture, Japan, was conducted to demonstrate this methodology.
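
    A minimal sketch of the copula step, with all distributions and parameters invented: couple rainfall amounts for two critical durations (say, 1 h for urban drainage and 24 h for the river) through a Gaussian copula with gamma marginals, then estimate a joint exceedance probability.

      # Invented parameters only: Gaussian copula over two rainfall durations.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      rho = 0.7                                  # copula correlation (assumed)
      cov = np.array([[1.0, rho], [rho, 1.0]])

      z = rng.multivariate_normal(np.zeros(2), cov, size=10_000)
      u = stats.norm.cdf(z)                      # correlated uniforms (the copula)

      rain_1h = stats.gamma.ppf(u[:, 0], a=2.0, scale=15.0)   # mm in 1 hour
      rain_24h = stats.gamma.ppf(u[:, 1], a=3.0, scale=40.0)  # mm in 24 hours

      # Joint design event: both durations exceed their 90th-percentile amount.
      q1 = stats.gamma.ppf(0.9, a=2.0, scale=15.0)
      q24 = stats.gamma.ppf(0.9, a=3.0, scale=40.0)
      p_joint = np.mean((rain_1h > q1) & (rain_24h > q24))
      print(f"P(both exceed 90th percentile) = {p_joint:.3f} (0.01 if independent)")

    The gap between the joint probability and the independence value (0.01) is what a source-by-source assessment misses, and it is the quantity the designed rainfall events are built to respect.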

  5. An inverse design method for 2D airfoil

    NASA Astrophysics Data System (ADS)

    Liang, Zhi-Yong; Cui, Peng; Zhang, Gen-Bao

    2010-03-01

    Computational methods for the aerodynamic design of aircraft are applied more widely than before, and the design of an airfoil is an active problem. Most papers discuss the forward problem, but the inverse method is more useful in practical design. In this paper, the inverse design of a 2D airfoil was investigated. A finite element method based on the variational principle was used to carry out the design. Through simulation, it was shown that the method is suitable for the design.

  6. Translating Vision into Design: A Method for Conceptual Design Development

    NASA Technical Reports Server (NTRS)

    Carpenter, Joyce E.

    2003-01-01

    One of the most challenging tasks for engineers is the definition of design solutions that will satisfy high-level strategic visions and objectives. Even more challenging is the need to demonstrate how a particular design solution supports the high-level vision. This paper describes a process and set of system engineering tools that have been used at the Johnson Space Center to analyze and decompose high-level objectives for future human missions into design requirements that can be used to develop alternative concepts for vehicles, habitats, and other systems. Analysis and design studies of alternative concepts and approaches are used to develop recommendations for strategic investments in research and technology that support the NASA Integrated Space Plan. In addition to a description of system engineering tools, this paper includes a discussion of collaborative design practices for human exploration mission architecture studies used at the Johnson Space Center.

  7. Materials Reliability Program: Risk-Informed Revision of ASME Section XI Appendix G - Proof of Concept (MRP-143)

    SciTech Connect

    B. Bishop; et al

    2005-03-30

    This study indicates that risk-informed methods can be used to significantly relax the current ASME and NRC Appendix G requirements while still maintaining satisfactory levels of reactor vessel structural integrity. This relaxation in Appendix G requirements directly translates into significant improvements in operational flexibility.

  8. Using Software Design Methods in CALL

    ERIC Educational Resources Information Center

    Ward, Monica

    2006-01-01

    The phrase "software design" is not one that arouses the interest of many CALL practitioners, particularly those from a humanities background. However, software design essentials are simply logical ways of going about designing a system. The fundamentals include modularity, anticipation of change, generality and an incremental approach. While CALL…

  9. Methods and Strategies: Derby Design Day

    ERIC Educational Resources Information Center

    Kennedy, Katheryn

    2013-01-01

    In this article the author describes the "Derby Design Day" project--a project that paired high school honors physics students with second-grade children for a design challenge and competition. The overall project goals were to discover whether collaboration in a design process would: (1) increase an interest in science; (2) enhance the…

  10. An Efficient Inverse Aerodynamic Design Method For Subsonic Flows

    NASA Technical Reports Server (NTRS)

    Milholen, William E., II

    2000-01-01

    Computational Fluid Dynamics based design methods are maturing to the point that they are beginning to be used in the aircraft design process. Many design methods however have demonstrated deficiencies in the leading edge region of airfoil sections. The objective of the present research is to develop an efficient inverse design method which is valid in the leading edge region. The new design method is a streamline curvature method, and a new technique is presented for modeling the variation of the streamline curvature normal to the surface. The new design method allows the surface coordinates to move normal to the surface, and has been incorporated into the Constrained Direct Iterative Surface Curvature (CDISC) design method. The accuracy and efficiency of the design method is demonstrated using both two-dimensional and three-dimensional design cases.

  11. Does genomic risk information motivate people to change their behavior?

    PubMed

    Henrikson, Nora B; Bowen, Deborah; Burke, Wylie

    2009-01-01

    The recent flood of information about new gene variants associated with chronic disease risk from genome-wide association studies has understandably led to enthusiasm that genetic discoveries could reduce disease burdens, and to an increasing availability of direct-to-consumer tests offering risk information. However, we suggest caution: if it is to be of any benefit to health, genetic risk information needs to prompt individuals to pursue risk-reduction behaviors, yet early evidence suggests that genetic risk may not be an effective motivator of behavior change. It is not clear how genetic information will inform risk-based behavioral intervention, or what harms might occur. Research is needed that examines the behavioral consequences of genetic risk knowledge in the context of other motivators and social conditions, as well as research that determines the subgroups of people most likely to be motivated, in order to inform policy decisions about emerging genetic susceptibility tests. Without such research, it will not be possible to determine the appropriate health care uses for such tests, the impact on health care resources from consumer-initiated testing, or the criteria for truthful advertising of direct-to-consumer tests. PMID:19341508

  12. Design optimization method for Francis turbine

    NASA Astrophysics Data System (ADS)

    Kawajiri, H.; Enomoto, Y.; Kurosawa, S.

    2014-03-01

    This paper presents a design optimization system coupled with CFD. The optimization algorithm of the system employs particle swarm optimization (PSO). Blade shape design is carried out with a NURBS curve defined by a series of control points. The system was applied to designing the stationary vanes and the runner of a higher-specific-speed Francis turbine. As the first step, single-objective optimization was performed on the stay vane profile; the second step was multi-objective optimization of the runner over a wide operating range. As a result, it was confirmed that the design system is useful for the development of hydro turbines.
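
    As a sketch of the optimization loop described above (a minimal particle swarm optimizer in Python, not the authors' implementation), with a hypothetical objective blade_loss standing in for the CFD evaluation of a NURBS-parameterized blade:

        import numpy as np

        def blade_loss(x):
            # Hypothetical stand-in for the CFD evaluation of a blade whose
            # shape is set by NURBS control-point coordinates x.
            return np.sum((x - 0.3) ** 2)

        def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            x = rng.uniform(-1.0, 1.0, (n_particles, dim))    # positions
            v = np.zeros_like(x)                              # velocities
            pbest = x.copy()
            pbest_f = np.array([f(p) for p in x])
            gbest = pbest[np.argmin(pbest_f)]
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                gbest = pbest[np.argmin(pbest_f)]
            return gbest, pbest_f.min()

        best_x, best_f = pso(blade_loss, dim=8)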

  13. Alternative methods for the design of jet engine control systems

    NASA Technical Reports Server (NTRS)

    Sain, M. K.; Leake, R. J.; Basso, R.; Gejji, R.; Maloney, A.; Seshadri, V.

    1976-01-01

    Various alternatives to linear quadratic design methods for jet engine control systems are discussed. The main alternatives are classified into two broad categories: nonlinear global mathematical programming methods and linear local multivariable frequency domain methods. Specific studies within these categories include model reduction, the eigenvalue locus method, the inverse Nyquist method, polynomial design, dynamic programming, and conjugate gradient approaches.

  14. Demystifying Mixed Methods Research Design: A Review of the Literature

    ERIC Educational Resources Information Center

    Caruth, Gail D.

    2013-01-01

    Mixed methods research evolved in response to the observed limitations of both quantitative and qualitative designs and is a more complex method. The purpose of this paper was to examine mixed methods research in an attempt to demystify the design thereby allowing those less familiar with its design an opportunity to utilize it in future research.…

  15. Computational Methods Applied to Rational Drug Design

    PubMed Central

    Ramírez, David

    2016-01-01

    Due to the synergic relationship between medicinal chemistry, bioinformatics and molecular simulation, the development of new accurate computational tools for small-molecule drug design has been rising in recent years. The main result is the increased number of publications where computational techniques such as molecular docking, de novo design, and virtual screening have been used to estimate the binding mode, site and energy of novel small molecules. In this work I review some tools which enable the study of biological systems at the atomistic level, providing relevant information and thereby enhancing the process of rational drug design. PMID:27708723

  16. Supersonic biplane design via adjoint method

    NASA Astrophysics Data System (ADS)

    Hu, Rui

    In developing the next generation supersonic transport airplane, two major challenges must be resolved. The fuel efficiency must be significantly improved, and the sonic boom propagating to the ground must be dramatically reduced. Both of these objectives can be achieved by reducing the shockwaves formed in supersonic flight. The Busemann biplane is famous for using favorable shockwave interaction to achieve nearly shock-free supersonic flight at its design Mach number. Its performance at off-design Mach numbers, however, can be very poor. This dissertation studies the performance of supersonic biplane airfoils at design and off-design conditions. The choked flow and flow-hysteresis phenomena of these biplanes are studied. These effects are due to the finite thickness of the airfoils and the non-uniqueness of the solution to the Euler equations, creating over an order of magnitude more wave drag than that predicted by supersonic thin airfoil theory. As a result, the off-design performance is the major barrier to the practical use of supersonic biplanes. The main contribution of this work is to drastically improve the off-design performance of supersonic biplanes by using an adjoint-based aerodynamic optimization technique. The Busemann biplane is used as the baseline design, and its shape is altered to achieve optimal wave drag over a series of Mach numbers ranging from 1.1 to 1.7, during both acceleration and deceleration conditions. The optimized biplane airfoils dramatically reduce the effects of the choked flow and flow-hysteresis phenomena, while maintaining a certain degree of favorable shockwave interaction effects at the design Mach number. Compared to a diamond-shaped single airfoil of the same total thickness, the wave drag of our optimized biplane is lower at almost all Mach numbers, and is significantly lower at the design Mach number. In addition, by performing a Navier-Stokes solution for the optimized airfoil, it is verified that the optimized biplane improves

  17. Light Water Reactor Sustainability Program Risk Informed Safety Margin Characterization (RISMC) Advanced Test Reactor Demonstration Case Study

    SciTech Connect

    Curtis Smith; David Schwieder; Cherie Phelan; Anh Bui; Paul Bayless

    2012-08-01

    Safety is central to the design, licensing, operation, and economics of Nuclear Power Plants (NPPs). Consequently, the ability to better characterize and quantify safety margin holds the key to improved decision making about LWR design, operation, and plant life extension. A systematic approach to the characterization of safety margins and the subsequent margins management options represents a vital input to the licensee and regulatory analysis and decision making that will be involved. The purpose of the RISMC Pathway R&D is to support plant decisions for risk-informed margins management, with the aim of improving the economics and reliability, and sustaining the safety, of current NPPs. The goals of the RISMC Pathway are twofold: (1) develop and demonstrate a risk-assessment method coupled to safety margin quantification that can be used by NPP decision makers as part of their margin recovery strategies; (2) create an advanced “RISMC toolkit” that enables more accurate representation of NPP safety margin. This report describes the RISMC methodology demonstration in which the Advanced Test Reactor (ATR) was used as a test-bed for purposes of determining safety margins. As part of the demonstration, we describe how the thermal-hydraulic and probabilistic safety calculations are integrated and used to quantify margin management strategies.

  18. Probabilistic Methods for Structural Design and Reliability

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Whitlow, Woodrow, Jr. (Technical Monitor)

    2002-01-01

    This report describes a formal method to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level, where primitive variables with their respective scatter ranges are used to describe behavior. Computational simulation is then used to propagate the uncertainties to the structural scale, where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from this method demonstrate that it is mature and that it can be used to probabilistically evaluate turbine engine structural components. It may be inferred from the results that the method is suitable for probabilistically predicting the remaining life in aging or deteriorating structures, for making strategic projections and plans, and for achieving better, cheaper, faster products that give competitive advantages in world markets.
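
    The propagation step can be sketched with a Monte Carlo estimate of failure probability from scattered primitive variables; the distributions below are assumed for illustration, not taken from the report (Python):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        strength = rng.normal(900.0, 60.0, n)   # MPa, material strength scatter (assumed)
        stress = rng.normal(650.0, 90.0, n)     # MPa, operating stress scatter (assumed)

        pf = np.mean(stress > strength)         # estimated probability of failure
        print(f"P(failure) ~ {pf:.4f}, reliability ~ {1.0 - pf:.4f}")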

  19. A comparison of digital flight control design methods

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Parsons, E.; Tashker, M. G.

    1976-01-01

    Many variations in design methods for aircraft digital flight control have been proposed in the literature. In general, the methods fall into two categories: those where the design is done in the continuous domain (or s-plane), and those where the design is done in the discrete domain (or z-plane). This paper evaluates several variations of each category and compares them for various flight control modes of the Langley TCV Boeing 737 aircraft. Design method fidelity is evaluated by examining closed-loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps, except the 'uncompensated s-plane design' method, which was acceptable above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
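
    The design-in-s-plane-then-discretize workflow the paper compares can be sketched in Python with SciPy; the compensator below is an assumed example, not one of the TCV control laws:

        import numpy as np
        from scipy.signal import cont2discrete

        num, den = [1.0, 1.0], [1.0, 10.0]      # assumed lead compensator (s+1)/(s+10)

        for rate_cps in (5, 10, 20):            # sample rates to compare
            dt = 1.0 / rate_cps
            numd, dend, _ = cont2discrete((num, den), dt, method='bilinear')
            print(f"{rate_cps:3d} cps: z-plane pole(s) {np.roots(dend)}")

    Comparing the discrete pole locations (and the resulting closed-loop response) across sample rates is the kind of fidelity check the study performs.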

  20. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K_a, generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K_a values.
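
    The two-stage pattern (a trained surrogate queried by a genetic algorithm) can be sketched as follows in Python; ka_model stands in for the trained neural network and is purely illustrative, as are the GA settings:

        import numpy as np

        def ka_model(x):
            # Stand-in for the neural-network model of the cyclic oxidation
            # attack parameter K_a as a function of normalized chemistry x.
            return 1.0 + np.sum((x - np.linspace(0.2, 0.8, x.size)) ** 2)

        def ga_minimize(f, dim, pop=30, gens=80, mut=0.1, seed=0):
            rng = np.random.default_rng(seed)
            P = rng.random((pop, dim))                    # initial population
            for _ in range(gens):
                fit = np.array([f(p) for p in P])
                parents = P[np.argsort(fit)][: pop // 2]  # truncation selection
                kids = []
                for _ in range(pop - len(parents)):
                    a, b = parents[rng.integers(len(parents), size=2)]
                    child = np.where(rng.random(dim) < 0.5, a, b)  # uniform crossover
                    child += rng.normal(0.0, mut, dim)             # Gaussian mutation
                    kids.append(np.clip(child, 0.0, 1.0))
                P = np.vstack([parents, np.array(kids)])
            fit = np.array([f(p) for p in P])
            return P[np.argmin(fit)], fit.min()

        best_chemistry, best_ka = ga_minimize(ka_model, dim=6)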

  1. The Application of Climate Risk Informed Decision Analysis to the Ioland Water Treatment Plant in Lusaka, Zambia

    NASA Astrophysics Data System (ADS)

    Kucharski, John; Tkach, Mark; Olszewski, Jennifer; Chaudhry, Rabia; Mendoza, Guillermo

    2016-04-01

    This presentation demonstrates the application of Climate Risk Informed Decision Analysis (CRIDA) at Zambia's principal water treatment facility, the Iolanda Water Treatment Plant. The water treatment plant is prone to unacceptable failures during periods of low hydropower production at the Kafue Gorge Dam Hydroelectric Power Plant. The case study explores approaches to increasing the water treatment plant's ability to deliver acceptable levels of service under the range of current and potential future climate states. The objective of the study is to investigate alternative investments to build system resilience that might have been informed by the CRIDA process, and to evaluate the extra resource requirements by a bilateral donor agency to implement the CRIDA process. The case study begins with an assessment of the water treatment plant's vulnerability to climate change. It does so by following the general principles described in "Confronting Climate Uncertainty in Water Resource Planning and Project Design: the Decision Tree Framework". By utilizing relatively simple bootstrapping methods, a range of possible future climate states is generated while avoiding the use of more complex and costly downscaling methodologies that are beyond the budget and technical capacity of many teams. The resulting climate vulnerabilities, and the uncertainty in the climate states that produce them, are analyzed as part of a "Level of Concern" analysis. CRIDA principles are then applied to this Level of Concern analysis in order to arrive at a set of actionable water management decisions. The principal goal of water resource management is to transform variable, uncertain hydrology into dependable services (e.g. water supply, flood risk reduction, ecosystem benefits, hydropower production, etc…). Traditional approaches to climate adaptation require the generation of predicted future climate states but do little to guide decision makers on how this information should impact decision making. In
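
    The bootstrapping idea can be sketched in a few lines of Python; the inflow record, horizon, and failure threshold below are invented for illustration, not Kafue basin data:

        import numpy as np

        rng = np.random.default_rng(42)
        observed = rng.gamma(shape=4.0, scale=250.0, size=40)   # assumed 40-yr annual inflow record

        def bootstrap_traces(record, n_traces=1000, horizon=30, scale=1.0):
            # Resample annual values with replacement; 'scale' shifts the mean
            # to represent hypothetical drier (or wetter) future climate states.
            idx = rng.integers(len(record), size=(n_traces, horizon))
            return scale * record[idx]

        for scale in (1.0, 0.9, 0.8):    # baseline and -10%, -20% mean inflow futures
            traces = bootstrap_traces(observed, scale=scale)
            p_fail = np.mean(traces.min(axis=1) < 300.0)   # assumed critical low-flow threshold
            print(f"scale={scale}: P(at least one critical year) = {p_fail:.2f}")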

  2. The Triton: Design concepts and methods

    NASA Technical Reports Server (NTRS)

    Meholic, Greg; Singer, Michael; Vanryn, Percy; Brown, Rhonda; Tella, Gustavo; Harvey, Bob

    1992-01-01

    During the design of the C & P Aerospace Triton, a few problems were encountered that necessitated changes in the configuration. After the initial concept phase, the aspect ratio was increased from 7 to 7.6 to produce a greater lift to drag ratio (L/D = 13) which satisfied the horsepower requirements (118 hp using the Lycoming O-235 engine). The initial concept had a wing planform area of 134 sq. ft. Detailed wing sizing analysis enlarged the planform area to 150 sq. ft., without changing its layout or location. The most significant changes, however, were made just prior to inboard profile design. The fuselage external diameter was reduced from 54 to 50 inches to reduce drag to meet the desired cruise speed of 120 knots. Also, the nose was extended 6 inches to accommodate landing gear placement. Without the extension, the nosewheel received an unacceptable percentage (25 percent) of the landing weight. The final change in the configuration was made in accordance with the stability and control analysis. In order to reduce the static margin from 20 to 13 percent, the horizontal tail area was reduced from 32.02 to 25.0 sq. ft. The Triton meets all the specifications set forth in the design criteria. If time permitted another iteration of the calculations, two significant changes would be made. The vertical stabilizer area would be reduced to decrease the aircraft lateral stability slope since the current value was too high in relation to the directional stability slope. Also, the aileron size would be decreased to reduce the roll rate below the current 106 deg/second. Doing so would allow greater flap area (increasing CL_max) and thus reduce the overall wing area. C & P would also recalculate the horsepower and drag values to further validate the 120 knot cruising speed.

  3. Skin Cancer Concerns and Genetic Risk Information-Seeking in Primary Care

    PubMed Central

    Hay, J.; Kaphingst, K.A.; Baser, R.; Li, Y.; Hensley-Alford, S.; McBride, C.M.

    2012-01-01

    Background: Genomic testing for common genetic variants associated with skin cancer risk could enable personalized risk feedback to motivate skin cancer screening and sun protection. Methods: In a cross-sectional study, we investigated whether skin cancer cognitions and behavioral factors, sociodemographics, family factors, and health information-seeking were related to the perceived importance of learning about how (a) genes and (b) health habits affect personal health risks, using classification and regression trees (CART). Results: The sample (n = 1,772), collected in a large health maintenance organization as part of the Multiplex Initiative, ranged in age from 25 to 40 and was 53% female, 41% Caucasian, and 59% African-American. Most reported that they placed somewhat to very high importance on learning about how genes (79%) and health habits (88%) affect their health risks. Social influence factors were associated with information-seeking about genes and health habits. Awareness of family history was associated with the importance of health habit, but not genetic, information-seeking. Conclusions: The investment of family and friends in health promotion may be a primary motivator for prioritizing information-seeking about how genes and health habits affect personal health risks and may contribute to the personal value, or personal utility, of risk information. Individuals who seek such risk information may be receptive to interventions aimed to maximize the social implications of healthy lifestyle change to reduce their health risks. PMID:21921576
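
    For readers unfamiliar with CART, the style of analysis can be sketched with scikit-learn on synthetic data; the study's actual variables, sample, and fitted trees are not reproduced here:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(0)
        n = 500
        X = np.column_stack([
            rng.integers(0, 2, n),     # social influence present (synthetic)
            rng.integers(0, 2, n),     # aware of family history (synthetic)
            rng.integers(25, 41, n),   # age (synthetic)
        ])
        # Synthetic outcome: high importance placed on genetic risk information
        y = ((X[:, 0] == 1) & (rng.random(n) < 0.8)) | (rng.random(n) < 0.3)

        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(export_text(tree, feature_names=["social_influence", "family_history", "age"]))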

  4. Setting risk-informed environmental standards for Bacillus anthracis spores.

    PubMed

    Hong, Tao; Gurian, Patrick L; Ward, Nicholas F Dudley

    2010-10-01

    In many cases, human health risk from biological agents is associated with aerosol exposures. Because air concentrations decline rapidly after a release, it may be necessary to use concentrations found in other environmental media to infer future or past aerosol exposures. This article presents an approach for linking environmental concentrations of Bacillus anthracis (B. anthracis) spores on walls, floors, ventilation system filters, and in human nasal passages with human health risk from exposure to B. anthracis spores. This approach is then used to calculate example values of risk-informed concentration standards for both retrospective risk mitigation (e.g., prophylactic antibiotics) and prospective risk mitigation (e.g., environmental clean up and reoccupancy). A large number of assumptions are required to calculate these values, and the resulting values have large uncertainties associated with them. The values calculated here suggest that documenting compliance with risks in the range of 10^-4 to 10^-6 would be challenging for small-diameter (respirable) spore particles. For less stringent risk targets and for releases of larger-diameter particles (which are less respirable and hence less hazardous), environmental sampling would be more promising.
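
    The style of back-calculation can be illustrated with the standard exponential dose-response model, risk = 1 - exp(-k * dose); every parameter below (dose-response slope, breathing rate, resuspension factor, exposure duration) is an assumed placeholder, since the article's actual values and pathway models are not reproduced here (Python):

        import numpy as np

        k = 1.65e-5        # assumed exponential dose-response parameter per inhaled spore
        breathing = 0.83   # m^3/h, assumed breathing rate
        resusp = 1e-5      # 1/m, assumed resuspension factor linking floor to air concentration
        hours = 8.0        # assumed exposure duration

        # Floor concentration (spores/m^2) consistent with a 1e-6 risk target:
        target = 1e-6
        dose_target = -np.log(1.0 - target) / k            # inhaled spores allowed
        air_conc = dose_target / (breathing * hours)       # spores/m^3 allowed
        floor_limit = air_conc / resusp                    # spores/m^2 allowed
        print(f"screening floor limit ~ {floor_limit:.2e} spores/m^2")

    With assumptions of this kind the allowable surface concentration at a 10^-6 risk target comes out very low, which is consistent with the article's observation that documenting compliance at such targets is challenging.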

  5. Defining resilience within a risk-informed assessment framework

    SciTech Connect

    Coles, Garill A.; Unwin, Stephen D.; Holter, Gregory M.; Bass, Robert B.; Dagle, Jeffery E.

    2011-08-01

    The concept of resilience is the subject of considerable discussion in academic, business, and governmental circles. The United States Department of Homeland Security, for one, has emphasised the need to consider resilience in safeguarding critical infrastructure and key resources. The concept of resilience is complex, multidimensional, and defined differently by different stakeholders. The authors contend that there is a benefit in moving from discussing resilience as an abstraction to defining resilience as a measurable characteristic of a system. This paper proposes defining resilience measures using elements of a traditional risk assessment framework to help clarify the concept of resilience and as a way to provide non-traditional risk information. The authors show that various, diverse dimensions of resilience can be quantitatively defined in a common risk assessment framework based on the concept of loss of service. This allows the comparison of options for improving the resilience of infrastructure and presents a means to perform cost-benefit analysis. This paper discusses definitions and key aspects of resilience, presents equations for the risk of loss of infrastructure function that incorporate four key aspects of resilience that could prevent or mitigate that loss, describes proposed resilience factor definitions based on those risk impacts, and provides an example that illustrates how resilience factors would be calculated using a hypothetical scenario.
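
    The paper's own equations are not reproduced here, but one plausible general form of such a resilience-adjusted, loss-of-service risk measure (an assumption for illustration, not the authors' exact formulation) is

        R = \sum_{s} f_s \, L_s \prod_{j=1}^{4} (1 - \rho_{j,s})

    where f_s is the frequency of scenario s, L_s the unmitigated loss of infrastructure function, and \rho_{j,s} \in [0,1] the fractional loss reduction attributable to resilience aspect j (e.g. absorption, adaptation, rapid restoration, anticipation).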

  6. A survey on methods of design features identification

    NASA Astrophysics Data System (ADS)

    Grabowik, C.; Kalinowski, K.; Paprocka, I.; Kempa, W.

    2015-11-01

    It is widely accepted that design features are one of the most attractive means of integrating most fields of engineering activity, such as design modelling, process planning or production scheduling. One of the most important tasks realized in the process of integrating design and planning functions is design translation, meant as the mapping of design data into data that are important from the process planning point of view, i.e. manufacturing data. A design geometrical shape translation process can be realized with one of the following strategies: (i) designing with a previously prepared design features library, also known as the DBF (design-by-feature) method; (ii) interactive design features recognition (IFR); (iii) automatic design features recognition (AFR). In the case of the DBF method, the design geometrical shape is created with design features. There are two basic approaches to design modelling in the DBF method: the classic approach, in which a part design is modelled from beginning to end with design features previously stored in a design features database, and the hybrid approach, in which a part is partially created with standard predefined CAD system tools and the rest with suitable design features. Automatic feature recognition consists of autonomous searching of a product model, represented with a specific design representation method, in order to find those model features which might potentially be recognized as design features, manufacturing features, etc. This approach requires a searching algorithm to be prepared; the algorithm should allow the whole recognition process to be carried out without user supervision. Currently there are many AFR methods. These methods most often require the product model to be represented with a B-Rep representation, rarely CSG, very rarely wireframe. In the IFR method, potential features are recognized by a user. This process is most often realized by a user who points out those surfaces which seem to belong to a

  7. A flexible layout design method for passive micromixers.

    PubMed

    Deng, Yongbo; Liu, Zhenyu; Zhang, Ping; Liu, Yongshun; Gao, Qingyong; Wu, Yihui

    2012-10-01

    This paper discusses a flexible layout design method for passive micromixers based on the topology optimization of fluidic flows. Differing from the trial-and-error method, this method obtains the detailed layout of a passive micromixer according to the desired mixing performance by solving a topology optimization problem. Therefore, dependence on the experience of the designer is weakened when this method is used to design a passive micromixer with acceptable mixing performance. Several design disciplines for passive micromixers are considered to demonstrate the flexibility of the layout design method. These design disciplines include the approximation of the real 3D micromixer, manufacturing feasibility, spatially periodic design, and the effects of the Péclet number and Reynolds number on the designs obtained by this layout design method. The capability of this design method is validated by several comparisons performed between the obtained layouts and the optimized designs in recently published literature, where the value of the mixing measurement is improved by up to 40.4% for one cycle of the micromixer. PMID:22736305

  8. Method for designing and controlling compliant gripper

    NASA Astrophysics Data System (ADS)

    Spanu, A. R.; Besnea, D.; Avram, M.; Ciobanu, R.

    2016-08-01

    Compliant grippers are useful for high-accuracy grasping of small objects, with adaptive control of contact points along the active surfaces of the fingers. The spatial trajectories of the elements become a must due to the development of MEMS. The paper presents the solution for the compliant gripper designed by the authors, so both planar and spatial movements are discussed. At the beginning of the process, the gripper can work as a passive one, just for the moment when it has to reach the object surface. The forces provided by the elements have to avoid damaging the object. As part of the system, a camera takes a picture of the object in order to facilitate the positioning of the system. When contact is established, the mechanism acts as an active gripper, driven by an electric stepper motor with controlled movement.

  9. Design Methods and Optimization for Morphing Aircraft

    NASA Technical Reports Server (NTRS)

    Crossley, William A.

    2005-01-01

    This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area, titled "morphing as an independent variable," formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables; this second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.

  10. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods; the variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when the simplified sensitivities of the behavior constraint are used. Such sensitivity can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design, with weight approaching infinity, could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
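
    A one-bar sizing example makes the contrast concrete (all numbers assumed): the traditional method back-calculates area from an allowable stress, while the stochastic concept searches for the area meeting a target reliability (Python):

        import numpy as np
        from scipy.stats import norm

        P_mean, P_sd = 100e3, 10e3     # N, axial load and scatter (assumed)
        S_mean, S_sd = 250e6, 20e6     # Pa, yield strength and scatter (assumed)

        # Traditional: deterministic sizing with a safety factor, weight back-calculated
        A_trad = 1.5 * P_mean / S_mean

        # Stochastic: smallest area with P(stress > strength) below a target
        def pf(area):
            m_mean = S_mean - P_mean / area                 # mean failure margin
            m_sd = np.sqrt(S_sd**2 + (P_sd / area) ** 2)    # margin std deviation
            return norm.cdf(-m_mean / m_sd)

        areas = np.linspace(A_trad, 3.0 * A_trad, 2000)
        A_stoch = next(a for a in areas if pf(a) <= 1e-4)
        print(f"traditional: {A_trad*1e4:.2f} cm^2, reliability-based: {A_stoch*1e4:.2f} cm^2")

    Pushing the target failure rate toward zero drives the required area (and hence weight) up sharply, which is the inverted-S behavior the abstract describes.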

  11. Analyses in support of risk-informed natural gas vehicle maintenance facility codes and standards :

    SciTech Connect

    Ekoto, Isaac W.; Blaylock, Myra L.; LaFleur, Angela Christine; LaChance, Jeffrey L.; Horne, Douglas B.

    2014-03-01

    Safety standards development for maintenance facilities of liquid and compressed gas fueled large-scale vehicles is required to ensure proper facility design and operation envelopes. Standards development organizations are utilizing risk-informed concepts to develop natural gas vehicle (NGV) codes and standards so that maintenance facilities meet acceptable risk levels. The present report summarizes Phase I work on existing NGV repair facility code requirements and highlights inconsistencies that need quantitative analysis of their effectiveness. A Hazard and Operability (HAZOP) study was performed to identify key scenarios of interest. Finally, scenario analyses were performed using detailed simulations and modeling to estimate the overpressure hazards from the HAZOP-defined scenarios. The results from Phase I will be used to identify significant risk contributors at NGV maintenance facilities, and are expected to form the basis for follow-on quantitative risk analysis work to address specific code requirements and identify effective accident prevention and mitigation strategies.

  12. Risk-Informing Safety Reviews for Non-Reactor Nuclear Facilities

    SciTech Connect

    Mubayi, V.; Azarm, A.; Yue, M.; Mukaddam, W.; Good, G.; Gonzalez, F.; Bari, R.A.

    2011-03-13

    This paper describes a methodology used to model potential accidents in fuel cycle facilities that employ chemical processes to separate and purify nuclear materials. The methodology is illustrated with an example that uses event and fault trees to estimate the frequency of a specific energetic reaction that can occur in nuclear material processing facilities. The methodology used probabilistic risk assessment (PRA)-related tools as well as information about the chemical reaction characteristics, information on plant design and operational features, and generic data about component failure rates and human error rates. The accident frequency estimates for the specific reaction help to risk-inform the safety review process and assess compliance with regulatory requirements.
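
    The quantification step can be sketched as an initiating-event frequency multiplied through the branch probabilities of one event-tree sequence, each branch probability itself typically coming from a supporting fault tree; all numbers below are illustrative, not the facility's data (Python):

        # Illustrative event-tree quantification for a process upset that can
        # lead to an energetic chemical reaction (all values assumed, per year).
        init_freq = 1e-2           # /yr, incompatible-reagent process upset
        p_detect_fails = 0.05      # operators/alarms miss the off-normal condition
        p_cooling_fails = 1e-3     # backup cooling unavailable on demand
        p_relief_fails = 0.1       # pressure relief fails to prevent the event

        freq = init_freq * p_detect_fails * p_cooling_fails * p_relief_fails
        print(f"energetic reaction frequency ~ {freq:.1e} per year")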

  13. Analytical techniques for instrument design - matrix methods

    SciTech Connect

    Robinson, R.A.

    1997-09-01

    We take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ and 2 dummy variables)), integration of one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's Law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. We will argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We will also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
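
    One of the toolbox operations, integrating variables out of a Gaussian quadratic form, reduces to taking a Schur complement of the matrix; a numpy sketch, with an arbitrary positive-definite matrix standing in for the real 6-dimensional Cooper-Nathans matrix:

        import numpy as np

        rng = np.random.default_rng(3)
        A = rng.random((6, 6))
        M = A @ A.T + 6.0 * np.eye(6)   # stand-in for the Cooper-Nathans matrix

        def integrate_out(M, keep):
            # Integrating exp(-x^T M x / 2) over the dropped components leaves
            # the Schur complement M_kk - M_ki M_ii^{-1} M_ik on the kept ones.
            keep = np.asarray(keep)
            drop = np.setdiff1d(np.arange(M.shape[0]), keep)
            Mkk = M[np.ix_(keep, keep)]
            Mki = M[np.ix_(keep, drop)]
            Mii = M[np.ix_(drop, drop)]
            return Mkk - Mki @ np.linalg.solve(Mii, Mki.T)

        M_red = integrate_out(M, keep=[0, 1, 2, 3])   # e.g. integrate out 2 dummy variables
        cov = np.linalg.inv(M_red)                    # variance-covariance of kept variables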

  14. Analytical techniques for instrument design -- Matrix methods

    SciTech Connect

    Robinson, R.A.

    1997-12-31

    The authors take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalization to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, they discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix. They show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. They will argue that a generalized program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. They also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.

  15. Quantification of margins and uncertainty for risk-informed decision analysis.

    SciTech Connect

    Alvin, Kenneth Fredrick

    2010-09-01

    QMU stands for 'Quantification of Margins and Uncertainties'. QMU is a basic framework for consistency in integrating simulation, data, and/or subject matter expertise to provide input into a risk-informed decision-making process. QMU is being applied to a wide range of NNSA stockpile issues, from performance to safety. The implementation of QMU varies with lab and application focus. The Advanced Simulation and Computing (ASC) Program develops validated computational simulation tools to be applied in the context of QMU. QMU provides input into a risk-informed decision making process. The completeness aspect of QMU can benefit from the structured methodology and discipline of quantitative risk assessment (QRA)/probabilistic risk assessment (PRA). In characterizing uncertainties it is important to pay attention to the distinction between those arising from incomplete knowledge ('epistemic' or systematic), and those arising from device-to-device variation ('aleatory' or random). The national security labs should investigate the utility of a probability of frequency (PoF) approach in presenting uncertainties in the stockpile. A QMU methodology is connected if the interactions between failure modes are included. The design labs should continue to focus attention on quantifying uncertainties that arise from epistemic uncertainties such as poorly-modeled phenomena, numerical errors, coding errors, and systematic uncertainties in experiment. The NNSA and design labs should ensure that the certification plan for any RRW is supported by strong, timely peer review and by an ongoing, transparent QMU-based documentation and analysis in order to permit a confidence level necessary for eventual certification.

  16. HEALTHY study rationale, design and methods

    PubMed Central

    2009-01-01

    The HEALTHY primary prevention trial was designed and implemented in response to the growing numbers of children and adolescents being diagnosed with type 2 diabetes. The objective was to moderate risk factors for type 2 diabetes. Modifiable risk factors measured were indicators of adiposity and glycemic dysregulation: body mass index ≥85th percentile, fasting glucose ≥5.55 mmol/l (100 mg per 100 ml) and fasting insulin ≥180 pmol/l (30 μU/ml). A series of pilot studies established the feasibility of performing data collection procedures and tested the development of an intervention consisting of four integrated components: (1) changes in the quantity and nutritional quality of food and beverage offerings throughout the total school food environment; (2) physical education class lesson plans and accompanying equipment to increase both participation and the number of minutes spent in moderate-to-vigorous physical activity; (3) brief classroom activities and family outreach vehicles to increase knowledge, enhance decision-making skills, and support and reinforce youth in accomplishing goals; and (4) communications and social marketing strategies to enhance and promote changes through messages, images, events and activities. Expert study staff provided training, assistance, materials and guidance for school faculty and staff to implement the intervention components. A cohort of students was enrolled in sixth grade and followed to the end of eighth grade. They attended a health screening data collection at baseline and at the end of the study that involved measurement of height, weight, blood pressure, waist circumference and a fasting blood draw. Height and weight were also collected at the end of the seventh grade. The study was conducted in 42 middle schools, six at each of seven locations across the country, with 21 schools randomized to receive the intervention and 21 to act as controls (data collection activities only). Middle school was the unit of sample size and

  17. Method speeds tapered rod design for directional well

    SciTech Connect

    Hu Yongquan; Yuan Xiangzhong

    1995-10-16

    Determination of the minimum rod diameter from statistical relationships can decrease the time needed for designing a sucker-rod string for a directional well. A tapered rod string design for a directional well is more complex than for a vertical well. Based on the theory of a continuous beam column, rod string design in a directional well is a trial-and-error process. The key to reducing the solution time is to rapidly determine the minimum rod diameter, which can be done with a statistical relationship. The paper describes sucker rods, the design method, basic rod design analysis, and determination of the minimum rod diameter.

  18. Inhalation exposure systems: design, methods and operation.

    PubMed

    Wong, Brian A

    2007-01-01

    The respiratory system, the major route for entry of oxygen into the body, provides entry for external compounds, including pharmaceutic and toxic materials. These compounds (that might be inhaled under environmental, occupational, medical, or other situations) can be administered under controlled conditions during laboratory inhalation studies. Inhalation study results may be controlled or adversely affected by variability in four key factors: animal environment; exposure atmosphere; inhaled dose; and individual animal biological response. Three of these four factors can be managed through engineering processes. Variability in the animal environment is reduced by engineering control of temperature, humidity, oxygen content, waste gas content, and noise in the exposure facility. Exposure atmospheres are monitored and adjusted to assure a consistent and known exposure for each animal dose group. The inhaled dose, affected by changes in respiration physiology, may be controlled by exposure-specific monitoring of respiration. Selection of techniques and methods for the three factors affected by engineering allows the toxicologic pathologist to study the reproducibility of the fourth factor, the biological response of the animal. PMID:17325967

  19. A new interval optimization method considering tolerance design

    NASA Astrophysics Data System (ADS)

    Jiang, C.; Xie, H. C.; Zhang, Z. G.; Han, X.

    2015-12-01

    This study considers the design variable uncertainty in the actual manufacturing process for a product or structure and proposes a new interval optimization method based on tolerance design, which can provide not only an optimal design but also the allowable maximal manufacturing errors that the design can bear. The design variables' manufacturing errors are depicted using the interval method, and an interval optimization model for the structure is constructed. A dimensionless design tolerance index is defined to describe the overall uncertainty of all design variables, and by combining the nominal objective function, a deterministic two-objective optimization model is built. The possibility degree of interval is used to represent the reliability of the constraints under uncertainty, through which the model is transformed to a deterministic optimization problem. Three numerical examples are investigated to verify the effectiveness of the present method.
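
    One common definition of the possibility degree that an interval A = [aL, aU] lies below an interval B = [bL, bU] can be sketched as follows; this is a standard interval-ranking form, and the paper may use a variant (Python):

        def possibility_degree(aL, aU, bL, bU):
            # P(A <= B): 0 if A lies entirely above B, 1 if entirely below,
            # and linear in between (one common interval-ranking definition).
            width = (aU - aL) + (bU - bL)
            if width == 0.0:
                return 1.0 if aL <= bL else 0.0
            return min(max((bU - aL) / width, 0.0), 1.0)

        # A constraint with interval value [gL, gU] can then be required to satisfy
        # possibility_degree(gL, gU, 0.0, 0.0) >= lam for a chosen reliability level lam,
        # turning the interval constraint into a deterministic one.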

  20. An analytical method for designing low noise helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.

    1978-01-01

    The development and experimental validation of a method for analytically modeling the noise mechanism in the helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise reducing potential of alternative transmission design details. Examples are discussed.

  1. What Can Mixed Methods Designs Offer Professional Development Program Evaluators?

    ERIC Educational Resources Information Center

    Giordano, Victoria; Nevin, Ann

    2007-01-01

    In this paper, the authors describe the benefits and pitfalls of mixed methods designs. They argue that mixed methods designs may be preferred when evaluating professional development programs for p-K-12 education given the new call for accountability in making data-driven decisions. They summarize and critique the studies in terms of limitations…

  2. Turbine blade fixture design using kinematic methods and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Bausch, John J., III

    2000-10-01

    The design of fixtures for turbine blades is a difficult problem even for experienced toolmakers. Turbine blades are characterized by complex 3D surfaces, high performance materials that are difficult to manufacture, close tolerance finish requirements, and high precision machining accuracy. Tool designers typically rely on modified designs based on experience, but have no analytical tools to guide or even evaluate their designs. This paper examines the application of kinematic algorithms to the design of six-point-nest, seventh-point-clamp datum transfer fixtures for turbine blade production. The kinematic algorithms, based on screw coordinate theory, are computationally intensive; when used in a blind search mode, the time required to generate an actual design is unreasonable. In order to reduce the computation time, the kinematic methods are combined with genetic algorithms and a set of heuristic design rules to guide the search. The kinematic, genetic, and heuristic methods were integrated within a fixture design module as part of the Unigraphics CAD system used by Pratt and Whitney. The kinematic design module was used to generate a datum transfer fixture design for a standard production turbine blade. This design was then used to construct an actual fixture, which was compared to the existing production fixture for the same part. The positional accuracy of both designs was compared using a coordinate measurement machine (CMM). Based on the CMM data, the observed variation of the kinematic design was over two orders of magnitude less than that of the production design, resulting in greatly improved accuracy.

  3. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
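
    In its simplest first-order form with independent inputs, the method of moments (the first of the three methods) propagates means and variances through a Taylor expansion of the analysis function; this is the textbook form, not necessarily the exact implementation used in the study:

        \mu_f \approx f(\mu_x), \qquad
        \sigma_f^2 \approx \sum_i \left( \frac{\partial f}{\partial x_i}\Big|_{\mu_x} \right)^2 \sigma_{x_i}^2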

  4. Expanding color design methods for architecture and allied disciplines

    NASA Astrophysics Data System (ADS)

    Linton, Harold E.

    2002-06-01

    The color design processes of visual artists, architects, designers, and theoreticians included in this presentation reflect the practical role of color in architecture. What the color design professional brings to the architectural design team is an expertise and rich sensibility made up of a broad awareness and a finely tuned visual perception. This includes a knowledge of design and its history, expertise with industrial color materials and their methods of application, an awareness of design context and cultural identity, a background in physiology and psychology as it relates to human welfare, and an ability to problem-solve and respond creatively to design concepts with innovative ideas. The broadening of the definition of the colorists's role in architectural design provides architects, artists and designers with significant opportunities for continued professional and educational development.

  5. Aerodynamic design optimization by using a continuous adjoint method

    NASA Astrophysics Data System (ADS)

    Luo, JiaQi; Xiong, JunTao; Liu, Feng

    2014-07-01

    This paper presents the fundamentals of a continuous adjoint method and the applications of this method to the aerodynamic design optimization of both external and internal flows. The general formulation of the continuous adjoint equations and the corresponding boundary conditions is derived. With the adjoint method, the complete gradient information needed in the design optimization can be obtained by solving the governing flow equations and the corresponding adjoint equations only once for each cost function, regardless of the number of design parameters. An inverse design of an airfoil is first performed to study the accuracy of the adjoint gradient and the effectiveness of the adjoint method as an inverse design method. The method is then used to perform a series of single- and multiple-point design optimization problems involving the drag reduction of an airfoil, a wing, and a wing-body configuration, and the aerodynamic performance improvement of turbine and compressor blade rows. The results demonstrate that the continuous adjoint method can efficiently and significantly improve the aerodynamic performance of the design in a shape optimization problem.
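
    The once-per-cost-function property follows from the standard Lagrangian construction; schematically, with flow residual R(w, α) = 0, state w, design parameters α, cost J, and adjoint variable λ (a textbook form consistent with, though not copied from, the paper):

        \frac{dJ}{d\alpha} = \frac{\partial J}{\partial \alpha} + \lambda^{T} \frac{\partial R}{\partial \alpha},
        \qquad \left( \frac{\partial R}{\partial w} \right)^{T} \lambda = -\left( \frac{\partial J}{\partial w} \right)^{T}

    Because the adjoint equation for λ does not depend on the number of design parameters, the full gradient costs one flow solve plus one adjoint solve per cost function.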

  6. Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective

    ERIC Educational Resources Information Center

    Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith

    2010-01-01

    The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…

  7. 77 FR 55832 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of a New Equivalent Method

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-11

    ... made under the provisions of 40 CFR part 53, as amended on August 31, 2011 (76 FR 54326-54341). ... AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method...

  8. An artificial viscosity method for the design of supercritical airfoils

    NASA Technical Reports Server (NTRS)

    Mcfadden, G. B.

    1979-01-01

    A numerical technique is presented for the design of two-dimensional supercritical wing sections with low wave drag. The method is a design mode of the analysis code H, which gives excellent agreement with experimental results and is widely used in the aircraft industry. Topics covered include the partial differential equations of transonic flow; the computational procedure and results; the design procedure; a convergence theorem; and a description of the code.

  9. Numerical methods for aerothermodynamic design of hypersonic space transport vehicles

    NASA Astrophysics Data System (ADS)

    Wanie, K. M.; Brenneis, A.; Eberle, A.; Heiss, S.

    1993-04-01

    The requirement that the design process of hypersonic vehicles predict flow past entire configurations with wings, fins, flaps, and propulsion system represents one of the major challenges for aerothermodynamics. In this context, computational fluid dynamics has emerged as a powerful tool to support the experimental work. A number of numerical methods developed at MBB to fulfill the needs of the design process are described. The governing equations and fundamental details of the solution methods are briefly reviewed. Results are given for both geometrically simple test cases and realistic hypersonic configurations. Since there is still a considerable lack of experience with hypersonic flow calculations, extensive testing and verification are essential. This verification is done by comparison of results with experimental data and other numerical methods. The results presented prove that the methods used are robust, flexible, and accurate enough to fulfill the strong needs of the design process.

  10. New directions for Artificial Intelligence (AI) methods in optimum design

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1989-01-01

    Developments and applications of artificial intelligence (AI) methods in the design of structural systems is reviewed. Principal shortcomings in the current approach are emphasized, and the need for some degree of formalism in the development environment for such design tools is underscored. Emphasis is placed on efforts to integrate algorithmic computations in expert systems.

  11. Two-Method Planned Missing Designs for Longitudinal Research

    ERIC Educational Resources Information Center

    Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D.

    2014-01-01

    We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…

  12. Investigating the Use of Design Methods by Capstone Design Students at Clemson University

    ERIC Educational Resources Information Center

    Miller, W. Stuart; Summers, Joshua D.

    2013-01-01

    The authors describe a preliminary study to understand the attitude of engineering students regarding the use of design methods in projects to identify the factors either affecting or influencing the use of these methods by novice engineers. A senior undergraduate capstone design course at Clemson University, consisting of approximately fifty…

  13. New knowledge network evaluation method for design rationale management

    NASA Astrophysics Data System (ADS)

    Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao

    2015-01-01

    Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is to provide a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives, namely, the design rationale structure scale, association knowledge and reasoning ability, degree of design justification support, and degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge networks and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method which can provide an objective metric and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to offer more effective guidance and support for the application and management of DR knowledge.

  14. Design method for four-reflector type beam waveguide systems

    NASA Technical Reports Server (NTRS)

    Betsudan, S.; Katagi, T.; Urasaki, S.

    1986-01-01

    Discussed is a method for the design of four-reflector beam waveguide feed systems, comprising a conical horn and four focused reflectors, which are used widely as the primary reflector systems for communications satellite Earth station antennas. The design parameters for these systems are clarified, the relations between the parameters are brought out based on the beam mode development, and the independent design parameters are specified. The characteristics of these systems, namely spillover loss, crosspolarization components, and frequency characteristics, and their relation to the design parameters, are also shown. It is also indicated that the design parameters which decide the dimensions of the conical horn or the shape of the focused reflectors can be unerringly established once the design standard for the system has been selected as either: (1) minimizing the crosspolarization component while keeping the spillover loss within acceptable limits, or (2) minimizing the spillover loss while maintaining the crosspolarization components below an acceptable level, and the independent design parameters, such as the respective sizes of the focused reflectors and the distances between them, have been established according to mechanical restrictions. A sample design is also shown. In addition to clarifying the effects of each of the design parameters on the system and improving insight into these systems, this design method will also increase the efficiency of designing such systems.

  15. Epidemiological designs for vaccine safety assessment: methods and pitfalls.

    PubMed

    Andrews, Nick

    2012-09-01

    Three commonly used designs for vaccine safety assessment post licensure are cohort, case-control and self-controlled case series. These methods are often used with routine health databases and immunisation registries. This paper considers the issues that may arise when designing an epidemiological study, such as understanding the vaccine safety question, case definition and finding, limitations of data sources, uncontrolled confounding, and pitfalls that apply to the individual designs. The example of MMR and autism, where all three designs have been used, is presented to help consider these issues. PMID:21985898

  16. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2011-12-01

    Design knowledge of modern mechatronic products takes information processing as the center of knowledge-intensive engineering; thus product design innovation is essentially knowledge and information processing innovation. Analyzing the role of mechatronic product design knowledge and the features of information management, a unified model of an XML-based product information processing method is proposed. The information processing model of product design includes functional knowledge, structural knowledge and their relationships. Expressions for the product function element, the product structure element, and the product mapping relationship between function and structure, based on the XML model, are proposed. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is helpful for knowledge-based design systems and product innovation.

  19. Novel parameter-based flexure bearing design method

    NASA Astrophysics Data System (ADS)

    Amoedo, Simon; Thebaud, Edouard; Gschwendtner, Michael; White, David

    2016-06-01

    A parameter study was carried out on the design variables of a flexure bearing to be used in a Stirling engine with a fixed axial displacement and a fixed outer diameter. A design method was developed in order to assist identification of the optimum bearing configuration. This was achieved through a parameter study of the bearing carried out with ANSYS®. The parameters varied were the number and the width of the arms, the thickness of the bearing, the eccentricity, the size of the starting and ending holes, and the turn angle of the spiral. Comparison was made between the different designs in terms of axial and radial stiffness, the natural frequency, and the maximum induced stresses. Moreover, the Finite Element Analysis (FEA) was compared to theoretical results for a given design. The results led to a graphical design method which assists the selection of flexure bearing geometrical parameters based on pre-determined geometric and material constraints.
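
    The study's parameter sweep can be mimicked in miniature as a grid search that ranks configurations by a stiffness criterion. The scaling laws below are placeholders standing in for the ANSYS results, and the preference for low axial stiffness with high radial stiffness is one plausible objective for a Stirling-engine flexure bearing.

```python
# Toy grid sweep over flexure-bearing parameters; models are placeholders.
import itertools

def axial_stiffness(n_arms, width, thickness):
    return n_arms * width * thickness**3      # invented scaling

def radial_stiffness(n_arms, width, thickness):
    return n_arms * width**3 * thickness      # invented scaling

candidates = itertools.product([3, 4, 5], [2.0, 3.0], [0.3, 0.5])
best = max(candidates,
           key=lambda p: radial_stiffness(*p) / axial_stiffness(*p))
print("preferred (n_arms, width, thickness):", best)
```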

  20. Achieving a Risk-Informed Decision-Making Environment at NASA: The Emphasis of NASA's Risk Management Policy

    NASA Technical Reports Server (NTRS)

    Dezfuli, Homayoon

    2010-01-01

    This slide presentation reviews the evolution of risk management (RM) at NASA. The aim is to promote an RM approach that is heuristic, proactive, and coherent across all of NASA. Risk-Informed Decision Making (RIDM) is a decision-making process that uses a diverse set of performance measures, along with other considerations, within a deliberative process to inform decision making. RIDM is invoked for key decisions such as architecture and design decisions, make-buy decisions, and budget reallocation. The RIDM process and how it relates to the Continuous Risk Management (CRM) process is reviewed.
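
    As a toy illustration of combining a diverse set of performance measures, the sketch below scores two hypothetical alternatives with a weighted sum. In actual RIDM such scores would only inform a deliberative process, not replace it, and the weights and values here are invented.

```python
# Invented weighted-sum surrogate for comparing decision alternatives.
alternatives = {
    "architecture_A": {"safety": 0.9, "cost": 0.5, "schedule": 0.7},
    "architecture_B": {"safety": 0.7, "cost": 0.8, "schedule": 0.8},
}
weights = {"safety": 0.5, "cost": 0.3, "schedule": 0.2}

for name, scores in alternatives.items():
    total = sum(weights[m] * scores[m] for m in weights)
    print(name, round(total, 3))
```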

  1. The Design with Intent Method: a design tool for influencing user behaviour.

    PubMed

    Lockton, Dan; Harrison, David; Stanton, Neville A

    2010-05-01

    Using product and system design to influence user behaviour offers potential for improving performance and reducing user error, yet little guidance is available at the concept generation stage for design teams briefed with influencing user behaviour. This article presents the Design with Intent Method, an innovation tool for designers working in this area, illustrated via application to an everyday human-technology interaction problem: reducing the likelihood of a customer leaving his or her card in an automatic teller machine. The example application results in a range of feasible design concepts which are comparable to existing developments in ATM design, demonstrating that the method has potential for development and application as part of a user-centred design process.

  2. INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN

    EPA Science Inventory

    The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...

  3. Designing, Teaching, and Evaluating Two Complementary Mixed Methods Research Courses

    ERIC Educational Resources Information Center

    Christ, Thomas W.

    2009-01-01

    Teaching mixed methods research is difficult. This longitudinal explanatory study examined how two classes were designed, taught, and evaluated. Curriculum, Research, and Teaching (EDCS-606) and Mixed Methods Research (EDCS-780) used a research proposal generation process to highlight the importance of the purpose, research question and…

  4. Optimal Input Signal Design for Data-Centric Estimation Methods

    PubMed Central

    Deshpande, Sunil; Rivera, Daniel E.

    2013-01-01

    Data-centric estimation methods such as Model-on-Demand and Direct Weight Optimization form attractive techniques for estimating unknown functions from noisy data. These methods rely on generating a local function approximation from a database of regressors at the current operating point with the process repeated at each new operating point. This paper examines the design of optimal input signals formulated to produce informative data to be used by local modeling procedures. The proposed method specifically addresses the distribution of the regressor vectors. The design is examined for a linear time-invariant system under amplitude constraints on the input. The resulting optimization problem is solved using semidefinite relaxation methods. Numerical examples show the benefits in comparison to a classical PRBS input design. PMID:24317042
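
    For reference, the classical PRBS baseline the abstract compares against can be generated with a linear-feedback shift register. The sketch below produces a maximal-length 127-sample sequence for a 7-bit register with feedback taps at bits 7 and 6; the register length and amplitude are arbitrary choices here.

```python
# Maximal-length PRBS from a 7-bit Fibonacci LFSR (x^7 + x^6 + 1).
import numpy as np

def prbs(n_bits=7, taps=(7, 6), length=127, amplitude=1.0):
    state = [1] * n_bits                      # any nonzero seed works
    out = []
    for _ in range(length):
        out.append(amplitude if state[-1] else -amplitude)
        feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [feedback] + state[:-1]
    return np.array(out)

u = prbs()
print(u[:16])
```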

  5. Test methods and design allowables for fibrous composites. Volume 2

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Editor)

    1989-01-01

    Topics discussed include extreme/hostile environment testing, establishing design allowables, and property/behavior specific testing. Papers are presented on environmental effects on the high strain rate properties of graphite/epoxy composite, the low-temperature performance of short-fiber reinforced thermoplastics, the abrasive wear behavior of unidirectional and woven graphite fiber/PEEK, test methods for determining design allowables for fiber reinforced composites, and statistical methods for calculating material allowables for MIL-HDBK-17. Attention is also given to a test method to measure the response of composite materials under reversed cyclic loads, a through-the-thickness strength specimen for composites, the use of torsion tubes to measure in-plane shear properties of filament-wound composites, the influence of test fixture design on the Iosipescu shear test for fiber composite materials, and a method for monitoring in-plane shear modulus in fatigue testing of composites.

  6. Tradeoff methods in multiobjective insensitive design of airplane control systems

    NASA Technical Reports Server (NTRS)

    Schy, A. A.; Giesy, D. P.

    1984-01-01

    The latest results of an ongoing study of computer-aided design of airplane control systems are given. Constrained minimization algorithms are used, with the design objectives in the constraint vector. The concept of Pareto optimality is briefly reviewed, and it is shown how an experienced designer can use it to find designs which are well balanced in all objectives. The problem of finding designs which are insensitive to uncertainty in system parameters is then discussed, introducing a probabilistic vector definition of sensitivity which is consistent with the deterministic Pareto optimal problem. Insensitivity is important in any practical design, but it is particularly important in the design of feedback control systems, since it is considered the most important distinctive property of feedback control. Methods of tradeoff between deterministic and stochastic-insensitive (SI) design are described, and tradeoff design results are presented for the example of a Shuttle lateral stability augmentation system. This example is used because careful studies have been made of the uncertainty in Shuttle aerodynamics. Finally, since accurate statistics of uncertain parameters are usually not available, the effects of crude statistical models on SI designs are examined.
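
    The Pareto-optimality screen can be made concrete: a candidate is kept only if no other candidate is at least as good in every objective and different from it. The sketch below filters an invented set of two-objective designs, with both objectives to be minimized.

```python
# Brute-force Pareto filter over invented (objective1, objective2) pairs.
def pareto_front(points):
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

designs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
print(pareto_front(designs))   # (3.0, 3.0) is dominated by (2.0, 2.0)
```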

  7. Computer method for design of acoustic liners for turbofan engines

    NASA Technical Reports Server (NTRS)

    Minner, G. L.; Rice, E. J.

    1976-01-01

    A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables. The method does not predict multiple pure tones. A target attenuation spectrum was calculated as the difference between the estimated generation spectrum and a flat annoyance-weighted goal spectrum. The target spectrum was combined with a knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method at present is limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (the result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation is carried through to demonstrate the design procedure, and experimental results presented show good agreement with the calculated results of the method.
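
    The target-spectrum step reduces to a band-by-band subtraction. The octave-band levels below are invented; the clip at zero reflects that no attenuation is required in bands where the estimated source already meets the goal.

```python
# Target attenuation = estimated generation spectrum - weighted goal spectrum.
import numpy as np

bands_hz = np.array([500, 1000, 2000, 4000])         # illustrative bands
source_db = np.array([105.0, 110.0, 112.0, 108.0])   # estimated generation
goal_db = np.array([95.0, 95.0, 95.0, 95.0])         # flat weighted goal

target_attenuation = np.clip(source_db - goal_db, 0.0, None)
print(dict(zip(bands_hz.tolist(), target_attenuation.tolist())))
```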

  8. A new method of VLSI conform design for MOS cells

    NASA Astrophysics Data System (ADS)

    Schmidt, K. H.; Wach, W.; Mueller-Glaser, K. D.

    An automated method for the design of specialized SSI/LSI-level MOS cells suitable for incorporation in VLSI chips is described. The method uses the symbolic-layout features of the CABBAGE computer program (Hsueh, 1979; De Man et al., 1982), but restricted by a fixed grid system to facilitate compaction procedures. The techniques used are shown to significantly speed the processes of electrical design, layout, design verification, and description for subsequent CAD/CAM application. In the example presented, a 211-transistor, parallel-load, synchronous 4-bit up/down binary counter cell was designed in 9 days, as compared to 30 days for a manually-optimized-layout version and 3 days for a larger, less efficient cell designed by a programmable logic array; the cell areas were 0.36, 0.21, and 0.79 sq mm, respectively. The primary advantage of the method is seen in the extreme ease with which the cell design can be adapted to new parameters or design rules imposed by improvements in technology.

  9. 78 FR 67360 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods. SUMMARY: ... of the designation of five new equivalent methods for monitoring ambient air quality. The monitors are commercially available from the applicant, Thermo Fisher Scientific...

  10. Method for Enzyme Design with Genetically Encoded Unnatural Amino Acids.

    PubMed

    Hu, C; Wang, J

    2016-01-01

    We describe methodologies for the design of artificial enzymes with genetically encoded unnatural amino acids. Genetically encoded unnatural amino acids offer great promise for constructing artificial enzymes with novel activities. In our studies, the design of an artificial enzyme is divided into two steps. First, we consider the unnatural amino acids and the protein scaffold separately. The scaffold is designed by traditional protein design methods. The unnatural amino acids are inspired by natural structures and organic chemistry, and are synthesized by either organic chemistry methods or enzymatic conversion. Drawing on the growing number of published unnatural amino acids with various functions, we describe an unnatural amino acid toolkit containing metal chelators, redox mediators, and click chemistry reagents. These efforts enable a researcher to search the toolkit for appropriate unnatural amino acids for a study, rather than designing and synthesizing them from scratch. After the first step, the model enzyme is optimized by computational methods and directed evolution. Lastly, we describe a general method for evolving aminoacyl-tRNA synthetases and expressing proteins that incorporate unnatural amino acids. PMID:27586330

  12. Developing Conceptual Hypersonic Airbreathing Engines Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Ferlemann, Shelly M.; Robinson, Jeffrey S.; Martin, John G.; Leonard, Charles P.; Taylor, Lawrence W.; Kamhawi, Hilmi

    2000-01-01

    Designing a hypersonic vehicle is a complicated process due to the multi-disciplinary synergy that is required. The greatest challenge involves propulsion-airframe integration. In the past, a two-dimensional flowpath was generated based on the engine performance required for a proposed mission. A three-dimensional CAD geometry was produced from the two-dimensional flowpath for aerodynamic analysis, structural design, and packaging. The aerodynamics, engine performance, and mass properties are inputs to the vehicle performance tool to determine if the mission goals were met. If the mission goals were not met, then a flowpath and vehicle redesign would begin. This design process might have to be performed several times to produce a "closed" vehicle. This paper will describe an attempt to design a hypersonic cruise vehicle propulsion flowpath using a Design of Experiments method to reduce the resources necessary to produce a conceptual design with fewer iterations of the design cycle. These methods also allow for more flexible mission analysis and incorporation of additional design constraints at any point. A design system was developed using an object-based software package that would quickly generate each flowpath in the study given the values of the geometric independent variables. These flowpath geometries were put into a hypersonic propulsion code and the engine performance was generated. The propulsion results were loaded into statistical software to produce regression equations that were combined with an aerodynamic database to optimize the flowpath at the vehicle performance level. For this example, the design process was executed twice. The first pass was a cursory look at the independent variables selected to determine which variables are the most important and to test all of the inputs to the optimization process. The second cycle is a more in-depth study with more cases and higher order equations representing the design space.
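
    The regression step (turning DOE propulsion runs into response-surface polynomials) looks like the least-squares fit below. The two coded variables, the quadratic model form, and the response values are synthetic stand-ins for the paper's flowpath geometry variables and engine-performance outputs.

```python
# Quadratic response surface fit to a face-centered composite design (synthetic).
import numpy as np

X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = np.array([0.82, 0.85, 0.88, 0.95, 0.84, 0.92, 0.86, 0.91, 0.90])

# model: y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0]**2, X[:, 1]**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 4))
```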

  13. EVALUATING INTERNAL STAKEHOLDER PERSPECTIVES ON RISK-INFORMED REGULATORY PRACTICES FOR THE NUCLEAR REGULATORY COMMISSION

    SciTech Connect

    Peterson, L.K.; Wight, E.H.; Caruso, M.A.

    2003-02-27

    The U.S. Nuclear Regulatory Commission's (NRC) Office of Nuclear Reactor Regulation has begun a program to create a risk-informed environment within the reactor program. The first step of the process is to evaluate the existing environment and internal NRC stakeholder perceptions of risk-informed regulatory practices. This paper reports on the results of the first phase of this evaluation: assessing the current environment, including the level of acceptance of risk-informed approaches throughout the reactor program, the level of integration, areas of success, and areas of difficulty. The other two phases of the evaluation will identify barriers to the integration of risk into NRC activities and gather input on how to move to a risk-informed environment.

  14. A Tutorial on Probabilistic Risk Assessment and Its Role in Risk-Informed Decision Making

    NASA Technical Reports Server (NTRS)

    Dezfuli, Homayoon

    2010-01-01

    This slide presentation reviews risk assessment and its role in risk-informed decision making. It includes information on probabilistic risk assessment, typical risk management process, origins of risk matrix, performance measures, performance objectives and Bayes theorem.
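
    Of the topics listed, Bayes' theorem reduces to a one-line calculation. The sketch below updates the probability that a component is degraded after a failed surveillance test; all three input probabilities are invented for illustration.

```python
# Bayes update: P(degraded | test failed), with invented numbers.
p_degraded = 0.01              # prior probability of degradation
p_fail_given_degraded = 0.90   # test sensitivity
p_fail_given_ok = 0.05         # false-alarm rate

p_fail = (p_fail_given_degraded * p_degraded
          + p_fail_given_ok * (1 - p_degraded))
posterior = p_fail_given_degraded * p_degraded / p_fail
print(f"P(degraded | test failed) = {posterior:.3f}")   # about 0.154
```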

  15. Can genetic risk information enhance motivation for smoking cessation? An analogue study.

    PubMed

    Wright, Alison J; French, David P; Weinman, John; Marteau, Theresa M

    2006-11-01

    Protection motivation theory and the extended parallel processing model are used to predict the motivational impact of information regarding a genetic susceptibility to heart disease. One hundred ninety-eight smokers read 1 of 3 vignettes: gene positive, gene negative, or standard smoking risk information. Analyses examined whether the impact of type of risk information was moderated by smokers' self-efficacy (SE) levels. Key outcomes were intention to quit and intention to attend an information session about quitting. There were significant main effects of SE and of receiving gene-positive risk information on intentions to quit. There was a significant Risk x SE interaction on intentions to attend an information session. SE was not associated with intentions to attend the information session for smokers in the gene-positive group. Intentions to attend the session were negatively associated with SE for smokers in the lower risk groups. Implications for using genetic risk information to motivate smoking cessation are discussed.

  16. A decentralized linear quadratic control design method for flexible structures

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Craig, Roy R., Jr.

    1990-01-01

    A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis (SCS) is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the substructural controller synthesis method is that it relieves the computational burden associated with dimensionality. Besides that, the SCS design scheme is also a highly adaptable controller synthesis method for structures with varying configuration or varying mass.

  17. Climate Risk Informed Decision Analysis: A Hypothetical Application to the Waas Region

    NASA Astrophysics Data System (ADS)

    Gilroy, Kristin; Mens, Marjolein; Haasnoot, Marjolijn; Jeuken, Ad

    2016-04-01

    More frequent and intense hydrologic events under climate change are expected to intensify water security and flood risk management challenges worldwide. Traditional planning approaches must be adapted to address climate change and develop solutions with an appropriate level of robustness and flexibility. The Climate Risk Informed Decision Analysis (CRIDA) method is a novel planning approach embodying a suite of complementary methods, including decision scaling and adaptation pathways. Decision scaling offers a bottom-up approach to assess risk and tailors the complexity of the analysis to the problem at hand and the available capacity. Through adaptation pathways, an array of future strategies towards climate robustness is developed, ranging in flexibility and immediacy of investments. Flexible pathways include transfer points to other strategies to ensure that the system can be adapted if future conditions vary from those expected. CRIDA combines these two approaches in a stakeholder driven process which guides decision makers through the planning and decision process, taking into account how the confidence in the available science, the consequences in the system, and the capacity of institutions should influence strategy selection. In this presentation, we will explain the CRIDA method and compare it to existing planning processes, such as the US Army Corps of Engineers Principles and Guidelines as well as Integrated Water Resources Management Planning. Then, we will apply the approach to a hypothetical case study for the Waas Region, a large downstream river basin facing rapid development threatened by increased flood risks. Through the case study, we will demonstrate how a stakeholder driven process can be used to evaluate system robustness to climate change; develop adaptation pathways for multiple objectives and criteria; and illustrate how varying levels of confidence, consequences, and capacity would play a role in the decision making process.

  18. Study on Communication System of Social Risk Information on Nuclear Energy

    SciTech Connect

    Hidekazu Yoshikawa; Toshio Sugiman; Yasunaga Wakabayashi; Hiroshi Shimoda; Mika Terado; Mariko Akimoto; Yoshihiko Nagasato

    2004-07-01

    As a new risk communication method for constructing effective knowledge bases about 'safety and non-anxiety for nuclear energy', a study on communicating social risk information by electronic means has been started, prompted by the rapid expansion of internet usage in society. The purpose of this research is to enhance public acceptance of nuclear power in Japan through two aspects. The first is to develop a mutual communication system for workers involved in the operation and maintenance of nuclear power plants, through which they can exchange daily experiences to improve safety-conscious activities and foster a 'safety culture' attitude. The other is the development of an effective risk communication system between the nuclear community and the general public on the questions 'what are the concerns involved in the final disposal of high-level radioactive waste?' and 'what should we do to reach social consensus on this issue in the future?'. The authors' research plan for these purposes is summarized in Table 1. As the first step of the authors' three-year research project, which started in August 2003, social investigations by internet and postal questionnaires were recently conducted on risk perception of nuclear power among people engaged in the nuclear business and among women in the metropolitan area, respectively, in order to obtain information on how and what should be considered in constructing effective methods for communicating social risk information between people within the nuclear industries and the general public. Although further discussion is needed, the results of the social investigation depict contrasting risk images (Fig. 1) between nuclear professionals and the general public in Japan today. As the conclusion of the authors' study thus far conducted, the

  19. Rotordynamics and Design Methods of an Oil-Free Turbocharger

    NASA Technical Reports Server (NTRS)

    Howard, Samuel A.

    1999-01-01

    The feasibility of supporting a turbocharger rotor on air foil bearings is investigated based upon predicted rotordynamic stability, load accommodations, and stress considerations. It is demonstrated that foil bearings offer a plausible replacement for oil-lubricated bearings in diesel truck turbochargers. Also, two different rotor configurations are analyzed and the design is chosen which best optimizes the desired performance characteristics. The method of designing machinery for foil bearing use and the assumptions made are discussed.

  20. Mixed methods research design for pragmatic psychoanalytic studies.

    PubMed

    Tillman, Jane G; Clemence, A Jill; Stevens, Jennifer L

    2011-10-01

    Calls for more rigorous psychoanalytic studies have increased over the past decade. The field has been divided by those who assert that psychoanalysis is properly a hermeneutic endeavor and those who see it as a science. A comparable debate is found in research methodology, where qualitative and quantitative methods have often been seen as occupying orthogonal positions. Recently, Mixed Methods Research (MMR) has emerged as a viable "third community" of research, pursuing a pragmatic approach to research endeavors through integrating qualitative and quantitative procedures in a single study design. Mixed Methods Research designs and the terminology associated with this emerging approach are explained, after which the methodology is explored as a potential integrative approach to a psychoanalytic human science. Both qualitative and quantitative research methods are reviewed, as well as how they may be used in Mixed Methods Research to study complex human phenomena.

  1. Scenario building as an ergonomics method in consumer product design.

    PubMed

    Suri, J F; Marsh, M

    2000-04-01

    The role of human factors in design appears to have broadened from data analysis and interpretation into application of discovery and "user experience" design. The human factors practitioner is continually in search of ways to enhance and to better communicate their contributions, as well as to raise the prominence of the user at all stages of the design process. In work with design teams on the development of many consumer products, scenario building has proved to be a valuable addition to the repertoire of more traditional human factors methods. It is a powerful exploration, prototyping and communication tool, and is particularly useful early on in the product design process. This paper describes some advantages and potential pitfalls in using scenarios, and provides examples of how and where they can be usefully applied.

  2. Design of large Francis turbine using optimal methods

    NASA Astrophysics Data System (ADS)

    Flores, E.; Bornard, L.; Tomas, L.; Liu, J.; Couston, M.

    2012-11-01

    Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has been especially involved in the development and equipment of the largest power plants in the world: Three Gorges (China, 32×767 MW, 61 to 113 m), Itaipu (Brazil, 20×750 MW, 98.7 to 127 m) and Xiangjiaba (China, 8×812 MW, 82.5 to 113.6 m, under erection). Many new projects are under study to equip new power plants with Francis turbines in order to answer an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using the state of the art in computational methods and the latest technologies in model testing, as well as maximum feedback from Jumbo plants already in operation. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used thanks to the growing computation capacity of HPC clusters: their intensive use at the turbine design stage allows very high levels of performance to be reached, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.

  3. Computational methods of robust controller design for aerodynamic flutter suppression

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1981-01-01

    The development of Riccati iteration, a tool for the design and analysis of linear control systems is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth order random examples. A literature review of robust controller design methods follows which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
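
    While the paper's Riccati iteration is its own algorithm, the equation it targets can be solved directly with SciPy for a toy system, which makes a handy cross-check. The system matrices below are arbitrary.

```python
# Direct solution of the continuous-time algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # LQR gain, u = -K x
print(np.linalg.eigvals(A - B @ K))      # stable closed-loop poles
```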

  4. Improved method for transonic airfoil design-by-optimization

    NASA Technical Reports Server (NTRS)

    Kennelly, R. A., Jr.

    1983-01-01

    An improved method for use of optimization techniques in transonic airfoil design is demonstrated. FLO6QNM incorporates a modified quasi-Newton optimization package, and is shown to be more reliable and efficient than the method developed previously at NASA-Ames, which used the COPES/CONMIN optimization program. The design codes are compared on a series of test cases with known solutions, and the effects of problem scaling, proximity of initial point to solution, and objective function precision are studied. In contrast to the older method, well-converged solutions are shown to be attainable in the context of engineering design using computational fluid dynamics tools, a new result. The improvements are due to better performance by the optimization routine and to the use of problem-adaptive finite difference step sizes for gradient evaluation.
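
    One factor the study examined, objective function precision, interacts directly with finite-difference step size, which is why problem-adaptive steps matter. The toy objective below carries a small high-frequency "noise" term standing in for incomplete CFD convergence; shrinking the step amplifies the noise in the gradient estimate.

```python
# Finite-difference gradient error versus step size on a "noisy" objective.
import numpy as np

def f(x, noise=1e-6):
    return x**2 + noise * np.sin(1e6 * x)   # smooth trend + solver "noise"

x0, exact = 0.5, 1.0                        # d(x^2)/dx at x0
for h in (1e-1, 1e-3, 1e-6, 1e-9):
    fd = (f(x0 + h) - f(x0 - h)) / (2 * h)
    print(f"h={h:g}: grad={fd:+.6f}  error={abs(fd - exact):.1e}")
```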

  5. Risk Information Exposure and Direct to Consumer Genetic Testing for BRCA Mutations among Women with a Personal or Family History of Breast or Ovarian Cancer

    PubMed Central

    Gray, Stacy W.; O’Grady, Cristin; Karp, Lauren; Smith, Daniel; Schwartz, J. Sanford; Hornik, Robert C.; Armstrong, Katrina

    2009-01-01

    Background: Direct-to-consumer (DTC) BRCA testing may expand access to genetic testing and enhance cancer prevention efforts. However, it is not known whether current DTC websites provide adequate risk information for informed medical decision-making. Methods: 284 women with a personal or family history of breast/ovarian cancer were randomly assigned to view a "mock" DTC commercial website (control condition: CC, n=93) or the same "mock" website that included information on the potential risks of obtaining genetic testing online. Risk information was framed two ways: risk information attributed to expert sources (ES, n=98) and unattributed risk information (URI, n=93). Participants completed an online survey. Endpoints were intentions to get BRCA testing, testing site preference, and beliefs about DTC BRCA testing. Results: Sample characteristics: mean age 39 (range 18–70), 82% white, mean education 3 yrs. college. Women exposed to risk information had lower intentions to get BRCA testing than women in the CC (adjusted odds ratio (OR) 0.48; 95% confidence interval (CI) 0.26–0.87, p=0.016) and less positive beliefs about online BRCA testing (adjusted OR 0.48; 95% CI 0.27–0.86, p=0.014). Women in the ES condition were more likely to prefer clinic-based testing than women in the CC (adjusted OR 2.05; 95% CI 1.07–3.90, p=0.030). Conclusion: Exposing women to information on the potential risks of online BRCA testing altered their intentions, beliefs, and preferences for BRCA testing. Policy makers may want to consider the content and framing of risk information on DTC websites as they formulate regulation for this rapidly growing industry. PMID:19318436

  6. An uncertain multidisciplinary design optimization method using interval convex models

    NASA Astrophysics Data System (ADS)

    Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong

    2013-06-01

    This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.
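
    The interval-number step the abstract describes, replacing one uncertain objective with two deterministic ones, can be sketched by bounding the objective over the parameter interval and returning the interval midpoint and radius. The model f and the interval below are placeholders, and the bounds come from brute-force sampling rather than the paper's formulation.

```python
# Uncertain objective f(x, p), p in [p_lo, p_hi] -> (midpoint, radius).
def f(x, p):
    return (x - p)**2

def interval_objectives(x, p_lo, p_hi, n=101):
    vals = [f(x, p_lo + (p_hi - p_lo) * i / (n - 1)) for i in range(n)]
    lo, hi = min(vals), max(vals)
    return (lo + hi) / 2.0, (hi - lo) / 2.0

print(interval_objectives(1.0, 0.8, 1.3))   # two deterministic objectives
```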

  7. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Phase 1

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas

    1998-01-01

    The NASA Langley Multidisciplinary Design Optimization (MDO) method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. This report documents all computational experiments conducted in Phase I of the study. This report is a companion to the paper titled Initial Results of an MDO Method Evaluation Study by N. M. Alexandrov and S. Kodiyalam (AIAA-98-4884).

  8. Exploration of Advanced Probabilistic and Stochastic Design Methods

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.

    2003-01-01

    The primary objective of the three year research effort was to explore advanced, non-deterministic aerospace system design methods that may have relevance to designers and analysts. The research pursued emerging areas in design methodology and leveraged current fundamental research in the area of design decision-making, probabilistic modeling, and optimization. The specific focus of the three year investigation was oriented toward methods to identify and analyze emerging aircraft technologies in a consistent and complete manner, and to explore means to make optimal decisions based on this knowledge in a probabilistic environment. The research efforts were classified into two main areas. First, Task A of the grant has had the objective of conducting research into the relative merits of possible approaches that account for both multiple criteria and uncertainty in design decision-making. In particular, in the final year of research, the focus was on the comparison and contrasting between three methods researched. Specifically, these three are the Joint Probabilistic Decision-Making (JPDM) technique, Physical Programming, and Dempster-Shafer (D-S) theory. The next element of the research, as contained in Task B, was focused upon exploration of the Technology Identification, Evaluation, and Selection (TIES) methodology developed at ASDL, especially with regards to identification of research needs in the baseline method through implementation exercises. The end result of Task B was the documentation of the evolution of the method with time and a technology transfer to the sponsor regarding the method, such that an initial capability for execution could be obtained by the sponsor. Specifically, the results of year 3 efforts were the creation of a detailed tutorial for implementing the TIES method. Within the tutorial package, templates and detailed examples were created for learning and understanding the details of each step. For both research tasks, sample files and

  9. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
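
    The sensitivity equation idea can be seen on an ODE stand-in for the PDE: differentiate the state equation with respect to the parameter, integrate state and sensitivity together, and assemble the gradient with no mesh sensitivities. The model, the objective J(p) = x(T)^2, and the parameter value below are invented, with an analytic value for checking.

```python
# Sensitivity equation for dx/dt = -p*x: s = dx/dp obeys ds/dt = -x - p*s.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, p):
    x, s = y
    return [-p * x, -x - p * s]

p, T = 0.7, 2.0
sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0], args=(p,), rtol=1e-9, atol=1e-12)
xT, sT = sol.y[:, -1]
print(2.0 * xT * sT)                      # dJ/dp via sensitivity equation
print(-2.0 * T * np.exp(-2.0 * p * T))    # analytic check
```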

  10. Planning for risk-informed/performance-based fire protection at nuclear power plants. Final report

    SciTech Connect

    Najafi, B.; Parkinson, W.J.; Lee, J.A.

    1997-12-01

    This document presents a framework for discussing issues and building consensus towards use of fire modeling and risk technology in nuclear power plant fire protection program implementation. The plan describes a three-phase approach: development of core technologies, implementation of methods, and finally, case studies and pilot applications to verify viability of such methods. The core technologies are defined as fire modeling, fire and system tests, use of operational data, and system and risk techniques. The implementation phase addresses the programmatic issues involved in implementing a risk-informed/performance-based approach in an integrated approach with risk/performance measures. The programmatic elements include: (1) a relationship with fire codes and standards development as defined by the ongoing effort of NFPA for development of performance-based standards; (2) the ability for NRC to undertake inspection and enforcement; and (3) the benefit to utilities in terms of cost versus safety. The case studies are intended to demonstrate applicability of single issue resolution while pilot applications are intended to check the applicability of the integrated program as a whole.

  11. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
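
    A classroom-scale example of the method: an L4(2^3) orthogonal array covers three two-level factors in four trials, and main effects fall out as differences of level means. The response values below are invented.

```python
# Main effects from a Taguchi L4 orthogonal array (invented responses).
import numpy as np

L4 = np.array([[1, 1, 1],
               [1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])                 # columns: factors A, B, C
y = np.array([12.0, 15.0, 20.0, 17.0])

for j, name in enumerate("ABC"):
    effect = y[L4[:, j] == 2].mean() - y[L4[:, j] == 1].mean()
    print(f"factor {name}: effect = {effect:+.1f}")
```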

  12. Function combined method for design innovation of children's bike

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoli; Qiu, Tingting; Chen, Huijuan

    2013-03-01

    As children mature, bike products for children develop along with them, and requirements are frequently updated. Certain problems occur in use, such as cycle overlapping, repeated functions, and short life cycles, which run against the principles of energy conservation and the environmentally protective, intensive design concept. In this paper, a rational multi-function design method based on functional superposition, transformation, and technical implementation is proposed. An organic combination of a frog-style scooter and a children's tricycle is developed using the multi-function method. From an ergonomic perspective, the paper elaborates on the body dimensions of children aged 5 to 12 and extracts data for a multi-function children's bike that can be used for both gliding and riding. By inverting the body, parts can be interchanged between the handles and the pedals of the bike. Finally, the paper provides a detailed analysis of the components and structural design, body material, and processing technology of the bike. This study of industrial product innovation design provides an effective design method that solves the identified bicycle problems, extends product functions, improves the product's market situation, and enhances energy saving while implementing intensive product development.

  13. System Synthesis in Preliminary Aircraft Design using Statistical Methods

    NASA Technical Reports Server (NTRS)

    DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.

    1996-01-01

    This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high speed civil transport. Fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses to the conceptual aircraft synthesis and provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).

  15. A Simple Method for High-Lift Propeller Conceptual Design

    NASA Technical Reports Server (NTRS)

    Patterson, Michael; Borer, Nick; German, Brian

    2016-01-01

    In this paper, we present a simple method for designing propellers that are placed upstream of the leading edge of a wing in order to augment lift. Because the primary purpose of these "high-lift propellers" is to increase lift rather than produce thrust, these props are best viewed as a form of high-lift device; consequently, they should be designed differently than traditional propellers. We present a theory that describes how these props can be designed to provide a relatively uniform axial velocity increase, which is hypothesized to be advantageous for lift augmentation based on a literature survey. Computational modeling indicates that such propellers can generate the same average induced axial velocity while consuming less power and producing less thrust than conventional propeller designs. For an example problem based on specifications for NASA's Scalable Convergent Electric Propulsion Technology and Operations Research (SCEPTOR) flight demonstrator, a propeller designed with the new method requires approximately 15% less power and produces approximately 11% less thrust than one designed for minimum induced loss. Higher-order modeling and/or wind tunnel testing are needed to verify the predicted performance.

  16. An interdisciplinary heuristic evaluation method for universal building design.

    PubMed

    Afacan, Yasemin; Erbug, Cigdem

    2009-07-01

    This study highlights how heuristic evaluation as a usability evaluation method can feed into current building design practice to conform to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluation in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed regarding the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative evaluation process combined with multi-sessions rather than relying on one evaluator and on one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee regarding the heuristic abilities of each profession. A multi-session and interdisciplinary heuristic evaluation method can save both the project budget and the required time, while ensuring a reduced error rate for the universal usage of the built environments.

  17. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
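
    The role of the Fisher information matrix can be sketched for the Verhulst-Pearl logistic model cited above: build the FIM from output sensitivities at candidate sampling times and compare designs by its determinant, the D-optimality criterion. Parameter values, noise level, and the two candidate time sets are invented, and sensitivities are approximated by finite differences rather than derived analytically.

```python
# D-optimality comparison of two sampling-time sets for a logistic model.
import numpy as np

def logistic(t, K, r, x0=1.0):
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

def fim(times, K=10.0, r=0.5, h=1e-6, sigma=1.0):
    g = np.empty((len(times), 2))
    g[:, 0] = (logistic(times, K + h, r) - logistic(times, K - h, r)) / (2 * h)
    g[:, 1] = (logistic(times, K, r + h) - logistic(times, K, r - h)) / (2 * h)
    return g.T @ g / sigma**2

early = np.array([0.5, 1.0, 1.5, 2.0])
spread = np.array([1.0, 4.0, 8.0, 14.0])
print(np.linalg.det(fim(early)), np.linalg.det(fim(spread)))  # spread wins
```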

  18. New Methods and Transducer Designs for Ultrasonic Diagnostics and Therapy

    NASA Astrophysics Data System (ADS)

    Rybyanets, A. N.; Naumenko, A. A.; Sapozhnikov, O. A.; Khokhlova, V. A.

    Recent advances in the field of physical acoustics, imaging technologies, piezoelectric materials, and ultrasonic transducer design have led to the emergence of novel methods and apparatus for ultrasonic diagnostics, therapy and body aesthetics. The paper presents results on the development and experimental study of different high intensity focused ultrasound (HIFU) transducers. Technological peculiarities of the HIFU transducer design as well as theoretical and numerical models of such transducers and the corresponding HIFU fields are discussed. Several HIFU transducers of different design have been fabricated using different advanced piezoelectric materials. Acoustic field measurements for those transducers have been performed using a calibrated fiber optic hydrophone and an ultrasonic measurement system (UMS). The results of ex vivo experiments with different tissues as well as in vivo experiments with blood vessels are presented, demonstrating the efficacy, safety and selectivity of the developed HIFU transducers and methods.

  19. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  20. Obtaining Valid Response Rates: Considerations beyond the Tailored Design Method.

    ERIC Educational Resources Information Center

    Huang, Judy Y.; Hubbard, Susan M.; Mulvey, Kevin P.

    2003-01-01

    Reports on the use of the tailored design method (TDM) to achieve high survey response in two separate studies of the dissemination of Treatment Improvement Protocols (TIPs). Findings from these two studies identify six factors that may have influenced nonresponse, and show that use of TDM does not, in itself, guarantee a high response rate. (SLD)

  1. 14 CFR 161.9 - Designation of noise description methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and methods prescribed under appendix A of 14 CFR part 150; and (b) use of computer models to create noise contours must be in accordance with the criteria prescribed under appendix A of 14 CFR part 150.

  2. 14 CFR 161.9 - Designation of noise description methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and methods prescribed under appendix A of 14 CFR part 150; and (b) Use of computer models to create noise contours must be in accordance with the criteria prescribed under appendix A of 14 CFR part 150. ... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Designation of noise description...

  3. Impact design methods for ceramic components in gas turbine engines

    NASA Technical Reports Server (NTRS)

    Song, J.; Cuccio, J.; Kington, H.

    1991-01-01

    Methods currently under development to design ceramic turbine components with improved impact resistance are presented. Two different modes of impact damage are identified and characterized, i.e., structural damage and local damage. The entire computation is incorporated into the EPIC computer code. Model capability is demonstrated by simulating instrumented plate impact and particle impact tests.

  4. Designs and Methods in School Improvement Research: A Systematic Review

    ERIC Educational Resources Information Center

    Feldhoff, Tobias; Radisch, Falk; Bischof, Linda Marie

    2016-01-01

    Purpose: The purpose of this paper is to focus on challenges faced by longitudinal quantitative analyses of school improvement processes and to offer a systematic literature review of current papers that use longitudinal analyses. In this context, the authors assessed designs and methods that are used to analyze the relation between school…

  5. Polypharmacology: in silico methods of ligand design and development.

    PubMed

    McKie, Samuel A

    2016-04-01

    How to design a ligand to bind multiple targets, rather than a single target, is the focus of this review. Rational polypharmacology draws on knowledge that is both broad ranging and hierarchical. Computer-aided multitarget ligand design methods are described according to their nested knowledge level. Ligand-only and then receptor-ligand strategies are first described, followed by the metabolic network viewpoint. Subsequently, strategies that view infectious diseases as multigenomic targets are discussed, and finally the disease-level interpretation of medicinal therapy is considered. As yet there is no consensus on how best to proceed in designing a multitarget ligand. The current methodologies are brought together in an attempt to give a practical overview of how polypharmacology design might be best initiated. PMID:27105127

  7. Guidance for using mixed methods design in nursing practice research.

    PubMed

    Chiang-Hanisko, Lenny; Newman, David; Dyess, Susan; Piyakong, Duangporn; Liehr, Patricia

    2016-08-01

    The mixed methods approach purposefully combines both quantitative and qualitative techniques, enabling a multi-faceted understanding of nursing phenomena. The purpose of this article is to introduce three mixed methods designs (parallel; sequential; conversion) and highlight interpretive processes that occur with the synthesis of qualitative and quantitative findings. Real-world examples of research studies conducted by the authors demonstrate the processes leading to the merger of data. The examples include research questions, data collection procedures, and analysis with a focus on synthesizing findings. Based on their experience with mixed methods studies, the authors introduce two synthesis patterns (complementary; contrasting), considering application for practice and implications for research. PMID:27397810

  8. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  9. A Mixed Methods Investigation of Mixed Methods Sampling Designs in Social and Health Science Research

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.

    2007-01-01

    A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…

  10. Mixed methods research: a design for emergency care research?

    PubMed

    Cooper, Simon; Porter, Jo; Endacott, Ruth

    2011-08-01

    This paper follows previous publications on generic qualitative approaches, qualitative designs and action research in emergency care by this group of authors. Contemporary views on mixed methods approaches are considered, with a particular focus on the design choice and the amalgamation of qualitative and quantitative data emphasising the timing of data collection for each approach, their relative 'weight' and how they will be mixed. Mixed methods studies in emergency care are reviewed before the variety of methodological approaches and best practice considerations are presented. The use of mixed methods in clinical studies is increasing, aiming to answer questions such as 'how many' and 'why' in the same study, and as such are an important and useful approach to many key questions in emergency care.

  11. Behavioral response to contamination risk information in a spatially explicit groundwater environment: Experimental evidence

    NASA Astrophysics Data System (ADS)

    Li, Jingyuan; Michael, Holly A.; Duke, Joshua M.; Messer, Kent D.; Suter, Jordan F.

    2014-08-01

    This paper assesses the effectiveness of aquifer monitoring information in achieving more sustainable use of a groundwater resource in the absence of management policy. Groundwater user behavior in the face of an irreversible contamination threat is studied by applying methods of experimental economics to scenarios that combine a physics-based, spatially explicit, numerical groundwater model with different representations of information about an aquifer and its risk of contamination. The results suggest that the threat of catastrophic contamination affects pumping decisions: pumping is significantly reduced in experiments where contamination is possible compared to those where pumping cost is the only factor discouraging groundwater use. The level of information about the state of the aquifer also affects extraction behavior. Pumping rates differ when information that synthesizes data on aquifer conditions (a "risk gauge") is provided, despite invariant underlying economic incentives, and this result does not depend on whether the risk information is location-specific or from a whole aquifer perspective. Interestingly, users increase pumping when the risk gauge signals good aquifer status compared to a no-gauge treatment. When the gauge suggests impending contamination, however, pumping declines significantly, resulting in a lower probability of contamination. The study suggests that providing relatively simple aquifer condition guidance derived from monitoring data can lead to more sustainable use of groundwater resources.

  12. Tuning Parameters in Heuristics by Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Arin, Arif; Rabadi, Ghaith; Unal, Resit

    2010-01-01

    With the growing complexity of today's large scale problems, it has become more difficult to find optimal solutions by using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, most heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into "finding the best parameter setting" for the heuristics to solve the problems efficiently and timely. The One-Factor-At-a-Time (OFAT) approach for parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, optimal solutions for multiple instances were found efficiently.
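
    As a concrete illustration of the DOE idea (not the paper's actual experiment), the sketch below builds a 2-level full factorial design over three hypothetical GA parameters, fits a main-effects-plus-interactions regression to synthetic response data, and reads off the best corner of the design region; in practice the synthetic response would be replaced by GA runs on the benchmark instances.

    ```python
    import itertools
    import numpy as np

    # Three GA parameters at two coded levels (-1, +1); ranges are illustrative.
    factors = {
        "pop_size":       (50, 200),
        "crossover_rate": (0.6, 0.9),
        "mutation_rate":  (0.01, 0.10),
    }

    # 2^3 full factorial design matrix in coded units
    design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

    def decode(row):
        # Map coded levels back to physical parameter values.
        return {name: lo + (hi - lo) * (x + 1) / 2
                for (name, (lo, hi)), x in zip(factors.items(), row)}

    # Placeholder for the expensive step: run the GA at each setting and record
    # the mean total weighted tardiness.  Synthetic data here for illustration.
    rng = np.random.default_rng(0)
    y = (100 - 5 * design[:, 0] - 8 * design[:, 1]
         + 3 * design[:, 0] * design[:, 1] + rng.normal(0, 1, len(design)))

    # Regression model: intercept + main effects + two-factor interactions
    cols = ([np.ones(len(design))] + [design[:, j] for j in range(3)]
            + [design[:, i] * design[:, j]
               for i, j in itertools.combinations(range(3), 2)])
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("effect estimates:", np.round(beta, 2))

    # Best corner of the design region according to the fitted model
    pred = X @ beta
    best = design[np.argmin(pred)]
    print("best coded setting:", best, "->", decode(best))
    ```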

  13. Current methods of epitope identification for cancer vaccine design.

    PubMed

    Cherryholmes, Gregory A; Stanton, Sasha E; Disis, Mary L

    2015-12-16

    The importance of the immune system in tumor development and progression has been emerging in many cancers. Previous cancer vaccines have not shown long-term clinical benefit, possibly because they were not designed to avoid eliciting regulatory T-cell responses that inhibit the anti-tumor immune response. This review will examine different methods of identifying epitopes derived from tumor associated antigens suitable for immunization and the steps used to design and validate peptide epitopes to improve the efficacy of anti-tumor peptide-based vaccines. Focusing on in silico prediction algorithms, we survey the advantages and disadvantages of current cancer vaccine prediction tools.

  14. Material Design, Selection, and Manufacturing Methods for System Sustainment

    SciTech Connect

    David Sowder, Jim Lula, Curtis Marshall

    2010-02-18

    This paper describes a material selection and validation process proven to be successful for manufacturing high-reliability, long-life products. The National Secure Manufacturing Center business unit of the Kansas City Plant (herein called KCP) designs and manufactures complex electrical and mechanical components used in extreme environments. The material manufacturing heritage is founded in the systems design-to-manufacturing practices that support the U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA). Material engineers at KCP work with the systems designers to recommend materials, develop test methods, perform analysis of test data, define cradle-to-grave needs, and present final selection and fielding. The KCP material engineers typically maintain cost control by utilizing commercial products when possible, but have the resources to develop and produce unique formulations as necessary. This approach is currently being used to mature technologies to manufacture materials with improved characteristics using nano-composite filler materials that will enhance system design and production. For some products the engineers plan and carry out science-based life-cycle material surveillance processes. Recent examples of the approach include refurbished manufacturing of the high voltage power supplies for cockpit displays in operational aircraft; dry film lubricant application to improve bearing life for guided munitions gyroscope gimbals; ceramic substrate design for electrical circuit manufacturing; and tailored polymeric materials for various systems. The following examples show evidence of KCP concurrent design-to-manufacturing techniques used to achieve system solutions that satisfy or exceed demanding requirements.

  15. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  16. Docking methods for structure-based library design.

    PubMed

    Cavasotto, Claudio N; Phatak, Sharangdhar S

    2011-01-01

    The drug discovery process mainly relies on the experimental high-throughput screening of huge compound libraries in their pursuit of new active compounds. However, spiraling research and development costs and unimpressive success rates have driven the development of more rational, efficient, and cost-effective methods. With the increasing availability of protein structural information, advancement in computational algorithms, and faster computing resources, in silico docking-based methods are increasingly used to design smaller and focused compound libraries in order to reduce screening efforts and costs and at the same time identify active compounds with a better chance of progressing through the optimization stages. This chapter is a primer on the various docking-based methods developed for the purpose of structure-based library design. Our aim is to elucidate some basic terms related to the docking technique and explain the methodology behind several docking-based library design methods. This chapter also aims to guide the novice computational practitioner by laying out the general steps involved for such an exercise. Selected successful case studies conclude this chapter. PMID:20981523

  18. Application of the CSCM method to the design of wedge cavities. [Conservative Supra Characteristic Method

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj; Nystrom, G. A.; Bardina, J.; Lombard, C. K.

    1987-01-01

    This paper describes the application of the conservative supra characteristic method (CSCM) to predict the flow around two-dimensional slot-injection-cooled cavities in hypersonic flow. Seven different numerical solutions are presented that model three different experimental designs. The calculations capture outer flow conditions including the effects of nozzle/lip geometry, angle of attack, nozzle inlet conditions, boundary and shear layer growth, and turbulence on the surrounding flow. The calculations were performed for analysis prior to wind tunnel testing and for sensitivity studies early in the design process. Qualitative and quantitative understanding of the flows for each of the cavity designs and design recommendations are provided. The present paper demonstrates the ability of numerical schemes, such as the CSCM method, to play a significant role in the design process.

  19. Risk-informed regulation and safety management of nuclear power plants--on the prevention of severe accidents.

    PubMed

    Himanen, Risto; Julin, Ari; Jänkälä, Kalle; Holmberg, Jan-Erik; Virolainen, Reino

    2012-11-01

    There are four operating nuclear power plant (NPP) units in Finland. The Teollisuuden Voima (TVO) power company has two 840 MWe BWR units supplied by Asea-Atom at the Olkiluoto site. The Fortum corporation (formerly IVO) has two 500 MWe VVER 440/213 units at the Loviisa site. In addition, a 1600 MWe European Pressurized Water Reactor supplied by AREVA NP (formerly the Framatome ANP--Siemens AG Consortium) is under construction at the Olkiluoto site. Recently, the Finnish Parliament ratified the government Decision in Principle that the utilities' applications to build two new NPP units are in line with the total good of the society. The Finnish utilities, the Fenno power company and the TVO company, are in the process of qualifying the reactor type for the new nuclear builds. In Finland, risk-informed applications are formally integrated into the regulatory process of NPPs, beginning in the early design phase and running through the construction and operation phases for the entire plant service time. A plant-specific full-scope probabilistic risk assessment (PRA) is required for each NPP. PRAs shall cover internal events, area events (fires, floods), and external events such as harsh weather conditions and seismic events, in all operating modes. Special attention is devoted to the use of various risk-informed PRA applications in the licensing of the Olkiluoto 3 NPP.

  20. COMPSIZE - PRELIMINARY DESIGN METHOD FOR FIBER REINFORCED COMPOSITE STRUCTURES

    NASA Technical Reports Server (NTRS)

    Eastlake, C. N.

    1994-01-01

    The Composite Structure Preliminary Sizing program, COMPSIZE, is an analytical tool which structural designers can use when doing approximate stress analysis to select or verify preliminary sizing choices for composite structural members. It is useful in the beginning stages of design concept definition, when it is helpful to have quick and convenient approximate stress analysis tools available so that a wide variety of structural configurations can be sketched out and checked for feasibility. At this stage of the design process the stress/strain analysis does not need to be particularly accurate because any configurations tentatively defined as feasible will later be analyzed in detail by stress analysis specialists. The emphasis is on fast, user-friendly methods so that rough but technically sound evaluation of a broad variety of conceptual designs can be accomplished. Analysis equations used are, in most cases, widely known basic structural analysis methods. All the equations used in this program assume elastic deformation only. The default material selection is intermediate strength graphite/epoxy laid up in a quasi-isotropic laminate. A general flat laminate analysis subroutine is included for analyzing arbitrary laminates. However, COMPSIZE should be sufficient for most users to presume a quasi-isotropic layup and use the familiar basic structural analysis methods for isotropic materials, after estimating an appropriate elastic modulus. Homogeneous materials can be analyzed as simplified cases. The COMPSIZE program is written in IBM BASICA. The program format is interactive. It was designed on an IBM Personal Computer operating under DOS with a central memory requirement of approximately 128K. It has been implemented on an IBM compatible with GW-BASIC under DOS 3.2. COMPSIZE was developed in 1985.

  1. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... 53, as amended on August 31, 2011 (76 FR 54326-54341). The new equivalent methods are automated... beta radiation attenuation. The newly designated equivalent methods are identified as follows: EQPM-0912-204, ``Teledyne Model 602 Beta\\PLUS\\ Particle Measurement System'' and ``SWAM 5a Dual...

  2. Preliminary demonstration of a robust controller design method

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1980-01-01

    Alternative computational procedures for obtaining a feedback control law which yields a control signal based on measurable quantities are evaluated. The three methods evaluated are: (1) the standard linear quadratic regulator design model; (2) minimization of the norm of the feedback matrix K via nonlinear programming, subject to the constraint that the closed-loop eigenvalues be in a specified domain in the complex plane; and (3) maximization of the angles between the closed-loop eigenvectors in combination with minimization of the norm of K, also via constrained nonlinear programming. The third, or robust, design method was chosen to yield a closed-loop system whose eigenvalues are insensitive to small changes in the A and B matrices. The relationship between orthogonality of closed-loop eigenvectors and the sensitivity of closed-loop eigenvalues is described. Computer programs are described.
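
    The first of the three procedures, the standard LQR design, can be sketched compactly. The example below uses an illustrative double-integrator plant rather than the paper's models, and assumes scipy is available.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Illustrative double-integrator plant (not the paper's aircraft model).
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])   # state weighting
    R = np.array([[1.0]])      # control weighting

    # Standard LQR: solve the continuous algebraic Riccati equation, then
    # K = R^-1 B^T P gives the state-feedback law u = -K x.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    print("feedback gain K:", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```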

  3. Helicopter flight-control design using an H(2) method

    NASA Technical Reports Server (NTRS)

    Takahashi, Marc D.

    1991-01-01

    Rate-command and attitude-command flight-control designs for a UH-60 helicopter in hover are presented and were synthesized using an H(2) method. Using weight functions, this method allows the direct shaping of the singular values of the sensitivity, complementary sensitivity, and control input transfer-function matrices to give acceptable feedback properties. The designs were implemented on the Vertical Motion Simulator, and four low-speed hover tasks were used to evaluate the control system characteristics. The pilot comments from the accel-decel, bob-up, hovering turn, and side-step tasks indicated good decoupling and quick response characteristics. However, an underlying roll PIO tendency was found to exist away from the hover condition, which was caused by a flap regressing mode with insufficient damping.

  4. National Tuberculosis Genotyping and Surveillance Network: Design and Methods

    PubMed Central

    Braden, Christopher R.; Schable, Barbara A.; Onorato, Ida M.

    2002-01-01

    The National Tuberculosis Genotyping and Surveillance Network was established in 1996 to perform a 5-year, prospective study of the usefulness of genotyping Mycobacterium tuberculosis isolates to tuberculosis control programs. Seven sentinel sites identified all new cases of tuberculosis, collected information on patients and contacts, and obtained patient isolates. Seven genotyping laboratories performed DNA fingerprinting analysis by the international standard IS6110 method. BioImage Whole Band Analyzer software was used to analyze patterns, and distinct patterns were assigned unique designations. Isolates with six or fewer bands on IS6110 patterns were also spoligotyped. Patient data and genotyping designations were entered in a relational database and merged with selected variables from the national surveillance database. In two related databases, we compiled the results of routine contact investigations and the results of investigations of the relationships of patients who had isolates with matching genotypes. We describe the methods used in the study. PMID:12453342

  5. Optical design and active optics methods in astronomy

    NASA Astrophysics Data System (ADS)

    Lemaitre, Gerard R.

    2013-03-01

    Optical designs for astronomy involve implementation of active optics and adaptive optics from the X-ray to the infrared. Developments and results of active optics methods for telescopes, spectrographs and coronagraph planet finders are presented. The high accuracy and remarkable smoothness of surfaces generated by active optics methods also allow the elaboration of new optical design types with highly aspheric and/or non-axisymmetric surfaces. Depending on the goal and performance requested for a deformable optical surface, analytical investigations are carried out with one of the various facets of elasticity theory: small-deformation thin plate theory, large-deformation thin plate theory, shallow spherical shell theory, or weakly conical shell theory. The resulting thickness distribution and associated bending force boundaries can be refined further with finite element analysis.

  6. A Requirements-Driven Optimization Method for Acoustic Treatment Design

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    2016-01-01

    Acoustic treatment designers have long been able to target specific noise sources inside turbofan engines. Facesheet porosity and cavity depth are key design variables of perforate-over-honeycomb liners that determine levels of noise suppression as well as the frequencies at which suppression occurs. Layers of these structures can be combined to create a robust attenuation spectrum that covers a wide range of frequencies. Looking to the future, rapidly-emerging additive manufacturing technologies are enabling new liners with multiple degrees of freedom, and new adaptive liners with variable impedance are showing promise. More than ever, there is greater flexibility and freedom in liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. The subject of this paper is an analytical method that derives the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground.

  7. Subsonic panel method for designing wing surfaces from pressure distribution

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.; Hawk, J. D.

    1983-01-01

    An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
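
    The extrapolated-potential-to-pressure step admits a compact illustration. The sketch below differentiates a surface potential along arc length to obtain tangential velocity and applies Bernoulli's equation for incompressible flow; the arrays are hypothetical stand-ins for panel-method control-point data.

    ```python
    import numpy as np

    def pressure_from_potential(phi, s, V_inf=1.0):
        # phi : potential at surface control points (hypothetical input)
        # s   : arc length of each control point along the surface
        # Tangential velocity is the arc-length derivative of the potential;
        # Cp = 1 - (V/V_inf)^2 follows from Bernoulli for incompressible flow.
        V = np.gradient(phi, s)
        return 1.0 - (V / V_inf)**2

    # Sanity check: the potential of a uniform stream, phi = V_inf * s,
    # should give Cp = 0 everywhere.
    s = np.linspace(0.0, 1.0, 11)
    print(pressure_from_potential(1.0 * s, s))   # ~[0, 0, ..., 0]
    ```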

  8. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two different variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and are evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a system optimization problem into several subsystem optimizations that may be executed concurrently, plus a system-level optimization that coordinates the subsystem optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.

  9. Improve emergency light design with lumens/sq ft method.

    PubMed

    Sieron, R L

    1981-05-01

    In summary, the "Lumens/sq ft Method" outlined here is proposed as a guideline for designing emergency lighting systems such as in the accompanying examples. With this method, the total lumens delivered by the emergency lighting units in the area is divided by the floor area (in sq ft) to yield a figure of merit. The author proposes that a range from 0.25 to 1.0 lumens/sq ft be specified for emergency lighting. The lower value may be used for non-critical areas (for example, warehouses), while the higher value would be used for areas such as school corridors and hospitals.
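
    The figure of merit is simple enough to encode directly; a minimal sketch using the 0.25-1.0 lumens/sq ft band proposed in the text (the function name and example values are illustrative):

    ```python
    def emergency_lighting_check(total_unit_lumens, floor_area_sqft, critical=False):
        # Lumens/sq ft figure of merit with the 0.25-1.0 band from the text:
        # 0.25 lumens/sq ft for non-critical areas (e.g., warehouses),
        # 1.0 for critical areas such as school corridors and hospitals.
        figure_of_merit = total_unit_lumens / floor_area_sqft
        required = 1.0 if critical else 0.25
        return figure_of_merit, figure_of_merit >= required

    # Example: four units of 450 lumens each in an 1800 sq ft warehouse
    fom, ok = emergency_lighting_check(4 * 450, 1800, critical=False)
    print(f"{fom:.2f} lumens/sq ft -> {'adequate' if ok else 'inadequate'}")
    ```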

  10. Synthesis of aircraft structures using integrated design and analysis methods

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Goetz, R. C.

    1978-01-01

    Systematic research to develop and validate methods for structural sizing of an airframe designed with the use of composite materials and active controls is reported. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate a structural sizing and associated active control system that is optimal with respect to a given merit function and constrained by strength and aeroelasticity requirements.

  11. Bayesian methods for design and analysis of safety trials.

    PubMed

    Price, Karen L; Xia, H Amy; Lakshminarayanan, Mani; Madigan, David; Manner, David; Scott, John; Stamey, James D; Thompson, Laura

    2014-01-01

    Safety assessment is essential throughout medical product development. There has been increased awareness of the importance of safety trials recently, in part due to US Food and Drug Administration guidance related to thorough assessment of cardiovascular risk in the treatment of type 2 diabetes. Bayesian methods provide great promise for improving the conduct of safety trials. In this paper, the safety subteam of the Drug Information Association Bayesian Scientific Working Group evaluates challenges associated with current methods for designing and analyzing safety trials and provides an overview of several suggested Bayesian opportunities that may increase the efficiency of safety trials, along with relevant case examples.

  12. Asymmetric MRI magnet design using a hybrid numerical method.

    PubMed

    Zhao, H; Crozier, S; Doddrell, D M

    1999-12-01

    This paper describes a hybrid numerical method for the design of asymmetric magnetic resonance imaging magnet systems. The problem is formulated as a field synthesis and the desired current density on the surface of a cylinder is first calculated by solving a Fredholm equation of the first kind. Nonlinear optimization methods are then invoked to fit practical magnet coils to the desired current density. The field calculations are performed using a semi-analytical method. A new type of asymmetric magnet is proposed in this work. The asymmetric MRI magnet allows the diameter spherical imaging volume to be positioned close to one end of the magnet. The main advantages of making the magnet asymmetric include the potential to reduce the perception of claustrophobia for the patient, better access to the patient by attending physicians, and the potential for reduced peripheral nerve stimulation due to the gradient coil configuration. The results highlight that the method can be used to obtain an asymmetric MRI magnet structure and a very homogeneous magnetic field over the central imaging volume in clinical systems of approximately 1.2 m in length. Unshielded designs are the focus of this work. This method is flexible and may be applied to magnets of other geometries.
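
    The field-synthesis step, a discretized Fredholm equation of the first kind, is ill-posed and is commonly stabilized by regularization. The sketch below shows a generic Tikhonov-regularized least-squares solve on a toy kernel; the geometry, kernel, and regularization weight are illustrative assumptions, not the authors' formulation.

    ```python
    import numpy as np

    def solve_fredholm_first_kind(K, b, lam=1e-6):
        # Discretized Fredholm equation of the first kind, K j = b, solved by
        # Tikhonov regularization: j = argmin ||K j - b||^2 + lam ||j||^2.
        n = K.shape[1]
        return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ b)

    # Toy illustration: a smooth kernel mapping a surface current density to an
    # axial field profile (purely illustrative geometry and units).
    z_field = np.linspace(-0.3, 0.3, 30)   # points where the field is prescribed
    z_coil = np.linspace(-0.6, 0.6, 60)    # current-density discretization
    K = 1.0 / (1.0 + 40.0 * (z_field[:, None] - z_coil[None, :])**2)
    b_target = np.ones_like(z_field)       # homogeneous target field

    j = solve_fredholm_first_kind(K, b_target, lam=1e-4)
    print("residual field error:", np.max(np.abs(K @ j - b_target)))
    ```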

  13. Design Methods for Load-bearing Elements from Crosslaminated Timber

    NASA Astrophysics Data System (ADS)

    Vilguts, A.; Serdjuks, D.; Goremikins, V.

    2015-11-01

    Cross-laminated timber is an environmentally friendly material, which possesses a decreased level of anisotropy in comparison with solid and glued timber. Cross-laminated timber can be used for load-bearing walls and slabs of multi-storey timber buildings as well as decking structures of pedestrian and road bridges. Design methods for cross-laminated timber elements subjected to bending, and to compression with bending, are considered. The presented methods were experimentally validated and verified by FEM. Two cross-laminated timber slabs were tested under static load. Pine wood was chosen as the board material. The design scheme of the considered plates was a freely supported beam with a span of 1.9 m loaded by a uniformly distributed load. The width of the plates was equal to 1 m. The considered cross-laminated timber plates were also analysed by FEM. A comparison of the stresses acting in the edge fibres of the plates and the maximum vertical displacements shows that both considered methods can be used for engineering calculations. The difference between the results obtained experimentally and analytically ranges from 2 to 31%. The difference between results obtained by the effective strength and stiffness method and the transformed sections method was not significant.
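
    The engineering check implied by this design scheme (a simply supported 1-m-wide strip spanning 1.9 m under a uniformly distributed load) reduces to standard beam formulas. In the sketch below the load, thickness, and effective modulus are assumed values, and the section properties stand in for those produced by the effective strength and stiffness or transformed sections methods.

    ```python
    # Simply supported CLT strip under a uniformly distributed load: the design
    # scheme described in the text (span 1.9 m, width 1 m).  Effective section
    # properties would come from the transformed-section or effective-strength-
    # and-stiffness methods; the numbers here are illustrative.
    L = 1.9         # span, m
    b = 1.0         # plate width, m
    w = 5.0e3       # uniformly distributed load, N/m (assumed)
    E_eff = 11.0e9  # effective modulus of elasticity, Pa (assumed, pine)
    h = 0.10        # total plate thickness, m (assumed)

    I = b * h**3 / 12.0    # second moment of area, m^4
    W = b * h**2 / 6.0     # section modulus, m^3

    M_max = w * L**2 / 8.0                     # midspan bending moment, N*m
    sigma = M_max / W                          # edge-fibre bending stress, Pa
    delta = 5 * w * L**4 / (384 * E_eff * I)   # midspan deflection, m

    print(f"sigma = {sigma / 1e6:.2f} MPa, delta = {delta * 1000:.2f} mm")
    ```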

  14. Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods

    NASA Technical Reports Server (NTRS)

    Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark

    2002-01-01

    Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.

  15. A MODEL AND CONTROLLER REDUCTION METHOD FOR ROBUST CONTROL DESIGN.

    SciTech Connect

    YUE,M.; SCHLUETER,R.

    2003-10-20

    A bifurcation subsystem based model and controller reduction approach is presented. Using this approach, a robust µ-synthesis SVC control is designed for interarea oscillation and voltage control, based on a small reduced-order bifurcation subsystem model of the full system. The control synthesis problem is posed by structured uncertainty modeling and control configuration formulation using the bifurcation subsystem knowledge of the nature of the interarea oscillation caused by a specific uncertainty parameter. The bifurcation subsystem method plays a key role in this paper because it provides (1) a bifurcation parameter for uncertainty modeling; (2) a criterion to reduce the order of the resulting MSVC control; and (3) a low-order model for a bifurcation subsystem based SVC (BMSVC) design. The use of the model of the bifurcation subsystem to produce a low-order controller simplifies the control design and reduces the computational effort so significantly that the robust µ-synthesis control can be applied to large systems where the computation would otherwise make robust control design impractical. The RGA analysis and time simulation show that the reduced BMSVC control design captures the center manifold dynamics and uncertainty structure of the full system model and is capable of stabilizing the full system and achieving satisfactory control performance.

  16. Towards Robust Designs Via Multiple-Objective Optimization Methods

    NASA Technical Reports Server (NTRS)

    Man Mohan, Rai

    2006-01-01

    An evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer, wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil, the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.

  17. Design and implementation of visualization methods for the CHANGES Spatial Decision Support System

    NASA Astrophysics Data System (ADS)

    Cristal, Irina; van Westen, Cees; Bakker, Wim; Greiving, Stefan

    2014-05-01

    The CHANGES Spatial Decision Support System (SDSS) is a web-based system aimed at risk assessment and the evaluation of optimal risk reduction alternatives at the local level, serving as a decision support tool in long-term natural risk management. The SDSS uses multidimensional information, integrating thematic, spatial, temporal and documentary data. The role of visualization in this context becomes of vital importance for efficiently representing each dimension. This multidimensional aspect of the risk information required by the system, combined with the diversity of the end-users, imposes the use of sophisticated visualization methods and tools. The key goal of the present work is to efficiently exploit the large amount of data in relation to the needs of the end-user, utilizing proper visualization techniques. Three main tasks have been accomplished for this purpose: categorization of the end-users, definition of the system's modules, and data definition. The graphical representation of the data and the visualization tools were designed to be relevant to the data type and the purpose of the analysis. Depending on the end-user category, each user should have access to different modules of the system and thus to the proper visualization environment. The technologies used for the development of the visualization component combine the latest and most innovative open-source JavaScript frameworks, such as OpenLayers 2.13.1, ExtJS 4 and GeoExt 2. Moreover, the model-view-controller (MVC) pattern is used in order to ensure flexibility of the system at the implementation level. Using the above technologies, the visualization techniques implemented so far offer interactive map navigation, querying and comparison tools. The map comparison tools are of great importance within the SDSS and include the following: a swiping tool for comparison of different data at the same location; raster subtraction for comparison of the same phenomena varying in time; and linked views for comparison

  18. Treatment of Passive Component Reliability in Risk-Informed Safety Margin Characterization FY 2010 Report

    SciTech Connect

    Robert W Youngblood

    2010-09-01

    The Risk-Informed Safety Margin Characterization (RISMC) pathway is a set of activities defined under the U.S. Department of Energy (DOE) Light Water Reactor Sustainability Program. The overarching objective of RISMC is to support plant life-extension decision-making by providing a state-of-knowledge characterization of safety margins in key systems, structures, and components (SSCs). A technical challenge at the core of this effort is to establish the conceptual and technical feasibility of analyzing safety margin in a risk-informed way, which, unlike conventionally defined deterministic margin analysis, is founded on probabilistic characterizations of SSC performance.

  19. Examining trust factors in online food risk information: The case of unpasteurized or 'raw' milk.

    PubMed

    Sillence, Elizabeth; Hardy, Claire; Medeiros, Lydia C; LeJeune, Jeffrey T

    2016-04-01

    The internet has become an increasingly important way of communicating with consumers about food risk information. However, relatively little is known about how consumers evaluate and come to trust the information they encounter online. Using the example of unpasteurized or raw milk, this paper presents two studies exploring the trust factors associated with online information about the risks and benefits of raw milk consumption. In the first study, eye-tracking data were collected from 33 pasteurised-milk consumers whilst they viewed six different milk-related websites. A descriptive analysis of the eye-tracking data was conducted to explore viewing patterns. The results revealed the importance of images as a way of capturing initial attention and foregrounding other features, and highlighted the significance of introductory text within a homepage. In the second, qualitative study, 41 consumers, some of whom drank raw milk, viewed a selection of milk-related websites before participating in either a group discussion or interview. Seventeen of the participants also took part in a follow-up telephone interview 2 weeks later. The qualitative data support the importance of good design whilst noting that balance, authorship agenda, the nature of evidence and personal relevance were also key factors affecting consumers' trust judgements. The results of both studies provide support for a staged approach to online trust in which consumers engage in a more rapid, heuristic assessment of a site before moving on to a more in-depth evaluation of the information available. Findings are discussed in relation to the development of trustworthy online food safety resources. PMID:26792772

  20. Bayesian methods for the design and analysis of noninferiority trials.

    PubMed

    Gamalo-Siebers, Margaret; Gao, Aijun; Lakshminarayanan, Mani; Liu, Guanghan; Natanegara, Fanni; Railkar, Radha; Schmidli, Heinz; Song, Guochen

    2016-01-01

    The gold standard for evaluating treatment efficacy of a medical product is a placebo-controlled trial. However, when the use of placebo is considered to be unethical or impractical, a viable alternative for evaluating treatment efficacy is through a noninferiority (NI) study where a test treatment is compared to an active control treatment. The minimal objective of such a study is to determine whether the test treatment is superior to placebo. An assumption is made that if the active control treatment remains efficacious, as was observed when it was compared against placebo, then a test treatment that has comparable efficacy with the active control, within a certain range, must also be superior to placebo. Because of this assumption, the design, implementation, and analysis of NI trials present challenges for sponsors and regulators. In designing and analyzing NI trials, substantial historical data are often required on the active control treatment and placebo. Bayesian approaches provide a natural framework for synthesizing the historical data in the form of prior distributions that can effectively be used in design and analysis of a NI clinical trial. Despite a flurry of recent research activities in the area of Bayesian approaches in medical product development, there are still substantial gaps in recognition and acceptance of Bayesian approaches in NI trial design and analysis. The Bayesian Scientific Working Group of the Drug Information Association provides a coordinated effort to target the education and implementation issues on Bayesian approaches for NI trials. In this article, we provide a review of both frequentist and Bayesian approaches in NI trials, and elaborate on the implementation for two common Bayesian methods including hierarchical prior method and meta-analytic-predictive approach. Simulations are conducted to investigate the properties of the Bayesian methods, and some real clinical trial examples are presented for illustration.
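
    The core Bayesian NI computation can be sketched in the simplest normal-normal conjugate form: a prior on the treatment difference (in practice synthesized from historical data, e.g. via a meta-analytic-predictive prior) is updated with the current trial estimate, and NI is concluded if the posterior probability that the difference exceeds the margin is high enough. All numbers below are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import norm

    def posterior_normal(prior_mean, prior_sd, est, se):
        # Conjugate normal-normal update for a mean with known sampling SE.
        w0, w1 = 1.0 / prior_sd**2, 1.0 / se**2
        post_var = 1.0 / (w0 + w1)
        post_mean = post_var * (w0 * prior_mean + w1 * est)
        return post_mean, np.sqrt(post_var)

    # Prior on the treatment difference (test - control); in practice this
    # would be synthesized from historical data.  Values are hypothetical.
    prior_mean, prior_sd = 0.0, 0.30

    # Current NI trial estimate of the difference and its SE (hypothetical).
    diff_hat, diff_se = -0.05, 0.08
    margin = -0.15  # noninferiority margin on the difference scale

    m, s = posterior_normal(prior_mean, prior_sd, diff_hat, diff_se)
    p_ni = 1.0 - norm.cdf(margin, loc=m, scale=s)  # P(difference > margin | data)
    print(f"posterior P(diff > margin) = {p_ni:.3f}")
    print("conclude noninferiority" if p_ni > 0.975 else "noninferiority not shown")
    ```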

  1. Improved Method of Design for Folding Inflatable Shells

    NASA Technical Reports Server (NTRS)

    Johnson, Christopher J.

    2009-01-01

    An improved method of designing complexly shaped inflatable shells to be assembled from gores was conceived for original application to the inflatable outer shell of a developmental habitable spacecraft module having a cylindrical mid-length section with toroidal end caps. The method is also applicable to inflatable shells of various shapes for terrestrial use. The method addresses problems associated with the assembly, folding, transport, and deployment of inflatable shells that may comprise multiple layers and have complex shapes that can include such doubly curved surfaces as toroids and spheres. One particularly difficult problem is that of mathematically defining fold lines on a gore pattern in a double-curvature region. Moreover, because the fold lines in a double-curvature region tend to be curved, there is a practical problem of how to implement the folds. Another problem is that of modifying the basic gore shapes and sizes for the various layers so that when they are folded as part of the integral structure, they do not mechanically interfere with each other at the fold lines. Heretofore, it has been a common practice to design an inflatable shell to be assembled in the deployed configuration, without regard for the need to fold it into compact form. Typically, the result has been that folding has been a difficult, time-consuming process.

  2. Risk-Informed Safety Margin Characterization Case Study: Selection of Electrical Equipment to Be Subjected to Environmental Qualification

    SciTech Connect

    D. P. Blanchard; R. W. Youngblood

    2014-06-01

    The Risk-Informed Safety Margin Characterization (RISMC) pathway of the DOE’s Light Water Reactor Sustainability (LWRS) program focuses on advancing the state of the art in safety analysis and risk assessment to support decision-making on nuclear power plant operation well beyond the originally designed lifetime of the plants (i.e., beyond 60 years). Among the issues being addressed in RISMC is the significance of SSC aging and how confident we are about our understanding of its impact, as the SSCs age, on the margin between the loads SSCs are expected to see during normal operation and accident conditions and the SSC capacities (their ability to resist those loads). In this paper, a summary is provided of a case study that examines SSC aging from an environmental qualification (EQ) perspective. The case study illustrates how the state of knowledge regarding SSC margin can be characterized given the overall integrated plant design, and was developed to demonstrate a method for deciding on which cables to focus, which cables are not so important from an environmental qualification margin standpoint, and what plant design features or operating characteristics determine the role that environmental qualification plays in establishing a safety case on which decisions regarding margin can be made. The selection of cables for which margin with respect to aging and environmental challenges must be demonstrated uses a technique known as Prevention Analysis. Prevention Analysis is a Boolean method for optimal selection of SSCs (that is, those combinations of SSCs both necessary and sufficient to meet a predetermined selection criterion) in a manner that allows one to demonstrate plant-level safety with the collection of selected SSCs alone. Choosing the set of SSCs that is necessary and sufficient to satisfy the safety objectives, and demonstrating that the safety objectives can be met, effectively determines where resources are best allocated to assure SSC

  3. A geometric design method for side-stream distillation columns

    SciTech Connect

    Rooks, R.E.; Malone, M.F.; Doherty, M.F.

    1996-10-01

    A side-stream distillation column may replace two simple columns for some applications, sometimes at considerable savings in energy and investment. This paper describes a geometric method for the design of side-stream columns; the method provides rapid estimates of equipment size and utility requirements. Unlike previous approaches, the geometric method is applicable to nonideal and azeotropic mixtures. Several example problems for both ideal and nonideal mixtures, including azeotropic mixtures containing distillation boundaries, are given. The authors make use of the fact that azeotropes or pure components whose classification in the residue curve map is a saddle can be removed as side-stream products. Significant process simplifications are found among some alternatives in example problems, leading to flow sheets with fewer units and a substantial savings in vapor rate.

  4. Design of time interval generator based on hybrid counting method

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some "off-the-shelf" TIGs can be employed, the need for a custom test system or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on the architecture of the Tapped Delay Line (TDL), whose delay cells are down to a few tens of picoseconds. In this case, FPGA-based TIGs with a high delay step are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods to realize an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced in particular. The special structure of the multiplexer is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.
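
    The reconstruction arithmetic behind hybrid counting is straightforward: a coarse synchronous counter spans the wide range while the TDL interpolator locates each edge inside a clock period. A minimal sketch, with the clock period and tap delay assumed for illustration (the 11 ps step echoes the resolution quoted above):

    ```python
    # Hybrid counting reconstruction for a time-interval generator/TDC:
    # a coarse counter runs at the system clock and spans the wide range,
    # while tapped-delay-line (TDL) interpolators resolve the positions of
    # the start/stop edges inside a clock period.  Numbers are illustrative.
    T_CLK = 2.0e-9   # coarse clock period, s (assumed 500 MHz)
    T_TAP = 11e-12   # fine delay per TDL tap, s (resolution quoted in the text)

    def interval(coarse_start, tap_start, coarse_stop, tap_stop):
        # Time between start and stop edges, each recorded as
        # (coarse counter value, fine tap index within the clock period).
        t_start = coarse_start * T_CLK + tap_start * T_TAP
        t_stop = coarse_stop * T_CLK + tap_stop * T_TAP
        return t_stop - t_start

    # Example: edges roughly 1 us apart with sub-period fine offsets
    print(f"{interval(100, 17, 600, 140) * 1e9:.4f} ns")
    ```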

  5. Sequence design in lattice models by graph theoretical methods

    NASA Astrophysics Data System (ADS)

    Sanjeev, B. S.; Patra, S. M.; Vishveshwara, S.

    2001-01-01

    A general strategy has been developed based on graph theoretical methods, for finding amino acid sequences that take up a desired conformation as the native state. This problem of inverse design has been addressed by assigning topological indices for the monomer sites (vertices) of the polymer on a 3×3×3 cubic lattice. This is a simple design strategy, which takes into account only the topology of the target protein and identifies the best sequence for a given composition. The procedure allows the design of a good sequence for a target native state by assigning weights for the vertices on a lattice site in a given conformation. It is seen across a variety of conformations that the predicted sequences perform well both in sequence and in conformation space, in identifying the target conformation as native state for a fixed composition of amino acids. Although the method is tested in the framework of the HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] it can be used in any context if proper potential functions are available, since the procedure derives unique weights for all the sites (vertices, nodes) of the polymer chain of a chosen conformation (graph).
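
    A toy version of the degree-based idea can be written directly: for a fixed self-avoiding conformation on the cubic lattice, count non-bonded nearest-neighbour contacts per site and assign the hydrophobic (H) residues of a fixed composition to the most-contacted (most buried) sites. This is a simplified stand-in for the authors' topological indices, with an illustrative eight-site conformation.

    ```python
    def contact_degrees(coords):
        # Non-bonded nearest-neighbour contacts per site of a lattice chain.
        index = {c: i for i, c in enumerate(coords)}
        deg = [0] * len(coords)
        for i, c in enumerate(coords):
            for d in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                j = index.get((c[0]+d[0], c[1]+d[1], c[2]+d[2]))
                if j is not None and abs(i - j) > 1:   # exclude chain bonds
                    deg[i] += 1
        return deg

    # A short self-avoiding walk on the cubic lattice (illustrative conformation).
    chain = [(0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,1,1), (1,1,1), (1,0,1), (0,0,1)]
    deg = contact_degrees(chain)

    # Fixed composition: give the n_h hydrophobic (H) residues to the sites
    # with the most non-bonded contacts; the remaining sites are polar (P).
    n_h = 4
    order = sorted(range(len(chain)), key=lambda i: -deg[i])
    seq = ["P"] * len(chain)
    for i in order[:n_h]:
        seq[i] = "H"
    print("contacts per site:", deg)
    print("designed sequence:", "".join(seq))
    ```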

  6. A kind of optimizing design method of progressive addition lenses

    NASA Astrophysics Data System (ADS)

    Tang, Yunhai; Qian, Lin; Wu, Quanying; Yu, Jingchi; Chen, Hao; Wang, Yuanyuan

    2010-10-01

    Progressive addition lenses are a kind of ophthalmic lens with a freeform surface. The surface curvature of a progressive addition lens varies gradually from a minimum value in the upper, distance-viewing area to a maximum value in the lower, near-viewing area. An optimizing design method for progressive addition lenses is proposed that improves optical quality by modifying the vector heights of the initially designed lens surface. The relationship among mean power, cylinder power and the vector heights of the surface is deduced, and an optimizing factor is obtained. The vector heights of the initially designed surface are used to calculate the plots of mean power and cylinder power based on the principles of differential geometry. The mean power plot is changed by adjusting the optimizing factor. Alternatively, a new mean power plot can be derived by shifting the mean power of one selected region to another and then interpolating and smoothing. A partial differential equation of the elliptic type is formulated based on the changed mean power. The solution of the equation is obtained by an iterative method, and the optimized vector heights of the surface are thus determined. Compared with the original lens, the region near the nasal side of the distance-vision portion in which the astigmatism is less than 0.5 D has become broader, and the clear regions of the distance-vision and near-vision portions are wider.
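
    The stated relationship between vector heights and the power plots can be illustrated under a small-slope (paraxial) approximation, in which surface curvatures are second derivatives of the height map, mean power is (n-1)(z_xx+z_yy)/2, and cylinder is (n-1)*sqrt((z_xx-z_yy)^2+4*z_xy^2). The sketch below checks these formulas on a spherical cap; the grid, refractive index, and radius are assumed values, not the authors' algorithm.

    ```python
    import numpy as np

    def power_maps(z, dx, n=1.53):
        # Mean power and cylinder maps from a surface height map z(x, y),
        # using the small-slope approximation in which surface curvatures
        # are second derivatives of the height (all lengths in metres).
        z_y, z_x = np.gradient(z, dx)      # first derivatives (rows = y, cols = x)
        z_yy = np.gradient(z_y, dx)[0]     # d2z/dy2
        z_xy, z_xx = np.gradient(z_x, dx)  # d2z/dxdy and d2z/dx2
        mean_power = (n - 1.0) * (z_xx + z_yy) / 2.0
        cylinder = (n - 1.0) * np.sqrt((z_xx - z_yy)**2 + 4.0 * z_xy**2)
        return mean_power, cylinder

    # Check on a sphere-like cap z = (x^2 + y^2) / (2R): mean power should be
    # ~(n-1)/R everywhere and cylinder ~0 (away from the grid edges).
    R = 0.5  # metres -> (n-1)/R ~ 1.06 dioptres
    x = np.linspace(-0.03, 0.03, 121)
    X, Y = np.meshgrid(x, x)
    mp, cyl = power_maps((X**2 + Y**2) / (2 * R), dx=x[1] - x[0])
    print(f"mean power ~ {mp[60, 60]:.3f} D, cylinder ~ {cyl[60, 60]:.4f} D")
    ```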

  7. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
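
    The reliability reasoning in the abstract is easy to reproduce with a toy Monte Carlo model: estimate a failure probability, then shrink the scatter of each random variable in turn and observe which reduction lowers the failure probability most. The limit-state function and distributions below are invented for illustration and are not the smart-wing model.

    ```python
    # Toy Monte Carlo reliability study: a failure probability and its response
    # to reducing the scatter of each random variable. All numbers are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000

    def failure_prob(stiff_sd, load_sd):
        stiffness = rng.normal(10.0, stiff_sd, N)   # e.g. material/fab variable
        load = rng.normal(6.0, load_sd, N)          # e.g. control/load variable
        tip_angle = load / stiffness                # toy response model
        return np.mean(tip_angle > 0.75)            # failure: exceeds upper limit

    print(f"baseline Pf            = {failure_prob(1.0, 1.0):.4f}")
    print(f"stiffness scatter -20% = {failure_prob(0.8, 1.0):.4f}")
    print(f"load scatter      -20% = {failure_prob(1.0, 0.8):.4f}")
    ```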

  8. Modified method to improve the design of Petlyuk distillation columns

    PubMed Central

    2014-01-01

    Background: A response surface analysis was performed to study the effect of the composition and feed thermal conditions of ternary mixtures on the number of theoretical stages and the energy consumption of Petlyuk columns. A modification of the pre-design algorithm was necessary for this purpose. Results: The modified algorithm provided feasible results in 100% of the studied cases, compared with only 8.89% for the current algorithm. The proposed algorithm allowed us to attain the desired separations regardless of the type of mixture and the operating conditions in the feed stream, something that was not possible with the traditional pre-design method. The results showed that the type of mixture had a great influence on the number of stages and on energy consumption. A higher number of stages and a lower energy consumption were attained with mixtures rich in the light component, while higher energy consumption occurred when the mixture was rich in the heavy component. Conclusions: The proposed strategy expands the search for an optimal design of Petlyuk columns within a feasible region, which allows us to find a feasible design that meets output specifications with low thermal loads. PMID:25061476

  9. Rapid and simple method of qPCR primer design.

    PubMed

    Thornton, Brenda; Basu, Chhandak

    2015-01-01

    Quantitative real-time polymerase chain reaction (qPCR) is a powerful tool for the analysis and quantification of gene expression. It is advantageous compared to the traditional gel-based method of PCR, as gene expression can be visualized in real time on a computer. In qPCR, a reporter dye system is used that binds the DNA region of interest and detects DNA amplification. Some of the popular reporter systems used in qPCR are Molecular Beacon®, SYBR Green®, and TaqMan®. However, the success of qPCR depends on optimal primer design. Among the considerations for primer design are GC content, primer self-dimerization, and secondary structure formation. Freely available software can be used for qPCR primer design. Here we show how to use some freely available web-based software programs (such as PrimerQuest®, UNAFold®, and Beacon Designer®) to design qPCR primers.
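
    The primer-design considerations listed above can be screened with a few small functions; the sketch below checks GC content, a Wallace-rule melting temperature, and a naive self-complementarity run. The thresholds in the comments and the example sequence are common rules of thumb, not values from this protocol.

    ```python
    # Quick screening checks for a candidate qPCR primer: GC content,
    # Wallace-rule Tm, and the longest self-complementary run (dimer screen).

    def gc_content(seq: str) -> float:
        seq = seq.upper()
        return 100.0 * sum(b in "GC" for b in seq) / len(seq)

    def wallace_tm(seq: str) -> int:
        """Wallace rule Tm = 2(A+T) + 4(G+C), reasonable for short primers."""
        seq = seq.upper()
        return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

    def max_self_complement(seq: str) -> int:
        """Longest common run between the primer and its reverse complement."""
        comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
        s = seq.upper()
        rc = "".join(comp[b] for b in reversed(s))
        best = 0
        for i in range(len(s)):
            for j in range(len(rc)):
                k = 0
                while i + k < len(s) and j + k < len(rc) and s[i + k] == rc[j + k]:
                    k += 1
                best = max(best, k)
        return best

    primer = "ATGCGTACGTTAGCCTAGGA"                      # example sequence
    print(f"GC% = {gc_content(primer):.1f} (aim ~40-60%)")
    print(f"Tm  = {wallace_tm(primer)} C (aim ~55-65 C)")
    print(f"longest self-complementary run = {max_self_complement(primer)} nt (flag > 4)")
    ```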

  10. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicles (RLVs) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase focused on assessing how accurately these models predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  11. Collocation methods for distillation design. 1: Model description and testing

    SciTech Connect

    Huss, R.S.; Westerberg, A.W.

    1996-05-01

    Fast and accurate distillation design requires a model that significantly reduces the problem size while accurately approximating a full-order distillation column model. This collocation model builds on the concepts of past collocation models for the design of complex real-world separation systems. Two variable transformations make this method unique. Polynomials cannot accurately fit trajectories that flatten out; in columns, flat sections occur in the middle of large column sections or where concentrations go to 0 or 1. With an exponential transformation of the tray number, which maps zero to an infinite number of trays onto the range 0 to 1, four collocation trays can accurately simulate a large column section. With a hyperbolic tangent transformation of the mole fractions, the model can simulate columns that reach high purities. Furthermore, this model uses multiple collocation elements for a column section, which is more accurate than a single high-order collocation section.
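
    Plausible forms of the two transformations are sketched below; the exact expressions used by the authors may differ, so treat these as assumptions. The exponential map compresses tray numbers from [0, ∞) onto [0, 1), and an inverse-tanh stretch makes mole-fraction profiles near 0 or 1 nearly linear so that low-order collocation polynomials can follow them.

    ```python
    # Illustrative forms of the two variable transformations described above.
    # The "scale" parameter and exact functional forms are assumptions.

    import numpy as np

    def tray_to_unit(n, scale=10.0):
        """Map tray number n in [0, inf) onto s in [0, 1)."""
        return 1.0 - np.exp(-np.asarray(n, dtype=float) / scale)

    def stretch_mole_fraction(x, eps=1e-10):
        """Inverse-tanh stretch: flat approaches to 0 or 1 become near-linear."""
        x = np.clip(np.asarray(x, dtype=float), eps, 1.0 - eps)
        return np.arctanh(2.0 * x - 1.0)

    trays = np.array([0, 5, 10, 20, 40, 80])
    print("s(tray):", np.round(tray_to_unit(trays), 3))
    print("t(x):   ", np.round(stretch_mole_fraction([0.5, 0.99, 0.999999]), 3))
    ```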

  12. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
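
    The costate (Lagrange-multiplier) gradient evaluation underlying one-shot methods can be shown on a linear toy problem: with state equation A u = b(a) and cost J = ½‖u − u*‖², one adjoint solve yields the full design gradient. The random matrices below stand in for the flow equations; the multigrid and hierarchical shape updates of the paper are not modeled.

    ```python
    # Adjoint (costate) gradient on a linear toy problem, checked against
    # finite differences. A and B are random stand-ins for the flow operator
    # and for how the design variables enter the forcing.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 8, 3                                   # state dim, design variables
    A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned "flow" operator
    B = rng.normal(size=(n, m))                   # db/da
    u_target = rng.normal(size=n)

    def cost_and_grad(a):
        u = np.linalg.solve(A, B @ a)              # state (flow) solve
        lam = np.linalg.solve(A.T, u - u_target)   # single costate solve
        return 0.5 * np.sum((u - u_target) ** 2), B.T @ lam

    a0 = rng.normal(size=m)
    J0, g = cost_and_grad(a0)
    eps = 1e-6
    fd = np.array([(cost_and_grad(a0 + eps * np.eye(m)[i])[0] - J0) / eps
                   for i in range(m)])
    print("adjoint grad:", np.round(g, 6))
    print("finite diff :", np.round(fd, 6))
    ```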

  13. Collocation methods for distillation design. 2: Applications for distillation

    SciTech Connect

    Huss, R.S.; Westerberg, A.W.

    1996-05-01

    The authors present applications of the collocation method for modeling distillation columns that they developed in a companion paper. They discuss implementation of the model, including the ASCEND (Advanced System for Computations in ENgineering Design) system, which enables one to create complex models from simple building blocks and interactively learn to solve them. They first apply the model to compute minimum reflux for a given separation task, solving nonsharp split minimum reflux problems exactly and sharp split problems approximately. They then illustrate the use of the collocation model to optimize the design of a single column capable of carrying out a prescribed set of separation tasks. The optimization picks the best column diameter and total number of trays, as well as the feed tray for each of the prescribed separations.

  14. Property Exchange Method for Designing Computer-Based Learning Game

    NASA Astrophysics Data System (ADS)

    Umetsu, Takanobu; Hirashima, Tsukasa

    Motivation is one of the most important factors in learning. Many researchers of learning environments therefore pay special attention to learning games as a remarkable approach to realizing highly motivated learning. However, making a learning game is not an easy task. Although there have been several investigations of design methods for learning games, most of them only propose guidelines for the design or characteristics that learning games should have. Developers of learning games are therefore required to have considerable knowledge and experience regarding learning and games in order to understand the guidelines or to deal with the characteristics. As a result, it is very difficult for teachers to obtain learning games fitting their learning issues.

  15. A Method for Designing CDO Conformed to Investment Parameters

    NASA Astrophysics Data System (ADS)

    Nakae, Tatsuya; Moritsu, Toshiyuki; Komoda, Norihisa

    We propose a method for designing CDOs (Collateralized Debt Obligations) that meet investor needs regarding the attributes of the CDO. It is demonstrated that adjusting the attributes of a CDO (namely, credit capability and issue amount) to investors' preferences imposes a capital loss risk on the agent. We formulate a CDO optimization problem by defining an objective function using the above risk and by setting constraints that arise from investor needs and from the risk premium paid to the agent. Our prototype experiment, in which fictitious underlying obligations and investor needs are given, verifies that CDOs can be designed without opportunity loss and dead-stock loss, and that the capital loss is no more than a thousandth of the annual payment under guarantees for small and medium-sized enterprises by a general credit guarantee institution.

  16. CHARACTERIZATION OF DATA VARIABILITY AND UNCERTAINTY: HEALTH EFFECTS ASSESSMENTS IN THE INTEGRATED RISK INFORMATION SYSTEM (IRIS)

    EPA Science Inventory

    In response to a Congressional directive contained in HR 106-379 regarding EPA's appropriations for FY2000, EPA has undertaken an evaluation of the characterization of data variability and uncertainty in its Integrated Risk Information System (IRIS) health effects information dat...

  17. Fault Management in an Objectives-Based/Risk-Informed View of Safety and Mission Success

    NASA Technical Reports Server (NTRS)

    Groen, Frank

    2012-01-01

    Theme of this talk: (1) Net-benefit of activities and decisions derives from objectives (and their priority) -- similarly: need for integration, value of technology/capability. (2) Risk is a lack of confidence that objectives will be met. (2a) Risk-informed decision making requires objectives. (3) Consideration of objectives is central to recent guidance.

  18. DSSTox EPA Integrated Risk Information System Structure-Index Locator File: SDF File and Documentation

    EPA Science Inventory

    EPA's Integrated Risk Information System (IRIS) database was developed and is maintained by EPA's Office of Research and Developement, National Center for Environmental Assessment. IRIS is a database of human health effects that may result from exposure to various substances fou...

  19. 76 FR 13402 - Integrated Risk Information System (IRIS); Announcement of Availability of Literature Searches...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-11

    ... AGENCY Integrated Risk Information System (IRIS); Announcement of Availability of Literature Searches for... literature searches for IRIS assessments; request for information. SUMMARY: The U.S. Environmental Protection... the literature search results and submit additional information to EPA. Request for Public...

  20. 75 FR 76982 - Integrated Risk Information System (IRIS); Announcement of Availability of Literature Searches...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-10

    ... AGENCY Integrated Risk Information System (IRIS); Announcement of Availability of Literature Searches for... literature searches for IRIS assessments; request for information. SUMMARY: The U.S. Environmental Protection... ). The public is invited to review the literature search results and submit additional information to...

  1. 77 FR 41784 - Integrated Risk Information System (IRIS); Announcement of Availability of Literature Searches...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-16

    ... AGENCY Integrated Risk Information System (IRIS); Announcement of Availability of Literature Searches for... literature search for benzo(a)pyrene; request for information. SUMMARY: The U.S. Environmental Protection... the literature search results and submit additional information to EPA. Request for Public...

  2. 75 FR 25239 - Integrated Risk Information System (IRIS); Announcement of Availability of Literature Searches...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-07

    ... AGENCY Integrated Risk Information System (IRIS); Announcement of Availability of Literature Searches for... of literature searches for IRIS assessments; request for information. SUMMARY: The U.S. Environmental... literature search results and submit additional information to EPA. Request for Public Involvement in...

  3. 77 FR 20817 - Integrated Risk Information System (IRIS); Announcement of Availability of Literature Searches...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-06

    ... AGENCY Integrated Risk Information System (IRIS); Announcement of Availability of Literature Searches for... literature searches for IRIS assessments; request for information. SUMMARY: The U.S. Environmental Protection... the literature search results and submit additional information to EPA. EPA recently added...

  4. The Effect of Genetic Risk Information and Health Risk Assessment on Compliance with Preventive Behaviors.

    ERIC Educational Resources Information Center

    Bamberg, Richard; And Others

    1990-01-01

    Results from a study of 82 males provide no statistical support and limited encouragement that genetic risk information may motivate persons to make positive changes in preventive health behaviors. Health risk assessments were used to identify subjects at risk for coronary heart disease or lung cancer because of genetic factors. (IAH)

  5. Design of braided composite tubes by numerical analysis method

    SciTech Connect

    Hamada, Hiroyuki; Fujita, Akihiro; Maekawa, Zenichiro; Nakai, Asami; Yokoyama, Atsushi

    1995-11-01

    Conventional composite laminates have very poor through-thickness strength and as a result are limited in their application to structural parts with complex shapes. In this paper, a design method for braided composite tubes is proposed. An analysis model spanning the micro model to the macro model is presented. The method is applied to predict the bending rigidity and the initial fracture stress under bending load of a braided tube. The proposed analytical procedure can be included as a unit in a CAE system for braided composites.

  6. Methods to Design and Synthesize Antibody-Drug Conjugates (ADCs)

    PubMed Central

    Yao, Houzong; Jiang, Feng; Lu, Aiping; Zhang, Ge

    2016-01-01

    Antibody-drug conjugates (ADCs) have become a promising targeted therapy strategy that combines the specificity, favorable pharmacokinetics and biodistributions of antibodies with the destructive potential of highly potent drugs. One of the biggest challenges in the development of ADCs is the application of suitable linkers for conjugating drugs to antibodies. Recently, the design and synthesis of linkers are making great progress. In this review, we present the methods that are currently used to synthesize antibody-drug conjugates by using thiols, amines, alcohols, aldehydes and azides. PMID:26848651

  7. A Method of Trajectory Design for Manned Asteroids Exploration

    NASA Astrophysics Data System (ADS)

    Gan, Q. B.; Zhang, Y.; Zhu, Z. F.; Han, W. H.; Dong, X.

    2014-11-01

    A trajectory optimization method for nuclear-propulsion manned asteroid exploration is presented. For launch windows between 2035 and 2065, the Earth-departure and Earth-return phases are first searched based on the Lambert transfer orbit. The optimal flight trajectory within the feasible regions is then selected by pruning the flight sequences. Setting the nuclear propulsion flight plan as propel-coast-propel, and taking the minimal departure mass as the objective, the nuclear propulsion flight trajectory of each phase is optimized separately using a hybrid method. Starting from the optimized local parameters of the three phases, the global parameters are then jointly optimized. Finally, the minimal-departure-mass trajectory design result is given.

  8. Novel computational methods to design protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Zhou, Alice Qinhua; O'Hern, Corey; Regan, Lynne

    2014-03-01

    Despite the abundance of structural data, we still cannot accurately predict the structural and energetic changes resulting from mutations at protein interfaces. The inadequacy of current computational approaches to the analysis and design of protein-protein interactions has hampered the development of novel therapeutic and diagnostic agents. In this work, we apply a simple physical model that includes only a minimal set of geometrical constraints, excluded volume, and attractive van der Waals interactions to 1) rank the binding affinity of mutants of tetratricopeptide repeat proteins with their cognate peptides, 2) rank the energetics of binding of small designed proteins to the hydrophobic stem region of the influenza hemagglutinin protein, and 3) predict the stability of T4 lysozyme and staphylococcal nuclease mutants. This work will not only lead to a fundamental understanding of protein-protein interactions, but also to the development of efficient computational methods to rationally design protein interfaces with tunable specificity and affinity, and numerous applications in biomedicine. NSF DMR-1006537, PHY-1019147, Raymond and Beverly Sackler Institute for Biological, Physical and Engineering Sciences, and Howard Hughes Medical Institute.

  9. Cox regression methods for two-stage randomization designs.

    PubMed

    Lokhnygina, Yuliya; Helterbrand, Jeffrey D

    2007-06-01

    Two-stage randomization designs (TSRD) are becoming increasingly common in oncology and AIDS clinical trials as they make more efficient use of study participants to examine therapeutic regimens. In these designs patients are initially randomized to an induction treatment, followed by randomization to a maintenance treatment conditional on their induction response and consent to further study treatment. Broader acceptance of TSRDs in drug development may hinge on the ability to make appropriate intent-to-treat type inference within this design framework as to whether an experimental induction regimen is better than a standard induction regimen when maintenance treatment is fixed. Recently Lunceford, Davidian, and Tsiatis (2002, Biometrics 58, 48-57) introduced an inverse probability weighting based analytical framework for estimating survival distributions and mean restricted survival times, as well as for comparing treatment policies at landmarks in the TSRD setting. In practice Cox regression is widely used and in this article we extend the analytical framework of Lunceford et al. (2002) to derive a consistent estimator for the log hazard in the Cox model and a robust score test to compare treatment policies. Large sample properties of these methods are derived, illustrated via a simulation study, and applied to a TSRD clinical trial. PMID:17425633

  10. An introduction to quantum chemical methods applied to drug design.

    PubMed

    Stenta, Marco; Dal Peraro, Matteo

    2011-06-01

    The advent of molecular medicine allowed identifying the malfunctioning of subcellular processes as the source of many diseases. Since then, drugs are not only discovered, but actually designed to fulfill a precise task. Modern computational techniques, based on molecular modeling, play a relevant role both in target identification and drug lead development. By flanking and integrating standard experimental techniques, modeling has proven itself as a powerful tool across the drug design process. The success of computational methods depends on a balance between cost (computation time) and accuracy. Thus, the integration of innovative theories and more powerful hardware architectures allows molecular modeling to be used as a reliable tool for rationalizing the results of experiments and accelerating the development of new drug design strategies. We present an overview of the most common quantum chemistry computational approaches, providing for each one a general theoretical introduction to highlight limitations and strong points. We then discuss recent developments in software and hardware resources, which have allowed state-of-the-art of computational quantum chemistry to be applied to drug development.

  11. Sensitivity method for integrated structure/active control law design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1987-01-01

    The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given along with the need for integrated structures/controls design algorithms. Three alternative approaches to development of an integrated design method are briefly discussed with regards to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear, quadratic cost, Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman Filter solution. Numerical results for a state space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case first wing bending natural frequency.

  12. Simplified design method for shear-valve magnetorheological dampers

    NASA Astrophysics Data System (ADS)

    Ding, Yang; Zhang, Lu; Zhu, Haitao; Li, Zhongxian

    2014-12-01

    Based on the Bingham parallel-plate model, a simplified design method for shear-valve magnetorheological (MR) dampers is proposed that takes magnetic circuit optimization into account. Correspondingly, a new MR damper with a full-length effective damping path is proposed. Prototype dampers were fabricated and studied numerically and experimentally. Based on the test results, the Bingham parallel-plate model was further modified to obtain a damping-force prediction model for the proposed MR dampers that accounts for magnetic saturation. The study indicates that the proposed simplified design method is simple, effective, and reliable. The maximum damping force of the proposed MR dampers with a full-length effective damping path is at least twice that of conventional MR dampers, and the dynamic range of the damping force increases by at least 70%. Because the prediction model accounts for magnetic saturation, it captures the actual characteristics of MR fluids and can predict the actual damping force of the MR dampers precisely.
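
    As a reference point, the underlying Bingham idealization gives a damping force with a viscous term plus a yield force that switches with velocity sign. The sketch below uses placeholder coefficients, not the prototype's values, and omits the paper's magnetic-saturation correction.

    ```python
    # Idealized Bingham damper model often used as a starting point for MR
    # damper sizing: F = c0*v + f_y*sign(v). Coefficients are placeholders.

    import numpy as np

    def bingham_force(v, c0=2.0e3, f_y=1.5e3):
        """c0 [N s/m]: viscous coefficient; f_y [N]: field-dependent yield force."""
        return c0 * v + f_y * np.sign(v)

    v = np.linspace(-0.2, 0.2, 5)       # piston velocity, m/s
    print(np.column_stack([v, bingham_force(v)]))
    ```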

  13. Development of Analysis Methods for Designing with Composites

    NASA Technical Reports Server (NTRS)

    Madenci, E.

    1999-01-01

    The project involved the development of new analysis methods to achieve efficient design of composite structures. We developed a complex variational formulation to analyze the in-plane and bending coupling response of an unsymmetrically laminated plate with an elliptical cutout subjected to arbitrary edge loading as shown in Figure 1. This formulation utilizes four independent complex potentials that satisfy the coupled in-plane and bending equilibrium equations, thus eliminating the area integrals from the strain energy expression. The solution to a finite geometry laminate under arbitrary loading is obtained by minimizing the total potential energy function and solving for the unknown coefficients of the complex potentials. The validity of this approach is demonstrated by comparison with finite element analysis predictions for a laminate with an inclined elliptical cutout under bi-axial loading. The geometry and loading of this laminate with a lay-up of [-45/45] are shown in Figure 2. The deformed configuration shown in Figure 3 reflects the presence of bending-stretching coupling. The validity of the present method is established by comparing the out-of-plane deflections along the boundary of the elliptical cutout from the present approach with those of the finite element method. The comparison shown in Figure 4 indicates remarkable agreement. The details of this method are described in a manuscript by Madenci et al. (1998).

  14. A New Aerodynamic Data Dispersion Method for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Pinier, Jeremy T.

    2011-01-01

    A novel method for implementing aerodynamic data dispersion analysis is herein introduced. A general mathematical approach combined with physical modeling tailored to the aerodynamic quantity of interest enables the generation of more realistically relevant dispersed data and, in turn, more reasonable flight simulation results. The method simultaneously allows for the aerodynamic quantities and their derivatives to be dispersed given a set of non-arbitrary constraints, which stresses the controls model in more ways than with the traditional bias up or down of the nominal data within the uncertainty bounds. The adoption and implementation of this new method within the NASA Ares I Crew Launch Vehicle Project has resulted in significant increases in predicted roll control authority, and lowered the induced risks for flight test operations. One direct impact on launch vehicles is a reduced size for auxiliary control systems, and the possibility of an increased payload. This technique has the potential of being applied to problems in multiple areas where nominal data together with uncertainties are used to produce simulations using Monte Carlo type random sampling methods. It is recommended that a tailored physics-based dispersion model be delivered with any aerodynamic product that includes nominal data and uncertainties, in order to make flight simulations more realistic and allow for leaner spacecraft designs.
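
    The core idea, dispersing a coefficient and its derivative together rather than biasing the whole nominal curve up or down, can be mimicked with smooth random perturbations. In the sketch below, a low-order Fourier perturbation is scaled to stay within an assumed value uncertainty while still varying the local slope; all numbers are illustrative, not Ares I data.

    ```python
    # Smoothly dispersed aerodynamic curves: each sample respects the value
    # uncertainty bound, yet the local slope (derivative) also varies.

    import numpy as np

    rng = np.random.default_rng(42)
    alpha = np.linspace(0.0, 10.0, 101)          # angle of attack, deg
    cn_nom = 0.08 * alpha                        # nominal normal-force coeff.
    u_bound = 0.05                               # +/- uncertainty on CN (assumed)

    def dispersed_curve():
        phase = rng.uniform(0, 2 * np.pi, 3)
        amp = rng.normal(0.0, 1.0, 3)
        pert = sum(a * np.sin((k + 1) * np.pi * alpha / 10.0 + p)
                   for k, (a, p) in enumerate(zip(amp, phase)))
        pert *= u_bound / max(1e-12, np.abs(pert).max())   # respect value bound
        return cn_nom + pert

    curves = [dispersed_curve() for _ in range(100)]
    slopes = [np.gradient(c, alpha)[50] for c in curves]
    print("slope at 5 deg: nominal 0.080, dispersed std %.4f" % np.std(slopes))
    ```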

  15. Nanobiological studies on drug design using molecular mechanic method

    PubMed Central

    Ghaheh, Hooria Seyedhosseini; Mousavi, Maryam; Araghi, Mahmood; Rasoolzadeh, Reza; Hosseini, Zahra

    2015-01-01

    Background: Influenza H1N1 is of great importance worldwide, and point mutations that occur in the virus genes are a threat for the World Health Organization (WHO) and for drug designers, since they could make the virus resistant to existing drugs. Influenza epidemics cause severe respiratory illness in 30 to 50 million people and kill 250,000 to 500,000 people worldwide every year. Nowadays, drug design is not done through trial and error because of the cost and time involved; bioinformatics studies are therefore essential for designing drugs. Materials and Methods: This paper presents a study of the binding site of the neuraminidase (NA) enzyme, which is very important in drug design, at a temperature of 310 K and at different dielectric constants. Information on the NA enzyme was extracted from the Protein Data Bank (PDB) and National Center for Biotechnology Information (NCBI) websites. The new N1 sequences were downloaded from the NCBI influenza virus sequence database. Drug binding sites were modeled using ArgusLab 4.0, HyperChem 6.0, and Chem3D software, and their stability was assessed at different dielectric constants and temperatures. Results: Measurements of the potential energy (kcal/mol) of the NA binding sites at different dielectric constants and a temperature of 310 K revealed that at a time step size of 0 ps the drug binding sites have the maximum energy level, and at a time step size of 100 ps they have maximum stability and minimum energy. Conclusions: Drug binding sites depend more on the dielectric constant than on temperature, and the optimum dielectric constant is 39/78. PMID:26605248

  16. IODC98 optical design problem: method of progressing from an achromatic to an apochromatic design

    SciTech Connect

    Seppala, L.G.

    1998-07-20

    A general method of designing an apochromatic lens by using a triplet of special glasses, based on the buried-surfaces concept, can be outlined as follows. First, choose a starting point that is already achromatic. Second, add a thick plate or shell to the design, where the plate or shell has an index of refraction of 1.62, similar to the average index of the special glass triplet (for example: PSK53A, KZFS1 and TIF6). Third, reoptimize the lens to an achromatic design. Fourth, replace the single element with the special glass triplet. Fifth, vary only the internal surfaces of the triplet to correct all three wavelengths; although this step produces little improvement, it serves to stabilize further optimization. Sixth and finally, use all potential variables to fully optimize the apochromatic lens. Microscope objectives, for example, could be designed using this technique. The important concept is the use of multiple buried surfaces, with each interface involving a special glass, after an achromatic design has been achieved. This extension relieves the restriction that all special glasses have a common index of refraction and allows a wider variety of special glasses to be used. However, it is still desirable to use glasses that form a large triangle on the P versus V diagram.

  17. An analytical filter design method for guided wave phased arrays

    NASA Astrophysics Data System (ADS)

    Kwon, Hyu-Sang; Kim, Jin-Yeon

    2016-12-01

    This paper presents an analytical method for designing a spatial filter that processes the data from an array of two-dimensional guided wave transducers. An inverse problem is defined where the spatial filter coefficients are determined in such a way that a prescribed beam shape, i.e., a desired array output is best approximated in the least-squares sense. Taking advantage of the 2π-periodicity of the generated wave field, Fourier-series representation is used to derive closed-form expressions for the constituting matrix elements. Special cases in which the desired array output is an ideal delta function and a gate function are considered in a more explicit way. Numerical simulations are performed to examine the performance of the filters designed by the proposed method. It is shown that the proposed filters can significantly improve the beam quality in general. Most notable is that the proposed method does not compromise between the main lobe width and the sidelobe levels; i.e. a narrow main lobe and low sidelobes are simultaneously achieved. It is also shown that the proposed filter can compensate the effects of nonuniform directivity and sensitivity of array elements by explicitly taking these into account in the formulation. From an example of detecting two separate targets, how much the angular resolution can be improved as compared to the conventional delay-and-sum filter is quantitatively illustrated. Lamb wave based imaging of localized defects in an elastic plate using a circular array is also presented as an example of practical applications.
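
    A generic numerical analogue of this design problem is a least-squares fit of the array pattern to a prescribed beam shape over a grid of angles. The sketch below assumes a uniform circular array and a gate-function target, and replaces the paper's closed-form Fourier-series solution with a direct numerical solve.

    ```python
    # Least-squares spatial filter design: choose weights w so the synthesized
    # pattern A w best matches a prescribed beam d over a dense angle grid.
    # The circular-array geometry and gate target are illustrative assumptions.

    import numpy as np

    M = 16                                   # array elements (assumed)
    kr = 4.0                                 # wavenumber * array radius (assumed)
    phi_n = 2 * np.pi * np.arange(M) / M     # element angular positions
    theta = np.linspace(-np.pi, np.pi, 721)  # evaluation angles

    # Steering matrix: plane-wave phase at each element for each look angle.
    A = np.exp(1j * kr * np.cos(theta[:, None] - phi_n[None, :]))

    # Desired beam: unit response within +/- 10 degrees of broadside, else 0.
    d = (np.abs(theta) < np.radians(10.0)).astype(complex)

    w, *_ = np.linalg.lstsq(A, d, rcond=None)
    pattern = np.abs(A @ w)
    print("peak at 0 deg: %.3f, max response beyond 30 deg: %.3f"
          % (pattern[theta.size // 2], pattern[np.abs(theta) > np.radians(30)].max()))
    ```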

  18. Learning physics: A comparative analysis between instructional design methods

    NASA Astrophysics Data System (ADS)

    Mathew, Easow

    The purpose of this research was to determine if there were differences in academic performance between students who participated in traditional versus collaborative problem-based learning (PBL) instructional design approaches to physics curricula. This study utilized a quantitative quasi-experimental design methodology to determine the significance of differences in pre- and posttest introductory physics exam performance between students who participated in traditional (i.e., control group) versus collaborative problem solving (PBL) instructional design (i.e., experimental group) approaches to physics curricula over a college semester in 2008. There were 42 student participants (N = 42) enrolled in an introductory physics course at the research site in the Spring 2008 semester who agreed to participate in this study after reading and signing informed consent documents. A total of 22 participants were assigned to the experimental group (n = 22) who participated in a PBL based teaching methodology along with traditional lecture methods. The other 20 students were assigned to the control group (n = 20) who participated in the traditional lecture teaching methodology. Both the courses were taught by experienced professors who have qualifications at the doctoral level. The results indicated statistically significant differences (p < .01) in academic performance between students who participated in traditional (i.e., lower physics posttest scores and lower differences between pre- and posttest scores) versus collaborative (i.e., higher physics posttest scores, and higher differences between pre- and posttest scores) instructional design approaches to physics curricula. Despite some slight differences in control group and experimental group demographic characteristics (gender, ethnicity, and age) there were statistically significant (p = .04) differences between female average academic improvement which was much higher than male average academic improvement (~63%) in

  19. Formal methods in the design of Ada 1995

    NASA Technical Reports Server (NTRS)

    Guaspari, David

    1995-01-01

    Formal, mathematical methods are most useful when applied early in the design and implementation of a software system--that, at least, is the familiar refrain. I will report on a modest effort to apply formal methods at the earliest possible stage, namely, in the design of the Ada 95 programming language itself. This talk is an 'experience report' that provides brief case studies illustrating the kinds of problems we worked on, how we approached them, and the extent (if any) to which the results proved useful. It also derives some lessons and suggestions for those undertaking future projects of this kind. Ada 95 is the first revision of the standard for the Ada programming language. The revision began in 1988, when the Ada Joint Programming Office first asked the Ada Board to recommend a plan for revising the Ada standard. The first step in the revision was to solicit criticisms of Ada 83. A set of requirements for the new language standard, based on those criticisms, was published in 1990. A small design team, the Mapping Revision Team (MRT), became exclusively responsible for revising the language standard to satisfy those requirements. The MRT, from Intermetrics, is led by S. Tucker Taft. The work of the MRT was regularly subject to independent review and criticism by a committee of distinguished Reviewers and by several advisory teams--for example, the two User/Implementor teams, each consisting of an industrial user (attempting to make significant use of the new language on a realistic application) and a compiler vendor (undertaking, experimentally, to modify its current implementation in order to provide the necessary new features). One novel decision established the Language Precision Team (LPT), which investigated language proposals from a mathematical point of view. The LPT applied formal mathematical analysis to help improve the design of Ada 95 (e.g., by clarifying the language proposals) and to help promote its acceptance (e.g., by identifying a

  20. PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMEASURE ON ΔB METHOD

    NASA Astrophysics Data System (ADS)

    Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao

    Landslide or slope failure is a three-dimensional movement phenomenon, so a three-dimensional treatment makes it easier to understand stability. The ΔB method (a simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and is equivalent to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. The analysis can be conducted using conventional spreadsheets or two-dimensional slope stability software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force along survey lines, which is based on the distribution of survey-line safety factors derived from the above analysis. The paper also presents the transverse distributive method of restraining force used for planning ground stabilization, on the basis of an example analysis.

  1. Designing arrays for modern high-resolution methods

    SciTech Connect

    Dowla, F.U.

    1987-10-01

    A bearing estimation study of seismic wavefields propagating from a strongly heterogeneous medium shows that, with the high-resolution MUSIC algorithm, the bias of the direction estimate can be reduced by adopting a smaller-aperture sub-array. Further, on this sub-array, the bias of the MUSIC algorithm is less than those of the MLM and Bartlett methods, while on the full array the performances of the three methods are comparable. The improvement in MUSIC bearing estimation with a reduced aperture might be attributed to increased signal coherency across the array; for the lower-resolution methods, the improved signal coherency in the smaller array is possibly offset by the severe loss of resolution and the presence of weak secondary sources. Building upon the characteristics of real seismic wavefields, a design language has been developed to generate, modify, and test other arrays. Eigenstructures of wavefields and arrays have been studied empirically by simulation of a variety of realistic signals. 6 refs., 5 figs.
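
    For readers unfamiliar with the estimator, a compact MUSIC implementation on a synthetic uniform linear array is sketched below; the study's seismic array geometry, sub-array choices, and wavefield heterogeneity are not modeled, and the element count, spacing, and SNR are invented.

    ```python
    # MUSIC direction-of-arrival estimation on a synthetic uniform linear array.

    import numpy as np

    rng = np.random.default_rng(7)
    M, d, snapshots = 12, 0.5, 400          # elements, spacing (wavelengths), snaps
    true_doas = np.radians([-12.0, 23.0])   # two plane-wave arrivals

    def steering(theta):
        return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

    S = steering(true_doas)                                # M x 2
    sig = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
    X = S @ sig + 0.3 * (rng.normal(size=(M, snapshots))
                         + 1j * rng.normal(size=(M, snapshots)))

    R = X @ X.conj().T / snapshots                         # sample covariance
    eigval, eigvec = np.linalg.eigh(R)
    En = eigvec[:, :-2]                                    # noise subspace (K = 2)

    scan = np.radians(np.linspace(-90, 90, 1801))
    A = steering(scan)                                     # M x len(scan)
    P = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0) # MUSIC pseudo-spectrum
    peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
    top = peaks[np.argsort(P[peaks])[-2:]]
    print("estimated DOAs (deg):", np.round(np.sort(np.degrees(scan[top])), 1))
    ```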

  2. Basic research on design analysis methods for rotorcraft vibrations

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1991-01-01

    The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
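
    The restriction to physically plausible parameters can be illustrated with a toy identification: rather than adjusting stiffness-matrix entries freely, a single modulus scale is fitted so that the model's natural frequencies match "measured" ones. The 2-DOF system and synthetic measurements below are fabricated for illustration.

    ```python
    # Physically-parameterized identification: fit one modulus scale so model
    # natural frequencies match synthetic "test" data (here, scale = 1.27).

    import numpy as np

    M = np.diag([2.0, 1.0])                        # mass matrix (known)
    K0 = np.array([[3.0, -1.0], [-1.0, 1.0]])      # nominal stiffness shape

    def nat_freqs(scale):
        lam = np.linalg.eigvals(np.linalg.solve(M, scale * K0))
        return np.sort(np.sqrt(np.real(lam)))

    measured = nat_freqs(1.27) * (1 + 1e-3 * np.array([0.5, -0.3]))  # noisy "test"

    scales = np.linspace(0.5, 2.0, 1501)
    errs = [np.sum((nat_freqs(s) - measured) ** 2) for s in scales]
    print("identified modulus scale:", scales[int(np.argmin(errs))])
    ```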

  3. AmiRNA Designer - new method of artificial miRNA design.

    PubMed

    Mickiewicz, Agnieszka; Rybarczyk, Agnieszka; Sarzynska, Joanna; Figlerowicz, Marek; Blazewicz, Jacek

    2016-01-01

    MicroRNAs (miRNAs) are small non-coding RNAs that have been found in most eukaryotic organisms. They are involved in the regulation of gene expression at the post-transcriptional level in a sequence-specific manner. MiRNAs are produced from their precursors by the Dicer-dependent small RNA biogenesis pathway. The involvement of miRNAs in a wide range of biological processes makes them excellent candidates for studying gene function or for therapeutic applications. For this purpose, different RNA-based gene silencing techniques have been developed. Artificial miRNAs (amiRNAs) targeting one or several genes of interest represent one such technique and a potential tool in functional genomics. Here, we present a new approach to amiRNA design, implemented as the AmiRNA Designer software. Our method is based on the thermodynamic analysis of the native miRNA/miRNA* and miRNA/target duplexes. In contrast to the available automated tools, our program allows the user to perform analysis of natural miRNAs for the organism of interest and to create customized constraints for the design stage. It also provides filtering of the amiRNA candidates for potential off-targets. AmiRNA Designer is freely available at http://www.cs.put.poznan.pl/arybarczyk/AmiRNA/. PMID:26784022

  4. Information processing systems, reasoning modules, and reasoning system design methods

    DOEpatents

    Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.

    2016-08-23

    Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.

  5. Information processing systems, reasoning modules, and reasoning system design methods

    DOEpatents

    Hohimer, Ryan E; Greitzer, Frank L; Hampton, Shawn D

    2014-03-04

    Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.

  6. Information processing systems, reasoning modules, and reasoning system design methods

    DOEpatents

    Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.

    2015-08-18

    Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.

  7. Computational methods in metabolic engineering for strain design.

    PubMed

    Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L

    2015-08-01

    Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native, heterologous, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve the runtime performances have also been developed, which allow for more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms.
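
    A toy flux-balance sketch conveys the strain-design idea: maximize product flux subject to steady-state stoichiometry, then re-solve with a reaction removed to mimic a knockout screen. The 3-reaction network below is invented; actual methods in this literature (e.g., OptKnock-style bilevel formulations) operate on genome-scale models.

    ```python
    # Toy flux balance analysis with one metabolite A and three reactions:
    # v0 (uptake -> A, capped at 10), v1 (A -> product), v2 (A -> byproduct).
    # "Deleting" the byproduct drain raises the achievable product flux.

    import numpy as np
    from scipy.optimize import linprog

    S = np.array([[1.0, -1.0, -1.0]])    # steady state: S v = 0 for metabolite A
    b = np.zeros(1)
    c = np.array([0.0, -1.0, 0.0])       # linprog minimizes, so maximize v1

    def max_product(byproduct_bounds):
        bounds = [(0, 10), (0, None), byproduct_bounds]
        res = linprog(c, A_eq=S, b_eq=b, bounds=bounds)
        return res.x[1]

    print("wild type product flux:", max_product((2, None)))   # forced side drain
    print("knockout  product flux:", max_product((0, 0)))      # reaction deleted
    ```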

  8. Unique Method for Generating Design Earthquake Time Histories

    SciTech Connect

    R. E. Spears

    2008-07-01

    A method has been developed that takes a seed earthquake time history and modifies it to match given design response spectra. It is a multi-step process with an initial scaling step followed by multiple refinement steps. It is unique in that both the acceleration and displacement response spectra are considered when performing the fit (which primarily improves the low-frequency accuracy of the acceleration response spectrum), and no matrix inversion is needed. The features include encouraging code-consistent acceleration, velocity, and displacement ratios and attempting to fit the pseudo-velocity response spectrum. "Smoothing" is also done to transition the modified time history to the seed time history at its start and end; this is done in the regions below a cumulative energy of 5% and above a cumulative energy of 95%. Finally, the modified acceleration, velocity, and displacement time histories are adjusted to start and end with an amplitude of zero (using Fourier transform techniques for integration).
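
    The 5%-95% cumulative-energy window used for the smoothing step can be computed directly from the record, as sketched below; the synthetic seed record and the cosine taper shape are illustrative assumptions, and the spectral-matching step itself is not shown.

    ```python
    # Locate the 5%-95% cumulative-energy window of a record and build a
    # blending weight that is 1 inside the window and rolls off outside it.

    import numpy as np

    dt = 0.01
    t = np.arange(0, 20, dt)
    rng = np.random.default_rng(3)
    seed = rng.normal(size=t.size) * np.exp(-0.5 * ((t - 8) / 3.0) ** 2)  # toy record

    energy = np.cumsum(seed ** 2)
    energy /= energy[-1]
    i5, i95 = np.searchsorted(energy, [0.05, 0.95])

    w = np.zeros(t.size)
    w[i5:i95] = 1.0
    ramp = min(i5, t.size - i95, 100)            # taper length in samples
    w[i5 - ramp:i5] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    w[i95:i95 + ramp] = 0.5 * (1 + np.cos(np.pi * np.arange(ramp) / ramp))

    # blended = w * modified + (1 - w) * seed   (the matched history not shown)
    print(f"5% energy at t={t[i5]:.2f} s, 95% at t={t[i95]:.2f} s, ramp={ramp} samples")
    ```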

  10. Development of impact design methods for ceramic gas turbine components

    NASA Technical Reports Server (NTRS)

    Song, J.; Cuccio, J.; Kington, H.

    1990-01-01

    Impact damage prediction methods are being developed to aid in the design of ceramic gas turbine engine components with improved impact resistance. Two impact damage modes were characterized: local, near the impact site, and structural, usually fast fracture away from the impact site. Local damage to Si3N4 impacted by Si3N4 spherical projectiles consists of ring and/or radial cracks around the impact point. In a mechanistic model being developed, impact damage is characterized as microcrack nucleation and propagation. The extent of damage is measured as volume fraction of microcracks. Model capability is demonstrated by simulating late impact tests. Structural failure is caused by tensile stress during impact exceeding material strength. The EPIC3 code was successfully used to predict blade structural failures in different size particle impacts on radial and axial blades.

  11. Design method of water jet pump towards high cavitation performances

    NASA Astrophysics Data System (ADS)

    Cao, L. L.; Che, B. X.; Hu, L. J.; Wu, D. Z.

    2016-05-01

    As one of the crucial components for power supply, the propulsion system is of great significance to the advance speed, noise performance, stability, and other critical performance measures of underwater vehicles. As underwater vehicles develop toward much higher advance speeds, they place increasingly critical demands on the propulsion system. Fundamentally, an increased advance speed requires a significantly raised rotation speed of the propulsion system, which deteriorates cavitation performance and consequently limits the thrust and efficiency of the whole system. Compared with the traditional propeller, the waterjet pump offers more favorable cavitation, propulsion-efficiency, and other associated performance. The present research focuses on the cavitation performance of the waterjet pump blade profile, with the aim of enlarging its advantages in high-speed vehicle propulsion. Based on the specifications of a certain underwater vehicle, the design method for a waterjet blade with high cavitation performance was investigated by numerical simulation.
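
    A back-of-envelope cavitation number shows why higher rotation speed erodes the cavitation margin: sigma falls with the square of the local velocity. The fluid properties and velocities below are generic seawater values, not the paper's design data.

    ```python
    # Cavitation-number screening: sigma = (p_inf - p_v) / (0.5 * rho * v^2);
    # lower sigma means higher cavitation risk. Generic seawater numbers.

    RHO = 1025.0          # seawater density, kg/m^3
    P_VAPOR = 2.3e3       # vapor pressure, Pa
    P_AMBIENT = 101.3e3   # ambient pressure at shallow depth, Pa (assumed)

    def cavitation_number(v_ref: float) -> float:
        return (P_AMBIENT - P_VAPOR) / (0.5 * RHO * v_ref ** 2)

    for v in (10.0, 20.0, 30.0):      # blade-relative velocities, m/s
        print(f"v = {v:4.1f} m/s  sigma = {cavitation_number(v):.2f}")
    ```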

  12. Virtual Design Method for Controlled Failure in Foldcore Sandwich Panels

    NASA Astrophysics Data System (ADS)

    Sturm, Ralf; Fischer, S.

    2015-12-01

    For certification, novel fuselage concepts have to demonstrate crashworthiness standards equivalent to the existing metal reference design. Due to the brittle failure behaviour of CFRP, this requirement can only be fulfilled by controlled progressive crash kinematics. Experiments showed that the failure of a twin-walled fuselage panel can be controlled by a local modification of the core's through-thickness compression strength. For folded cores, the required change in core properties can be achieved by modifying the fold pattern. However, the complexity of folded cores requires a virtual design methodology for tailoring the fold pattern according to all static and crash-relevant requirements. In this context, a foldcore micromodel simulation method is presented to identify the structural response of a twin-walled fuselage panel with folded core under crash-relevant loading conditions. The simulations showed that a high degree of correlation is required before simulation can replace expensive testing. In the presented studies, the necessary correlation quality could only be obtained by including imperfections of the core material in the micromodel simulation approach.

  13. Inquiry into the Practices of Expert Courseware Designers: A Pragmatic Method for the Design of Effective Instructional Systems

    ERIC Educational Resources Information Center

    Rowley, Kurt

    2005-01-01

    A multi-stage study of the practices of expert courseware designers was conducted with the final goal of identifying methods for assisting non-experts with the design of effective instructional systems. A total of 25 expert designers were involved in all stages of the inquiry. A model of the expert courseware design process was created, tested,…

  14. 77 FR 38856 - An Approach for Probabilistic Risk Assessment in Risk-Informed Decisions on Plant-Specific...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-29

    ... COMMISSION An Approach for Probabilistic Risk Assessment in Risk-Informed Decisions on Plant-Specific Changes...; extension of comment period. SUMMARY: On May 17, 2012 (77 FR 29391), the U.S. Nuclear Regulatory Commission... Approach for Probabilistic Risk Assessment in Risk-Informed Decisions on Plant-Specific Changes to...

  15. 77 FR 29391 - An Approach for Probabilistic Risk Assessment in Risk-Informed Decisions on Plant-Specific...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-17

    ... COMMISSION An Approach for Probabilistic Risk Assessment in Risk-Informed Decisions on Plant-Specific Changes... Assessment in Risk- Informed Decisions on Plant-Specific Changes to the Licensing Basis,'' (proposed Revision 3 of Regulatory Guide 1.174); DG-1286, ``An Approach for Plant-Specific,...

  16. 78 FR 22349 - Guidance on the Treatment of Uncertainties Associated With PRA in Risk-Informed Decisionmaking

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-15

    ... COMMISSION Guidance on the Treatment of Uncertainties Associated With PRA in Risk-Informed Decisionmaking..., Revision 1, ``Guidance on the Treatment of Uncertainties Associated with PRA in Risk-Informed... INFORMATION: NUREG-1855, Revision 1, Guidance on the Treatment of Uncertainties Associated with PRA in...

  17. The Method of Complex Characteristics for Design of Transonic Compressors.

    NASA Astrophysics Data System (ADS)

    Bledsoe, Margaret Randolph

    We calculate shockless transonic flows past two-dimensional cascades of airfoils characterized by a prescribed speed distribution. The approach is to find solutions of the partial differential equation $(c^2 - u^2)\,\phi_{xx} - 2uv\,\phi_{xy} + (c^2 - v^2)\,\phi_{yy} = 0$ by the method of complex characteristics. Here $\phi$ is the velocity potential, so $\nabla\phi = (u, v)$, and $c$ is the local speed of sound. Our method consists in noting that the coefficients of the equation are analytic, so that we can use analytic continuation, conformal mapping, and a spectral method in the hodograph plane to determine the flow. After complex extension we obtain canonical equations for $\phi$ and for the stream function $\psi$ as well as an explicit map from the hodograph plane to complex characteristic coordinates. In the subsonic case, a new coordinate system is defined in which the flow region corresponds to the interior of an ellipse. We construct special solutions of the flow equations in these coordinates by solving characteristic initial value problems in the ellipse with initial data defined by the complete system of Chebyshev polynomials. The condition $\psi = 0$ on the boundary of the ellipse is used to determine the series representation of $\phi$ and $\psi$. The map from the ellipse to the complex flow coordinates is found from data specifying the speed $q$ as a function of the arc length $s$. The transonic problem for shockless flow becomes well posed after appropriate modifications of this procedure. The nonlinearity of the problem is handled by an iterative method that determines the boundary value problem in the ellipse and the map function in sequence. We have implemented this method as a computer code to design two-dimensional cascades of shockless compressor airfoils with gap-to-chord ratios as low as .5 and supersonic zones on both the upper and lower surfaces. The method may be extended to solve more general boundary value problems for second order partial

  18. Communicating Genetic Risk Information for Common Disorders in the Era of Genomic Medicine

    PubMed Central

    Lautenbach, Denise M.; Christensen, Kurt D.; Sparks, Jeffrey A.; Green, Robert C.

    2013-01-01

    Communicating genetic risk information in ways that maximize understanding and promote health is increasingly important given the rapidly expanding availability and capabilities of genomic technologies. A well-developed literature on risk communication in general provides guidance for best practices, including presentation of information in multiple formats, attention to framing effects, use of graphics, sensitivity to the way numbers are presented, parsimony of information, attentiveness to emotions, and interactivity as part of the communication process. Challenges to communicating genetic risk information include deciding how best to tailor it, streamlining the process, deciding what information to disclose, accepting that communications may have limited influence, and understanding the impact of context. Meeting these challenges has great potential for empowering individuals to adopt healthier lifestyles and improve public health, but will require multidisciplinary approaches and collaboration. PMID:24003856

  19. Risk information provided to prospective oocyte donors in a preliminary phone call.

    PubMed

    Gurmankin, A D

    2001-01-01

    To compensate for the present shortage of oocyte donors, oocyte-donation programs place ads in college newspapers and provide large monetary compensation to encourage participation. Large compensation acts as a strong incentive for young women to undergo the potentially risky procedure of donation. In this enticing situation, it is particularly important for programs to fully inform prospective donors of the risks of the procedure so that they can accurately weigh the costs and benefits of donating. However, because oocyte-donor programs must alleviate the shortage of donors if they wish to maintain a financially viable business, there is reason to fear that they may minimize or misrepresent risks when recruiting egg donors. In this pilot study, the risk information provided by programs (n=19) to prospective oocyte donors in a preliminary phone call inquiry was investigated. The majority of the programs provided incomplete and/or inaccurate risk information. Policy changes are recommended to reduce the potential for undue influence and to standardize and regulate the risk information provided to prospective egg donors. PMID:11954633

  20. Progress toward regulatory acceptance of risk-informed inspection programs for nuclear power plants

    NASA Astrophysics Data System (ADS)

    Hedden, Owen F.; Cowfer, C. David

    1996-11-01

    This paper will describe work within the American Society of Mechanical Engineers committee responsible for rules for inservice inspection of nuclear power plants. Work is progressing with the objective of producing proposals for risk-informed inspection programs that will be incorporated by the US Nuclear Regulatory Commission into the federal regulations governing the construction and inservice inspection of all domestic commercial power plants. The paper will describe in detail the two primary proposals now under development and review. Both are directed toward enhancing safety while reducing the expense of periodic examination of piping welds. The first proposal provides a sound technical basis for reducing the number of Class 1 piping weld examinations by as much as 60 percent while improving or maintaining equivalent safety. This is accomplished by using risk-informed techniques to re-establish the most important areas to examine. The second is a broader approach addressing all piping systems considered to be important under risk-informed assessment techniques. Both proposals are based on recent insights into risk analysis techniques developed within the pressure vessel industry, and both require evaluation of theoretical analysis and inputs of practical experience related to a wide variety of detrimental conditions. These proposals are being supported by pilot programs in a number of operating nuclear power plants. The authors will also attempt to explain the institutional constraints inherent in the process of obtaining regulatory recognition of proposals developed cooperatively by industry and the regulatory agency.
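
    The core of the first proposal is a risk ranking. As a toy illustration only (the weld records and the 40% cut are invented, not taken from the ASME work), one can order weld segments by estimated risk, failure frequency times consequence, and retain only the highest-risk fraction for examination:

      # Hypothetical weld records: (weld id, failure frequency per year,
      # relative consequence of failure). Values are invented.
      welds = [
          ("RC-01", 2e-5, 9.0),
          ("RC-02", 5e-6, 9.0),
          ("FW-11", 8e-5, 2.0),
          ("SI-07", 1e-6, 7.0),
          ("CS-23", 3e-7, 1.5),
      ]

      # Rank by risk = frequency x consequence and keep the top 40%.
      ranked = sorted(welds, key=lambda w: w[1] * w[2], reverse=True)
      keep = ranked[: max(1, int(0.4 * len(ranked)))]
      for wid, freq, cons in keep:
          print(f"{wid}: risk = {freq * cons:.2e}")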

  1. A Universal Design Method for Reflecting Physical Characteristics Variability: Case Study of a Bicycle Frame.

    PubMed

    Shimada, Masato; Suzuki, Wataru; Yamada, Shuho; Inoue, Masato

    2016-01-01

    To achieve a Universal Design, designers must consider diverse users' physical and functional requirements for their products. However, satisfying these requirements and obtaining the information which is necessary for designing a universal product is very difficult. Therefore, we propose a new design method based on the concept of set-based design to solve these issues. This paper discusses the suitability of the proposed design method by applying it to a bicycle frame design problem. PMID:27534334
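
    Set-based design, the concept the method builds on, can be pictured as interval narrowing. A minimal sketch under our own assumptions (the variable, ranges, and numbers are invented; the paper's actual formulation is richer):

      def intersect(a, b):
          """Intersect two closed intervals (lo, hi)."""
          lo, hi = max(a[0], b[0]), min(a[1], b[1])
          if lo > hi:
              raise ValueError("requirement sets do not overlap")
          return (lo, hi)

      # Candidate set for a frame dimension (mm), narrowed by the feasible
      # range implied by each user group's physical characteristics.
      frame_length = (480.0, 600.0)
      user_requirements = [(500.0, 620.0), (470.0, 560.0), (510.0, 590.0)]

      for req in user_requirements:
          frame_length = intersect(frame_length, req)

      print("design set satisfying all users:", frame_length)   # (510.0, 560.0)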

  3. Design optimization methods for genomic DNA tiling arrays.

    PubMed

    Bertone, Paul; Trifonov, Valery; Rozowsky, Joel S; Schubert, Falk; Emanuelsson, Olof; Karro, John; Kao, Ming-Yang; Snyder, Michael; Gerstein, Mark

    2006-02-01

    A recent development in microarray research entails the unbiased coverage, or tiling, of genomic DNA for the large-scale identification of transcribed sequences and regulatory elements. A central issue in designing tiling arrays is that of arriving at a single-copy tile path, as significant sequence cross-hybridization can result from the presence of non-unique probes on the array. Due to the fragmentation of genomic DNA caused by the widespread distribution of repetitive elements, the problem of obtaining adequate sequence coverage increases with the sizes of subsequence tiles that are to be included in the design. This becomes increasingly problematic when considering complex eukaryotic genomes that contain many thousands of interspersed repeats. The general problem of sequence tiling can be framed as finding an optimal partitioning of non-repetitive subsequences over a prescribed range of tile sizes, on a DNA sequence comprising repetitive and non-repetitive regions. Exact solutions to the tiling problem become computationally infeasible when applied to large genomes, but successive optimizations are developed that allow their practical implementation. These include an efficient method for determining the degree of similarity of many oligonucleotide sequences over large genomes, and two algorithms for finding an optimal tile path composed of longer sequence tiles. The first algorithm, a dynamic programming approach, finds an optimal tiling in linear time and space; the second applies a heuristic search to reduce the space complexity to a constant requirement. A Web resource has also been developed, accessible at http://tiling.gersteinlab.org, to generate optimal tile paths from user-provided DNA sequences.
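
    The dynamic-programming idea can be sketched as follows (a simplification under our own assumptions: one sequence, a boolean repeat mask, tiles of length lo..hi that must avoid repeats, and an objective of maximizing covered bases; the published formulation is richer):

      def best_tiling(repeat_mask, lo, hi):
          n = len(repeat_mask)
          run = [0] * (n + 1)               # run[i]: repeat-free run ending at base i-1
          for i in range(1, n + 1):
              run[i] = 0 if repeat_mask[i - 1] else run[i - 1] + 1
          best = [0] * (n + 1)              # best[i]: max coverage of first i bases
          for i in range(1, n + 1):
              best[i] = best[i - 1]         # leave base i-1 untiled
              for L in range(lo, min(hi, run[i]) + 1):
                  best[i] = max(best[i], best[i - L] + L)   # a tile of length L ends at i
          return best[n]

      mask = [False] * 30 + [True] * 5 + [False] * 12   # repeats at bases 30..34
      print(best_tiling(mask, lo=8, hi=15))             # -> 42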

  4. The Convergence Insufficiency Treatment Trial: Design, Methods, and Baseline Data

    PubMed Central

    2009-01-01

    Objective This report describes the design and methodology of the Convergence Insufficiency Treatment Trial (CITT), the first large-scale, placebo-controlled, randomized clinical trial evaluating treatments for convergence insufficiency (CI) in children. We also report the clinical and demographic characteristics of patients. Methods We prospectively randomized children 9 to 17 years of age to one of four treatment groups: 1) home-based pencil push-ups, 2) home-based computer vergence/accommodative therapy and pencil push-ups, 3) office-based vergence/accommodative therapy with home reinforcement, 4) office-based placebo therapy. Outcome data on the Convergence Insufficiency Symptom Survey (CISS) score (primary outcome), near point of convergence (NPC), and positive fusional vergence were collected after 12 weeks of active treatment and again at 6 and 12 months post-treatment. Results The CITT enrolled 221 children with symptomatic CI with a mean age of 12.0 years (SD = 2.3). The clinical profile of the cohort at baseline was 9Δ exophoria at near (±4.4) and 2Δ exophoria at distance (±2.8), CISS score = 30 (±9.0), NPC = 14 cm (±7.5), and near positive fusional vergence break = 13Δ (±4.6). There were no statistically significant or clinically relevant differences between treatment groups with respect to baseline characteristics (p > 0.05). Conclusion Hallmark features of the study design include formal definitions of conditions and outcomes, standardized diagnostic and treatment protocols, a placebo treatment arm, masked outcome examinations, and the CISS score outcome measure. The baseline data reported herein define the clinical profile of those enrolled into the CITT. PMID:18300086

  5. Visual Narrative Research Methods as Performance in Industrial Design Education

    ERIC Educational Resources Information Center

    Campbell, Laurel H.; McDonagh, Deana

    2009-01-01

    This article discusses teaching empathic research methodology as performance. The authors describe their collaboration in an activity to help undergraduate industrial design students learn empathy for others when designing products for use by diverse or underrepresented people. The authors propose that an industrial design curriculum would benefit…

  6. The Chinese American Eye Study: Design and Methods

    PubMed Central

    Varma, Rohit; Hsu, Chunyi; Wang, Dandan; Torres, Mina; Azen, Stanley P.

    2016-01-01

    Purpose To summarize the study design, operational strategies and procedures of the Chinese American Eye Study (CHES), a population-based assessment of the prevalence of visual impairment, ocular disease, and visual functioning in Chinese Americans. Methods This population-based, cross-sectional study included 4,570 Chinese, 50 years and older, residing in the city of Monterey Park, California. Each eligible participant completed a detailed interview and eye examination. The interview included an assessment of demographic, behavioral, and ocular risk factors and health-related and vision-related quality of life. The eye examination included measurements of visual acuity, intraocular pressure, visual fields, fundus and optic disc photography, a detailed anterior and posterior segment examination, and measurements of blood pressure, glycosylated hemoglobin levels, and blood glucose levels. Results The objectives of the CHES are to obtain prevalence estimates of visual impairment, refractive error, diabetic retinopathy, open-angle and angle-closure glaucoma, lens opacities, and age-related macular degeneration in Chinese Americans. In addition, outcomes include effect estimates for risk factors associated with eye diseases. Lastly, the CHES will investigate the genetic determinants of myopia and glaucoma. Conclusion The CHES will provide information about the prevalence and risk factors of ocular diseases in one of the fastest growing minority groups in the United States. PMID:24044409

  8. Design and methods of the national Vietnam veterans longitudinal study.

    PubMed

    Schlenger, William E; Corry, Nida H; Kulka, Richard A; Williams, Christianna S; Henn-Haase, Clare; Marmar, Charles R

    2015-09-01

    The National Vietnam Veterans Longitudinal Study (NVVLS) is the second assessment of a representative cohort of US veterans who served during the Vietnam War era, either in Vietnam or elsewhere. The cohort was initially surveyed in the National Vietnam Veterans Readjustment Study (NVVRS) from 1984 to 1988 to assess the prevalence, incidence, and effects of post-traumatic stress disorder (PTSD) and other post-war problems. The NVVLS sought to re-interview the cohort to assess the long-term course of PTSD. NVVLS data collection began July 3, 2012 and ended May 17, 2013, comprising three components: a mailed health questionnaire, a telephone health survey interview, and, for a probability sample of theater Veterans, a clinical diagnostic telephone interview administered by licensed psychologists. Excluding decedents, 78.8% completed the questionnaire and/or telephone survey, and 55.0% of selected living veterans participated in the clinical interview. This report provides a description of the NVVLS design and methods. Together, the NVVRS and NVVLS constitute a nationally representative longitudinal study of Vietnam veterans, and extend the NVVRS as a critical resource for scientific and policy analyses for Vietnam veterans, with policy relevance for Iraq and Afghanistan veterans. PMID:26096554

  9. A decision-based perspective for the design of methods for systems design

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Muster, Douglas; Shupe, Jon A.; Allen, Janet K.

    1989-01-01

    Organization of material, a definition of decision based design, a hierarchy of decision based design, the decision support problem technique, a conceptual model design that can be manufactured and maintained, meta-design, computer-based design, action learning, and the characteristics of decisions are among the topics covered.

  10. Applications of numerical optimization methods to helicopter design problems: A survey

    NASA Technical Reports Server (NTRS)

    Miura, H.

    1984-01-01

    A survey of applications of mathematical programming methods to improve the design of helicopters and their components is presented. Applications of multivariable search techniques in the finite dimensional space are considered. Five categories of helicopter design problems are considered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.

  11. Multidisciplinary Design Optimization (MDO) Methods: Their Synergy with Computer Technology in Design Process

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1998-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimization (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behavior by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.

  12. Multidisciplinary Design Optimisation (MDO) Methods: Their Synergy with Computer Technology in the Design Process

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1999-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
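
    The closing speculation, complex behaviour simulated by many simple interacting models, is easy to make concrete with a cellular automaton, whose update is intrinsically parallel because every cell depends only on its local neighbourhood. A small illustration of our own (rule 90; not from the paper):

      import numpy as np

      def step(cells, rule=90):
          """One synchronous update of a 1-D elementary cellular automaton."""
          left, right = np.roll(cells, 1), np.roll(cells, -1)
          idx = 4 * left + 2 * cells + right            # neighbourhood code 0..7
          table = np.array([(rule >> k) & 1 for k in range(8)])
          return table[idx]                             # all cells update at once

      cells = np.zeros(64, dtype=int)
      cells[32] = 1
      for _ in range(8):
          print("".join(".#"[c] for c in cells))
          cells = step(cells)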

  13. Stillbirth Collaborative Research Network: design, methods and recruitment experience.

    PubMed

    Parker, Corette B; Hogue, Carol J R; Koch, Matthew A; Willinger, Marian; Reddy, Uma M; Thorsten, Vanessa R; Dudley, Donald J; Silver, Robert M; Coustan, Donald; Saade, George R; Conway, Deborah; Varner, Michael W; Stoll, Barbara; Pinar, Halit; Bukowski, Radek; Carpenter, Marshall; Goldenberg, Robert

    2011-09-01

    The Stillbirth Collaborative Research Network (SCRN) has conducted a multisite, population-based, case-control study, with prospective enrollment of stillbirths and livebirths at the time of delivery. This paper describes the general design, methods and recruitment experience. The SCRN attempted to enroll all stillbirths and a representative sample of livebirths occurring to residents of pre-defined geographical catchment areas delivering at 59 hospitals associated with five clinical sites. Livebirths <32 weeks gestation and women of African descent were oversampled. The recruitment hospitals were chosen to ensure access to at least 90% of all stillbirths and livebirths to residents of the catchment areas. Participants underwent a standardised protocol including maternal interview, medical record abstraction, placental pathology, biospecimen testing and, in stillbirths, post-mortem examination. Recruitment began in March 2006 and was completed in September 2008 with 663 women with a stillbirth and 1932 women with a livebirth enrolled, representing 69% and 63%, respectively, of the women identified. Additional surveillance for stillbirths continued until June 2009 and a follow-up of the case-control study participants was completed in December 2009. Among consenting women, there were high consent rates for the various study components. For the women with stillbirths, 95% agreed to a maternal interview, chart abstraction and a placental pathological examination; 91% of the women with a livebirth agreed to all of these components. Additionally, 84% of the women with stillbirths agreed to a fetal post-mortem examination. This comprehensive study is poised to systematically study a wide range of potential causes of, and risk factors for, stillbirths and to better understand the scope and incidence of the problem.

  14. Stillbirth Collaborative Research Network: Design, Methods and Recruitment Experience

    PubMed Central

    Parker, Corette B.; Hogue, Carol J. Rowland; Koch, Matthew A.; Willinger, Marian; Reddy, Uma; Thorsten, Vanessa R.; Dudley, Donald J.; Silver, Robert M.; Coustan, Donald; Saade, George R.; Conway, Deborah; Varner, Michael W.; Stoll, Barbara; Pinar, Halit; Bukowski, Radek; Carpenter, Marshall; Goldenberg, Robert

    2013-01-01

    SUMMARY The Stillbirth Collaborative Research Network (SCRN) has conducted a multisite, population-based, case-control study, with prospective enrollment of stillbirths and live births at the time of delivery. This paper describes the general design, methods, and recruitment experience. The SCRN attempted to enroll all stillbirths and a representative sample of live births occurring to residents of pre-defined geographic catchment areas delivering at 59 hospitals associated with five clinical sites. Live births <32 weeks gestation and women of African descent were oversampled. The recruitment hospitals were chosen to ensure access to at least 90% of all stillbirths and live births to residents of the catchment areas. Participants underwent a standardized protocol including maternal interview, medical record abstraction, placental pathology, biospecimen testing, and, in stillbirths, postmortem examination. Recruitment began in March 2006 and was completed in September 2008 with 663 women with a stillbirth and 1932 women with a live birth enrolled, representing 69% and 63%, respectively, of the women identified. Additional surveillance for stillbirth continued through June 2009 and a follow-up of the case-control study participants was completed in December 2009. Among consenting women, there were high consent rates for the various study components. For the women with stillbirth, 95% agreed to maternal interview, chart abstraction, and placental pathologic examination; 91% of the women with live birth agreed to all of these components. Additionally, 84% of the women with stillbirth agreed to a fetal postmortem examination. This comprehensive study is poised to systematically study a wide range of potential causes of, and risk factors for, stillbirth and to better understand the scope and incidence of the problem. PMID:21819424

  15. Methods for combining payload parameter variations with input environment. [calculating design limit loads compatible with probabilistic structural design criteria

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.

    1976-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
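
    The extreme-value step can be illustrated numerically. A hedged sketch (synthetic data; the paper works from dynamic load simulations rather than the toy generator used here): fit a Gumbel distribution to per-mission maxima and read the design limit load off a high percentile.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Pretend each mission yields 2000 load samples; keep each mission's maximum.
      mission_maxima = np.array(
          [rng.normal(100.0, 15.0, 2000).max() for _ in range(500)])

      loc, scale = stats.gumbel_r.fit(mission_maxima)       # Type I extreme-value fit
      design_limit = stats.gumbel_r.ppf(0.99, loc, scale)   # 99th-percentile limit load
      print(f"design limit load ~ {design_limit:.1f}")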

  16. Developing Baby Bag Design by Using Kansei Engineering Method

    NASA Astrophysics Data System (ADS)

    Janari, D.; Rakhmawati, A.

    2016-01-01

    Consumers' preferences and market demand are essential factors for a product's success. Thus, to achieve success, a product should have a design that fulfills consumers' expectations. The purpose of this research is to develop a baby bag product as stipulated by Kansei. The Kansei words represented by the results are: neat, unique, comfortable, safe, modern, gentle, elegant, antique, attractive, simple, spacious, creative, colorful, durable, stylish, smooth and strong. The significance of the correlation for the durable attribute is 0.000 < 0.005, which means it is significant for the baby bag, while the regression coefficient is 0.812, which means the durable attribute is insignificant for the baby bag. The final baby bag design, selected on the basis of questionnaire 3, combines elements of all four candidate designs: the space for clothes, diaper space, shoulder grip, side grip, bottle-heater pocket and bottle pocket are derived from design 1; the top grip, space for clothes, shoulder grip, and side grip from design 2; a further space for clothes from design 3; and the diaper space and clothes space from design 4.

  17. Teaching Improvement Model Designed with DEA Method and Management Matrix

    ERIC Educational Resources Information Center

    Montoneri, Bernard

    2014-01-01

    This study uses student evaluation of teachers to design a teaching improvement matrix based on teaching efficiency and performance by combining management matrix and data envelopment analysis. This matrix is designed to formulate suggestions to improve teaching. The research sample consists of 42 classes of freshmen following a course of English…

  18. METHODS FOR INTEGRATING ENVIRONMENTAL CONSIDERATIONS INTO CHEMICAL PROCESS DESIGN DECISIONS

    EPA Science Inventory

    The objective of this cooperative agreement was to postulate a means by which an engineer could routinely include environmental considerations in day-to-day conceptual design problems; a means that could easily integrate with existing design processes, and thus avoid massive retr...

  19. Development of Combinatorial Methods for Alloy Design and Optimization

    SciTech Connect

    Pharr, George M.; George, Easo P.; Santella, Michael L

    2005-07-01

    The primary goal of this research was to develop a comprehensive methodology for designing and optimizing metallic alloys by combinatorial principles. Because conventional techniques for alloy preparation are unavoidably restrictive in the range of alloy composition that can be examined, combinatorial methods promise to significantly reduce the time, energy, and expense needed for alloy design. Combinatorial methods can be developed not only to optimize existing alloys, but to explore and develop new ones as well. The scientific approach involved fabricating an alloy specimen with a continuous distribution of binary and ternary alloy compositions across its surface--an "alloy library"--and then using spatially resolved probing techniques to characterize its structure, composition, and relevant properties. The three specific objectives of the project were: (1) to devise means by which simple test specimens with a library of alloy compositions spanning the range of interest can be produced; (2) to assess how well the properties of the combinatorial specimen reproduce those of the conventionally processed alloys; and (3) to devise screening tools which can be used to rapidly assess the important properties of the alloys. As proof of principle, the methodology was applied to the Fe-Ni-Cr ternary alloy system that constitutes many commercially important materials such as stainless steels and the H-series and C-series heat and corrosion resistant casting alloys. Three different techniques were developed for making alloy libraries: (1) vapor deposition of discrete thin films on an appropriate substrate and then alloying them together by solid-state diffusion; (2) co-deposition of the alloying elements from three separate magnetron sputtering sources onto an inert substrate; and (3) localized melting of thin films with a focused electron-beam welding system. Each of the techniques was found to have its own advantages and disadvantages. A new and very powerful technique for

  1. Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.

    PubMed

    Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo

    2016-07-01

    During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On one hand producing a visual artefact has a number of advantages: it helps designers to externalise their thought and acts as a common language between different stakeholders. On the other hand, if an inappropriate visualisation method is employed it could hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify a) current HCI design methods, b) approaches of selecting these methods. The resulting design methods are filtered to create a list of just visualisation methods. These are then categorised using the approaches identified in (b). As a result 23 HCI visualisation methods are identified and categorised in 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process). PMID:26995039

  2. A new method for designing dual foil electron beam forming systems. I. Introduction, concept of the method

    NASA Astrophysics Data System (ADS)

    Adrich, Przemysław

    2016-05-01

    In Part I of this work existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor intensive task as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry and using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automatized scan of the system performance in function of parameters of the foils. The new method, while being computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real life design problem, as described in Part II of this work.
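
    The systematic scan can be pictured as a grid search with the Monte Carlo run behind a function call. In the sketch below (entirely illustrative), the transport simulation is replaced by a smooth stub; in practice mc_figure_of_merit would launch a full Monte Carlo model of the two-foil geometry, and the thicknesses and figure of merit are invented:

      import itertools

      def mc_figure_of_merit(t_primary_mm, t_secondary_mm):
          """Stub standing in for a Monte Carlo run; returns beam-flatness error."""
          # Invented smooth response with an optimum near (0.05, 1.0) mm.
          return (t_primary_mm - 0.05) ** 2 + 0.3 * (t_secondary_mm - 1.0) ** 2

      primary = [0.02, 0.04, 0.05, 0.06, 0.08]    # mm, high-Z scattering foil
      secondary = [0.6, 0.8, 1.0, 1.2, 1.4]       # mm, low-Z flattening foil

      best = min(itertools.product(primary, secondary),
                 key=lambda p: mc_figure_of_merit(*p))
      print("best foil pair (mm):", best)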

  3. The cryogenic balance design and balance calibration methods

    NASA Astrophysics Data System (ADS)

    Ewald, B.; Polanski, L.; Graewe, E.

    1992-07-01

    The current status of a program aimed at the development of a cryogenic balance for the European Transonic Wind Tunnel is reviewed. In particular, attention is given to the cryogenic balance design philosophy, mechanical balance design, reliability and accuracy, cryogenic balance calibration concept, and the concept of an automatic calibration machine. It is shown that the use of the automatic calibration machine will improve the accuracy of calibration while reducing the manpower and time required for balance calibration.

  4. Advanced transonic fan design procedure based on a Navier-Stokes method

    NASA Astrophysics Data System (ADS)

    Rhie, C. M.; Zacharias, R. M.; Hobbs, D. E.; Sarathy, K. P.; Biederman, B. P.; Lejambre, C. R.; Spear, D. A.

    1994-04-01

    A fan performance analysis method based upon three-dimensional steady Navier-Stokes equations is presented in this paper. Its accuracy is established through extensive code validation effort. Validation data comparisons ranging from a two-dimensional compressor cascade to three-dimensional fans are shown in this paper to highlight the accuracy and reliability of the code. The overall fan design procedure using this code is then presented. Typical results of this design process are shown for a current engine fan design. This new design method introduces a major improvement over the conventional design methods based on inviscid flow and boundary layer concepts. Using the Navier-Stokes design method, fan designers can confidently refine their designs prior to rig testing. This results in reduced rig testing and cost savings as the bulk of the iteration between design and experimental verification is transferred to an iteration between design and computational verification.

  5. Aircraft design for mission performance using nonlinear multiobjective optimization methods

    NASA Technical Reports Server (NTRS)

    Dovi, Augustine R.; Wrenn, Gregory A.

    1990-01-01

    A new technique which converts a constrained optimization problem to an unconstrained one where conflicting figures of merit may be simultaneously considered was combined with a complex mission analysis system. The method is compared with existing single and multiobjective optimization methods. A primary benefit from this new method for multiobjective optimization is the elimination of separate optimizations for each objective, which is required by some optimization methods. A typical wide body transport aircraft is used for the comparative studies.
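
    The abstract does not name the conversion device; a classical way to fold several figures of merit into one smooth unconstrained function, and plausibly the kind of construction meant here, is the Kreisselmeier-Steinhauser (KS) envelope, sketched below (an assumption on our part, not a statement of the paper's exact formulation):

      import numpy as np

      def ks(values, rho=50.0):
          """KS envelope: (1/rho) log sum exp(rho*g_i), an overflow-safe smooth max."""
          g = np.asarray(values, dtype=float)
          gmax = g.max()
          return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

      # Two normalized figures of merit to be driven down together,
      # e.g., scaled fuel burn and scaled range shortfall (invented values).
      print(ks([0.12, -0.03]))   # ~0.12: the envelope tracks the worst objective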

  6. A Preliminary Rubric Design to Evaluate Mixed Methods Research

    ERIC Educational Resources Information Center

    Burrows, Timothy J.

    2013-01-01

    With the increase in frequency of the use of mixed methods, both in research publications and in externally funded grants there are increasing calls for a set of standards to assess the quality of mixed methods research. The purpose of this mixed methods study was to conduct a multi-phase analysis to create a preliminary rubric to evaluate mixed…

  7. FIRE SAFETY IN NUCLEAR POWER PLANTS: A RISK-INFORMED AND PERFORMANCE-BASED APPROACH

    SciTech Connect

    AZARM,M.A.; TRAVIS,R.J.

    1999-11-14

    The consideration of risk in regulatory decision-making has long been a part of NRC's policy and practice. Initially, these considerations were qualitative and were based on risk insights. The early regulations relied on good practices, past insights, and accepted standards. As a result, most NRC regulations were prescriptive and were applied uniformly to all areas within the regulatory scope. Risk technology is changing regulations by prioritizing the areas within regulatory scope based on risk, thereby focusing on the risk-important areas. Performance technology, on the other hand, is changing the regulations by allowing requirements to be adjusted based on the specific performance expected and manifested, rather than a prior prescriptive requirement. Consistent with the objectives of risk-informed and performance-based regulatory requirements, BNL evaluated the feasibility of applying risk and performance technologies to modifying NRC's current regulations on fire protection for nuclear power plants. This feasibility study entailed several case studies (trial applications). This paper describes the results of two of them. Besides the case studies, the paper discusses an overall evaluation of methodologies for fire-risk analysis to support the risk-informed regulation. It identifies some current shortcomings and proposes some near-term solutions.

  8. Developing an Approach to Prioritize River Restoration using Data Extracted from Flood Risk Information System Databases.

    NASA Astrophysics Data System (ADS)

    Vimal, S.; Tarboton, D. G.; Band, L. E.; Duncan, J. M.; Lovette, J. P.; Corzo, G.; Miles, B.

    2015-12-01

    Prioritizing river restoration requires information on river geometry. In many US states, detailed river geometry has been collected for floodplain mapping and is available in Flood Risk Information Systems (FRIS). In particular, North Carolina has, for its 100 counties, developed a database of numerous HEC-RAS models which are available through its Flood Risk Information System (FRIS). These models, which include over 260 variables, were developed and updated by numerous contractors. They contain detailed surveyed or LiDAR-derived cross-sections and modeled flood extents for different extreme-event return periods. In this work, data from over 4700 HEC-RAS models were integrated and upscaled to utilize detailed cross-section information and 100-year modelled flood extent information to enable river restoration prioritization for the entire state of North Carolina. We developed procedures to extract geomorphic properties such as entrenchment ratio, incision ratio, etc. from these models. Entrenchment ratio quantifies the vertical containment of rivers, and thereby their vulnerability to flooding, and incision ratio quantifies the depth per unit width. A map of entrenchment ratio for the whole state was derived by linking these model results to a geodatabase. A ranking of highly entrenched counties, enabling prioritization for flood allowance and mitigation, was obtained. The results were shared through HydroShare, and web maps were developed for their visualization using the Google Maps Engine API.
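
    One of the extracted properties can be made concrete. A sketch under common geomorphic definitions (our reading: entrenchment ratio as flood-prone width at twice the maximum bankfull depth divided by bankfull width; the station/elevation pairs are invented stand-ins for a HEC-RAS cross-section):

      import numpy as np

      def width_at_elevation(station, elevation, z):
          """Wetted width of a surveyed cross-section at water-surface elevation z."""
          width = 0.0
          for i in range(len(station) - 1):
              lo = min(elevation[i], elevation[i + 1])
              hi = max(elevation[i], elevation[i + 1])
              if hi <= z:                       # segment fully below the surface
                  width += station[i + 1] - station[i]
              elif lo < z < hi:                 # partially submerged segment
                  width += (station[i + 1] - station[i]) * (z - lo) / (hi - lo)
          return width

      station   = np.array([0.0, 10.0, 18.0, 22.0, 30.0, 40.0])   # ft across channel
      elevation = np.array([8.0,  5.0,  2.0,  2.0,  5.0,  8.0])   # ft
      bankfull_elev, thalweg = 5.0, 2.0

      max_depth = bankfull_elev - thalweg
      w_bankfull = width_at_elevation(station, elevation, bankfull_elev)
      w_floodprone = width_at_elevation(station, elevation, thalweg + 2 * max_depth)
      print("entrenchment ratio:", w_floodprone / w_bankfull)         # 2.0
      print("incision ratio (depth/width):", max_depth / w_bankfull)  # 0.15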

  9. Statistical Methods for Rapid Aerothermal Analysis and Design Technology

    NASA Technical Reports Server (NTRS)

    Morgan, Carolyn; DePriest, Douglas; Thompson, Richard (Technical Monitor)

    2002-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to establish statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The research work was focused on establishing the suitable mathematical/statistical models for these purposes. It is anticipated that the resulting models can be incorporated into a software tool to provide rapid, variable-fidelity, aerothermal environments to predict heating along an arbitrary trajectory. This work will support development of an integrated design tool to perform automated thermal protection system (TPS) sizing and material selection.

  10. A method for designing robust multivariable feedback systems

    NASA Technical Reports Server (NTRS)

    Milich, David Albert; Athans, Michael; Valavani, Lena; Stein, Gunter

    1988-01-01

    A new methodology is developed for the synthesis of linear, time-invariant (LTI) controllers for multivariable LTI systems. The aim is to achieve stability and performance robustness of the feedback system in the presence of multiple unstructured uncertainty blocks; i.e., to satisfy a frequency-domain inequality in terms of the structured singular value. The design technique is referred to as the Causality Recovery Methodology (CRM). Starting with an initial (nominally) stabilizing compensator, the CRM produces a closed-loop system whose performance-robustness is at least as good as, and hopefully superior to, that of the original design. The robustness improvement is obtained by solving an infinite-dimensional, convex optimization program. A finite-dimensional implementation of the CRM was developed, and it was applied to a multivariate design example.
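
    A compact statement of the frequency-domain inequality referred to above, in standard robust-performance form (the notation is generic, not copied from the paper): with plant P, controller K, closed loop F_l(P, K), and block-diagonal uncertainty structure Δ, the requirement is

      \[
        \sup_{\omega} \; \mu_{\Delta}\bigl( F_{l}(P,K)(j\omega) \bigr) < 1 ,
      \]

    that is, the structured singular value of the closed-loop transfer matrix must stay below one at every frequency, which guarantees stability and performance for all admissible uncertainty blocks.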

  11. A design method for an intuitive web site

    SciTech Connect

    Quinniey, M.L.; Diegert, K.V.; Baca, B.G.; Forsythe, J.C.; Grose, E.

    1999-11-03

    The paper describes a methodology for designing a web site for human factors engineers that is applicable to designing a web site for any group of people. Many web pages on the World Wide Web are not organized in a format that allows a user to efficiently find information. Often the information and hypertext links on web pages are not organized into intuitive groups. Intuition implies that a person is able to use their knowledge of a paradigm to solve a problem. Intuitive groups are categories that allow web page users to find information by using their intuition or mental models of categories. In order to improve human factors engineers' efficiency in finding information on the World Wide Web, research was performed to develop a web site that serves as a tool for finding information effectively. The paper describes a methodology for designing a web site for a group of people who perform similar tasks in an organization.

  12. Advanced 3D inverse method for designing turbomachine blades

    SciTech Connect

    Dang, T.

    1995-10-01

    To meet the goal of 60% plant-cycle efficiency or better set in the ATS Program for baseload utility scale power generation, several critical technologies need to be developed. One such need is the improvement of component efficiencies. This work addresses the issue of improving the performance of turbo-machine components in gas turbines through the development of an advanced three-dimensional and viscous blade design system. This technology is needed to replace some elements in current design systems that are based on outdated technology.

  13. Preliminary Axial Flow Turbine Design and Off-Design Performance Analysis Methods for Rotary Wing Aircraft Engines. Part 1; Validation

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng, S.

    2009-01-01

    For the preliminary design and the off-design performance analysis of axial flow turbines, a pair of intermediate level-of-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction of the modern high performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented in references 3 and 4 (for TD2-2) and in reference 5 (for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces, to perform preliminary design and off-design analysis for modern aircraft engine turbines. Two validation cases for the design and the off-design prediction using TD2-2 and AXOD, conducted on two existing high efficiency turbines developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program, the High Pressure Turbine (HPT; two stages, air cooled) and the Low Pressure Turbine (LPT; five stages, un-cooled), are provided in support of the analysis and discussion presented in this paper.

  14. Convergence of controllers designed using state space methods

    NASA Technical Reports Server (NTRS)

    Morris, K. A.

    1991-01-01

    The convergence of finite dimensional controllers for infinite dimensional systems designed using approximations is examined. Stable coprime factorization theory is used to show that under the standard assumptions of uniform stabilizability/detectability, the controllers stabilize the original system for large enough model order. The controllers converge uniformly to an infinite dimensional controller, as does the closed loop response.

  15. Improved Methods for Classification, Prediction and Design of Antimicrobial Peptides

    PubMed Central

    Wang, Guangshun

    2015-01-01

    Peptides with diverse amino acid sequences, structures and functions are essential players in biological systems. The construction of well-annotated databases not only facilitates effective information management, search and mining, but also lays the foundation for developing and testing new peptide algorithms and machines. The antimicrobial peptide database (APD) is an original construction in terms of both database design and peptide entries. The host defense antimicrobial peptides (AMPs) registered in the APD cover the five kingdoms (bacteria, protists, fungi, plants, and animals) or three domains of life (bacteria, archaea, and eukaryota). This comprehensive database (http://aps.unmc.edu/AP) provides useful information on peptide discovery timeline, nomenclature, classification, glossary, calculation tools, and statistics. The APD enables effective search, prediction, and design of peptides with antibacterial, antiviral, antifungal, antiparasitic, insecticidal, spermicidal, anticancer activities, chemotactic, immune modulation, or anti-oxidative properties. A universal classification scheme is proposed herein to unify innate immunity peptides from a variety of biological sources. As an improvement, the upgraded APD makes predictions based on the database-defined parameter space and provides a list of the sequences most similar to natural AMPs. In addition, the powerful pipeline design of the database search engine laid a solid basis for designing novel antimicrobials to combat resistant superbugs, viruses, fungi or parasites. This comprehensive AMP database is a useful tool for both research and education. PMID:25555720

  16. Designing green corrosion inhibitors using chemical computation methods

    SciTech Connect

    Singhl, W.P.; Lin, G.; Bockris, J.O.M.; Kang, Y.

    1998-12-31

    Green corrosion inhibitors have been designed by understanding the relationships between the structure of organic compounds and toxicity as well as corrosion inhibition efficiency. Aquatic toxicity and corrosion inhibition efficiency are estimated using QSAR techniques. The predicted structures with reduced toxicity and improved corrosion inhibition efficiency are then tested experimentally for these properties, thus leading to green inhibitors.
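
    A QSAR model in its simplest form is a regression from molecular descriptors to the property of interest. A toy sketch (the descriptors, training values, and candidate are all invented; real QSAR work uses far richer descriptor sets and validation):

      import numpy as np

      # Rows: training molecules; columns: descriptors (e.g., logP, HOMO energy, dipole).
      X = np.array([[1.2, -9.1, 2.0],
                    [0.4, -8.7, 3.1],
                    [2.5, -9.6, 1.2],
                    [1.8, -8.9, 2.6]])
      efficiency = np.array([72.0, 55.0, 88.0, 80.0])   # % inhibition (measured)
      toxicity   = np.array([0.8, 0.3, 1.9, 1.1])       # aquatic toxicity index

      Xa = np.hstack([X, np.ones((len(X), 1))])          # add intercept column
      w_eff, *_ = np.linalg.lstsq(Xa, efficiency, rcond=None)
      w_tox, *_ = np.linalg.lstsq(Xa, toxicity, rcond=None)

      candidate = np.array([1.0, -9.0, 2.2, 1.0])        # new structure + intercept
      print("predicted efficiency:", candidate @ w_eff)
      print("predicted toxicity:  ", candidate @ w_tox)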

  17. A Prospective Method to Guide Small Molecule Drug Design

    ERIC Educational Resources Information Center

    Johnson, Alan T.

    2015-01-01

    At present, small molecule drug design follows a retrospective path when considering what analogs are to be made around a current hit or lead molecule with the focus often on identifying a compound with higher intrinsic potency. What this approach overlooks is the simultaneous need to also improve the physicochemical (PC) and pharmacokinetic (PK)…

  18. Library Design Analysis Using Post-Occupancy Evaluation Methods.

    ERIC Educational Resources Information Center

    James, Dennis C.; Stewart, Sharon L.

    1995-01-01

    Presents findings of a user-based study of the interior of Rodger's Science and Engineering Library at the University of Alabama. Compared facility evaluations from faculty, library staff, and graduate and undergraduate students. Features evaluated include: acoustics, aesthetics, book stacks, design, finishes/materials, furniture, lighting,…

  19. Improved methods for classification, prediction, and design of antimicrobial peptides.

    PubMed

    Wang, Guangshun

    2015-01-01

    Peptides with diverse amino acid sequences, structures, and functions are essential players in biological systems. The construction of well-annotated databases not only facilitates effective information management, search, and mining but also lays the foundation for developing and testing new peptide algorithms and machines. The antimicrobial peptide database (APD) is an original construction in terms of both database design and peptide entries. The host defense antimicrobial peptides (AMPs) registered in the APD cover the five kingdoms (bacteria, protists, fungi, plants, and animals) or three domains of life (bacteria, archaea, and eukaryota). This comprehensive database ( http://aps.unmc.edu/AP ) provides useful information on peptide discovery timeline, nomenclature, classification, glossary, calculation tools, and statistics. The APD enables effective search, prediction, and design of peptides with antibacterial, antiviral, antifungal, antiparasitic, insecticidal, spermicidal, anticancer activities, chemotactic, immune modulation, or antioxidative properties. A universal classification scheme is proposed herein to unify innate immunity peptides from a variety of biological sources. As an improvement, the upgraded APD makes predictions based on the database-defined parameter space and provides a list of the sequences most similar to natural AMPs. In addition, the powerful pipeline design of the database search engine laid a solid basis for designing novel antimicrobials to combat resistant superbugs, viruses, fungi, or parasites. This comprehensive AMP database is a useful tool for both research and education.

  20. A Pareto-optimal refinement method for protein design scaffolds.

    PubMed

    Nivón, Lucas Gregorio; Moretti, Rocco; Baker, David

    2013-01-01

    Computational design of protein function involves a search for amino acids with the lowest energy subject to a set of constraints specifying function. In many cases a set of natural protein backbone structures, or "scaffolds", are searched to find regions where functional sites (an enzyme active site, ligand binding pocket, protein-protein interaction region, etc.) can be placed, and the identities of the surrounding amino acids are optimized to satisfy functional constraints. Input native protein structures almost invariably have regions that score very poorly with the design force field, and any design based on these unmodified structures may result in mutations away from the native sequence solely as a result of the energetic strain. Because the input structure is already a stable protein, it is desirable to keep the total number of mutations to a minimum and to avoid mutations resulting from poorly-scoring input structures. Here we describe a protocol using cycles of minimization with combined backbone/sidechain restraints that is Pareto-optimal with respect to RMSD to the native structure and energetic strain reduction. The protocol should be broadly useful in the preparation of scaffold libraries for functional site design.
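
    The Pareto criterion itself is simple to state in code. A minimal sketch (the (RMSD, energy) pairs are invented; both axes are minimized): keep the candidates that no other candidate beats on both objectives at once.

      candidates = [(0.2, -310.0), (0.4, -335.0), (0.3, -320.0),
                    (0.6, -338.0), (0.5, -330.0), (0.25, -305.0)]

      def pareto_front(points):
          """Non-dominated set when both coordinates are minimized."""
          front = []
          for p in points:
              dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                              for q in points)
              if not dominated:
                  front.append(p)
          return sorted(front)

      print(pareto_front(candidates))
      # -> [(0.2, -310.0), (0.3, -320.0), (0.4, -335.0), (0.6, -338.0)]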

  1. Study of design and analysis methods for transonic flow

    NASA Technical Reports Server (NTRS)

    Murman, E. M.

    1977-01-01

    An airfoil design program and a boundary layer analysis were developed. Boundary conditions were derived for ventilated transonic wind tunnels and for performing transonic wind-tunnel wall calculations. A computational procedure for rotational transonic flow in engine inlet throats was formulated. Results and conclusions are summarized.

  2. When the Details Matter – Sensitivities in PRA Calculations That Could Affect Risk-Informed Decision-Making

    SciTech Connect

    Dana L. Kelly; Nathan O. Siu

    2010-06-01

    As the U.S. Nuclear Regulatory Commission (NRC) continues its efforts to increase its use of risk information in decision making, the detailed, quantitative results of probabilistic risk assessment (PRA) calculations are coming under increased scrutiny. Where once analysts and users were not overly concerned with figure of merit variations that were less than an order of magnitude, now factors of two or even less can spark heated debate regarding modeling approaches and assumptions. The philosophical and policy-related aspects of this situation are well-recognized by the PRA community. On the other hand, the technical implications for PRA methods and modeling have not been as widely discussed. This paper illustrates the potential numerical effects of choices as to the details of models and methods for parameter estimation with three examples: 1) the selection of the time period data for parameter estimation, and issues related to component boundary and failure mode definitions; 2) the selection of alternative diffuse prior distributions, including the constrained noninformative prior distribution, in Bayesian parameter estimation; and 3) the impact of uncertainty in calculations for recovery of offsite power.
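
    The second example, sensitivity to diffuse priors, is easy to reproduce numerically. A sketch of ours (conjugate gamma-Poisson update; the event count and exposure time are invented): with x failures in T component-years and a Gamma(a, b) prior on the failure rate, the posterior mean is (x + a)/(T + b), so different "noninformative" choices visibly move a sparse-data estimate.

      x, T = 1, 500.0                     # one failure in 500 component-years

      priors = {
          "Jeffreys Gamma(0.5, 0)":   (0.5, 0.0),
          "uniform  Gamma(1.0, 0)":   (1.0, 0.0),
          "diffuse  Gamma(0.1, 0.1)": (0.1, 0.1),
      }
      for name, (a, b) in priors.items():
          print(f"{name}: posterior mean = {(x + a) / (T + b):.2e} /yr")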

  3. Overview of control design methods for smart structural system

    NASA Astrophysics Data System (ADS)

    Rao, Vittal S.; Sana, Sridhar

    2001-08-01

    Smart structures are a result of effective integration of control system design and signal processing with the structural systems to maximally utilize the new advances in materials for structures, actuation and sensing to obtain the best performance for the application at hand. The research in smart structures is constantly driving towards attaining self-adaptive and diagnostic capabilities that biological systems possess. This has been manifested in a number of successful applications in many areas of engineering such as aerospace, civil and automotive systems. Instrumental in the development of such systems are smart materials such as piezoelectric, shape memory alloy, electrostrictive, magnetostrictive and fiber-optic materials and various composite materials for use as actuators, sensors and structural members. The need for development of control systems that maximally utilize the smart actuators and sensing materials to design highly distributed and highly adaptable controllers has spurred research in the areas of smart structural modeling, identification, actuator/sensor design and placement, and control systems design such as adaptive and robust controllers, with new tools such as neural networks, fuzzy logic, genetic algorithms, linear matrix inequalities, and electronics for controller implementation such as analog electronics, microcontrollers, digital signal processors (DSPs) and application specific integrated circuits (ASICs) such as field programmable gate arrays (FPGAs) and multichip modules (MCMs). In this paper, we give a brief overview of the state of control in smart structures. Different aspects of the development of smart structures such as applications, technology and theoretical advances, especially in the area of control systems design and implementation, will be covered.

  4. Optimal reliability design method for remote solar systems

    NASA Astrophysics Data System (ADS)

    Suwapaet, Nuchida

    A unique optimal reliability design algorithm is developed for remote communication systems. The algorithm deals with either minimizing the unavailability of the system within a fixed cost or minimizing the cost of the system with an unavailability constraint. The unavailability of the system is a function of three possible failure occurrences: individual component breakdown, solar energy deficiency (loss of load probability), and satellite/radio transmission loss. The three mathematical models of component failure, solar power failure, and transmission failure are combined and formulated as a nonlinear programming optimization problem with binary decision variables, such as the number and type (or size) of photovoltaic modules, batteries, radios, antennas, and controllers. The three possible failures are identified and integrated in a computer algorithm to generate the parameters for the optimization algorithm. The optimization algorithm is implemented with a branch-and-bound solution technique in MS Excel Solver. The algorithm is applied to a case study design for an actual system that will be set up in remote mountainous areas of Peru. The automated algorithm is verified with independent calculations. The optimal results from minimizing the unavailability of the system with the cost constraint case and from minimizing the total cost of the system with the unavailability constraint case are consistent with each other. The tradeoff feature in the algorithm allows designers to observe the results of 'what-if' scenarios of relaxing constraint bounds, thus obtaining the most benefit from the optimization process. An example of this approach applied to an existing communication system in the Andes shows dramatic improvement in reliability for little increase in cost. The algorithm is a real design tool, unlike other existing simulation design tools. The algorithm should be useful for other stochastic systems where component reliability, random supply and demand, and communication are
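
    The binary optimization at the heart of the method can be miniaturized. A toy sketch (the component data, budget, and series-of-parallel-groups reliability model are invented; exhaustive enumeration stands in for the branch-and-bound search):

      import itertools

      # (unit cost, unit unavailability); redundant units of a type act in parallel.
      components = {"pv_module": (300, 0.04), "battery": (150, 0.10), "radio": (450, 0.02)}
      budget = 2500

      best = None
      for counts in itertools.product(range(1, 4), repeat=len(components)):
          cost = sum(n * components[k][0] for n, k in zip(counts, components))
          if cost > budget:
              continue
          # Series system of parallel groups: availability = prod(1 - u_k ** n_k).
          avail = 1.0
          for n, k in zip(counts, components):
              avail *= 1.0 - components[k][1] ** n
          if best is None or 1.0 - avail < best[0]:
              best = (1.0 - avail, counts, cost)

      print("unavailability %.2e with counts %s at cost %d" % best)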

  5. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perceptions of a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM), built by parameterizing a conventional DSM, and integrates genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as cell values when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using the example of an electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709

  7. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perceptions of a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM), built by parameterizing a conventional DSM, and integrates genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as cell values when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using the example of an electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709

  9. Design component method for sensitivity analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Seong, Hwai G.

    1986-01-01

    A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.

  10. Applications of Genetic Methods to NASA Design and Operations Problems

    NASA Technical Reports Server (NTRS)

    Laird, Philip D.

    1996-01-01

    We review four recent NASA-funded applications in which evolutionary/genetic methods are important. In the process we survey: the kinds of problems being solved today with these methods; techniques and tools used; problems encountered; and areas where research is needed. The presentation slides are annotated briefly at the top of each page.

  11. Design and ergonomics. Methods for integrating ergonomics at hand tool design stage.

    PubMed

    Marsot, Jacques; Claudon, Laurent

    2004-01-01

    As a marked increase in the number of musculoskeletal disorders was noted in many industrialized countries, and more specifically in companies that require the use of hand tools, the French National Research and Safety Institute (INRS) launched a research project in 1999 on integrating ergonomics into hand tool design, and more particularly into the design of a boning knife. After a brief recall of the difficulties of integrating ergonomics at the design stage, the present paper shows how three design methodology tools--Functional Analysis, Quality Function Deployment and TRIZ--have been applied to the design of a boning knife. Implementation of these tools enabled us to demonstrate the extent to which they are capable of responding to the difficulties of integrating ergonomics into product design. PMID:15028190

  12. How to Combine Objectives and Methods of Evaluation in Iterative ILE Design: Lessons Learned from Designing Ambre-Add

    ERIC Educational Resources Information Center

    Nogry, S.; Jean-Daubias, S.; Guin, N.

    2012-01-01

    This article deals with evaluating an interactive learning environment (ILE) during the iterative-design process. Various aspects of the system must be assessed and a number of evaluation methods are available. In designing the ILE Ambre-add, several techniques were combined to test and refine the system. In particular, we point out the merits of…

  13. A hybrid nonlinear programming method for design optimization

    NASA Technical Reports Server (NTRS)

    Rajan, S. D.

    1986-01-01

    Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.
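
    The rule-based switching between optimization techniques described here can be sketched in a few lines. The objective, constraint, and the particular gradient-based-to-derivative-free fallback rule below are illustrative assumptions, not the paper's NLP system.

```python
# A small sketch (not the paper's system) of rule-based switching between
# optimization techniques: try a fast gradient-based method first, then fall
# back to a derivative-free one if it fails to converge.
import numpy as np
from scipy.optimize import minimize

def objective(x):                     # hypothetical design objective
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]  # x0+x1 >= 1

def solve(x0):
    for method in ("SLSQP", "COBYLA"):          # simple switching rule
        res = minimize(objective, x0, method=method, constraints=constraints)
        if res.success:
            print(f"{method} converged: x = {res.x}")
            return res
        print(f"{method} failed ({res.message}); switching technique")
    return res

solve(np.array([0.0, 0.0]))
```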

  14. Design Method of Fault Detector for Injection Unit

    NASA Astrophysics Data System (ADS)

    Ochi, Kiyoshi; Saeki, Masami

    An injection unit is considered as a speed control system utilizing a reaction-force sensor. Our purpose is to design a fault detector that detects and isolates actuator and sensor faults under the condition that the system is disturbed by a reaction force. First described is the fault detector's general structure. In this system, a disturbance observer that estimates the reaction force is designed for the speed control system in order to obtain the residual signals, and then post-filters that separate the specific frequency elements from the residual signals are applied in order to generate the decision signals. Next, we describe a fault detector designed specifically for a model of the injection unit. It is shown that the disturbance imposed on the decision variables can be made significantly small by appropriate adjustments to the observer bandwidth, and that most of the sensor faults and actuator faults can be detected and some of them can be isolated in the frequency domain by setting the frequency characteristics of the post-filters appropriately. Our result is verified by experiments for an actual injection unit.

  15. 14 CFR 161.9 - Designation of noise description methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... and methods prescribed under appendix A of 14 CFR part 150; and (b) Use of computer models to create noise contours must be in accordance with the criteria prescribed under appendix A of 14 CFR part 150....

  16. Advanced Control and Protection system Design Methods for Modular HTGRs

    SciTech Connect

    Ball, Sydney J; Wilson Jr, Thomas L; Wood, Richard Thomas

    2012-06-01

    The project supported the Nuclear Regulatory Commission (NRC) in identifying and evaluating the regulatory implications concerning the control and protection systems proposed for use in the Department of Energy's (DOE) Next-Generation Nuclear Plant (NGNP). The NGNP, using modular high-temperature gas-cooled reactor (HTGR) technology, is to provide commercial industries with electricity and high-temperature process heat for industrial processes such as hydrogen production. Process heat temperatures range from 700 to 950 C, and for the upper range of these operation temperatures, the modular HTGR is sometimes referred to as the Very High Temperature Reactor or VHTR. Initial NGNP designs are for operation in the lower temperature range. The defining safety characteristic of the modular HTGR is that its primary defense against serious accidents is to be achieved through its inherent properties of the fuel and core. Because of its strong negative temperature coefficient of reactivity and the capability of the fuel to withstand high temperatures, fast-acting active safety systems or prompt operator actions should not be required to prevent significant fuel failure and fission product release. The plant is designed such that its inherent features should provide adequate protection despite operational errors or equipment failure. Figure 1 shows an example modular HTGR layout (prismatic core version), where its inlet coolant enters the reactor vessel at the bottom, traversing up the sides to the top plenum, down-flow through an annular core, and exiting from the lower plenum (hot duct). This research provided NRC staff with (a) insights and knowledge about the control and protection systems for the NGNP and VHTR, (b) information on the technologies/approaches under consideration for use in the reactor and process heat applications, (c) guidelines for the design of highly integrated control rooms, (d) consideration for modeling of control and protection system designs for…

  17. Reducing Design Risk Using Robust Design Methods: A Dual Response Surface Approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Yeniay, Ozgur; Lepsch, Roger A. (Technical Monitor)

    2003-01-01

    Space transportation system conceptual design is a multidisciplinary process containing a considerable element of risk. Risk here is defined as the variability in the estimated (output) performance characteristic of interest resulting from uncertainties in the values of several disciplinary design and/or operational parameters. Uncertainties from one discipline (and/or subsystem) may propagate to another through linking parameters, and the final system output may have a significant accumulation of risk. This variability can result in significant deviations from the expected performance. Therefore, an estimate of variability (which is called design risk in this study) together with the expected performance characteristic value (e.g. mean empty weight) is necessary for multidisciplinary optimization for a robust design. Robust design in this study is defined as a solution that minimizes variability subject to a constraint on mean performance characteristics. Even though multidisciplinary design optimization has gained wide attention and applications, the treatment of uncertainties to quantify and analyze design risk has received little attention. This research effort explores the dual response surface approach to quantify variability (risk) in critical performance characteristics (such as weight) during conceptual design.
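
    A minimal sketch of the dual response surface idea under assumed data: separate surfaces are fitted to the sample mean and standard deviation of a performance characteristic, and the standard-deviation surface is minimized subject to a constraint on the mean. The quadratic fits, constraint cap, and one-variable design space are illustrative choices, not the study's actual models.

```python
# Dual-response-surface sketch: fit surfaces to the mean and standard
# deviation of a performance characteristic (e.g. weight), then minimize
# variability subject to a mean constraint.  All data are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x_design = np.linspace(0.0, 1.0, 9)                  # design-variable settings
replicates = np.array([10 + 5*(x - 0.4)**2 + rng.normal(0, 0.5 + x, 30)
                       for x in x_design])           # hypothetical simulations

mean_surf = np.polynomial.Polynomial.fit(x_design, replicates.mean(axis=1), 2)
std_surf = np.polynomial.Polynomial.fit(x_design, replicates.std(axis=1, ddof=1), 2)

MEAN_CAP = 11.0   # constraint on expected performance (assumed)

res = minimize(lambda x: std_surf(x[0]), x0=[0.5],
               bounds=[(0.0, 1.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: MEAN_CAP - mean_surf(x[0])}],
               method="SLSQP")
print("robust design point:", res.x, "predicted std:", std_surf(res.x[0]))
```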

  18. Object-oriented design of preconditioned iterative methods

    SciTech Connect

    Bruaset, A.M.

    1994-12-31

    In this talk the author discusses how object-oriented programming techniques can be used to develop a flexible software package for preconditioned iterative methods. The ideas described have been used to implement the linear algebra part of Diffpack, which is a collection of C++ class libraries that provides high-level tools for the solution of partial differential equations. In particular, this software package is aimed at rapid development of PDE-based numerical simulators, primarily using finite element methods.
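
    Diffpack itself is C++, but the object-oriented idea, iterative solvers coded against an abstract preconditioner interface so that preconditioners can be swapped freely, can be sketched in an illustrative Python analogue; the Jacobi preconditioner and test matrix below are assumptions for demonstration only.

```python
# A toy object-oriented analogue: preconditioned conjugate gradients written
# against an abstract preconditioner interface, so implementations swap freely.
import numpy as np

class Preconditioner:
    def apply(self, r):               # approximately solve M z = r
        raise NotImplementedError

class IdentityPC(Preconditioner):
    def apply(self, r):
        return r

class JacobiPC(Preconditioner):
    def __init__(self, A):
        self.inv_diag = 1.0 / np.diag(A)
    def apply(self, r):
        return self.inv_diag * r

def pcg(A, b, pc, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = pc.apply(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = pc.apply(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n = 100
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * np.ones((n, n))  # SPD test matrix
b = np.ones(n)
for pc in (IdentityPC(), JacobiPC(A)):
    _, iters = pcg(A, b, pc)
    print(type(pc).__name__, "iterations:", iters)
```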

  19. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
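
    A bare-bones genetic algorithm with the reproduction, crossover, and mutation operators the paper describes might look like the sketch below; the bit-count fitness function is a stand-in for the structural objectives (fundamental frequency, energy dissipation) used in the paper.

```python
# Minimal genetic algorithm: tournament reproduction, single-point crossover,
# bit-flip mutation.  The fitness function is a toy stand-in; a real case
# would call a structural model instead.
import random

GENES, POP, GENS, PMUT = 20, 40, 60, 0.02

def fitness(bits):                 # toy objective: maximize the number of 1-bits
    return sum(bits)

def select(pop):                   # two-way tournament selection
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):             # single-point crossover
    cut = random.randrange(1, GENES)
    return p1[:cut] + p2[cut:]

def mutate(bits):
    return [b ^ 1 if random.random() < PMUT else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```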

  20. Analysis of health impact inputs to the US Department of Energy's risk information system

    SciTech Connect

    Droppo, J.G. Jr.; Buck, J.W.; Strenge, D.L.; Siegel, M.R.

    1990-08-01

    The US Department of Energy (DOE) is in the process of completing a survey of environmental problems, referred to as the Environmental Survey, at their facilities across the country. The DOE Risk Information System (RIS) is being used to prioritize these environmental problems identified in the Environmental Survey's findings. This report contains a discussion of site-specific public health risk parameters and the rationale for their inclusion in the RIS. These parameters are based on computed potential impacts obtained with the Multimedia Environmental Pollutant Assessment System (MEPAS). MEPAS is a computer-based methodology for evaluating the potential exposures resulting from multimedia environmental transport of hazardous materials. This report has three related objectives: document the role of MEPAS in the RIS framework, report the results of the analysis of alternative risk parameters that led to the current RIS risk parameters, and describe the analysis of uncertainties in the risk-related parameters. 20 refs., 17 figs., 10 tabs.

  1. Personal cancer knowledge and information seeking through PRISM: the planned risk information seeking model.

    PubMed

    Hovick, Shelly R; Kahlor, Leeann; Liang, Ming-Ching

    2014-04-01

    This study retested PRISM, a model of risk information seeking, and found that it is applicable to the context of cancer risk communication. The study, which used an online sample of 928 U.S. adults, also tested the effect of additional variables on that model and found that the original model better fit the data. Among the strongest predictors of cancer information seeking were seeking-related subjective norms, attitude toward seeking, perceived knowledge insufficiency, and affective risk response. Furthermore, risk perception was a strong predictor of an affective risk response. The authors suggest that, given the robustness across studies, the path between seeking-related subjective norms and seeking intention is ready to be implemented in communication practice. PMID:24433251

  2. Category's analysis and operational project capacity method of transformation in design

    NASA Astrophysics Data System (ADS)

    Obednina, S. V.; Bystrova, T. Y.

    2015-10-01

    The method of transformation is attracting widespread interest in fields such as contemporary design. In design theory, however, little attention has been paid to the categorical status of the term "transformation". This paper presents a conceptual analysis of transformation based on the theory of form employed in the influential essays of Aristotle and Thomas Aquinas. Transformation as a method of shaping in design is explored, and the potential application of the term in design is demonstrated.

  3. A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design

    ERIC Educational Resources Information Center

    Palladino, John M.

    2009-01-01

    Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…

  4. 40 CFR 53.8 - Designation of reference and equivalent methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Designation of reference and equivalent... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.8 Designation of reference and equivalent methods. (a) A candidate method determined by the Administrator...

  5. 40 CFR 53.8 - Designation of reference and equivalent methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 6 2012-07-01 2012-07-01 false Designation of reference and equivalent... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.8 Designation of reference and equivalent methods. (a) A candidate method determined by the Administrator...

  6. Mixing Qualitative and Quantitative Methods: Insights into Design and Analysis Issues

    ERIC Educational Resources Information Center

    Lieber, Eli

    2009-01-01

    This article describes and discusses issues related to research design and data analysis in the mixing of qualitative and quantitative methods. It is increasingly desirable to use multiple methods in research, but questions arise as to how best to design and analyze the data generated by mixed methods projects. I offer a conceptualization for such…

  7. 40 CFR 53.8 - Designation of reference and equivalent methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Designation of reference and equivalent... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.8 Designation of reference and equivalent methods. (a) A candidate method determined by the Administrator...

  8. Cathodic protection design using the regression and correlation method

    SciTech Connect

    Niembro, A.M.; Ortiz, E.L.G.

    1997-09-01

    A computerized statistical method which calculates the current demand requirement based on potential measurements for cathodic protection systems is introduced. The method uses the regression and correlation analysis of statistical measurements of current and potentials of the piping network. This approach involves four steps: field potential measurements, statistical determination of the current required to achieve full protection, installation of more cathodic protection capacity with distributed anodes around the plant and examination of the protection potentials. The procedure is described and recommendations for the improvement of the existing and new cathodic protection systems are given.
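
    The statistical core of the method, regressing measured potentials against applied current and reading off the current demand at a protection criterion, can be sketched as below. The field data are invented, and the -0.85 V (vs. Cu/CuSO4) criterion is the common rule of thumb for steel rather than a value taken from the paper.

```python
# A hedged sketch of the statistical idea: regress measured structure-to-soil
# potential against applied CP current, then estimate the current demand that
# drives the potential to the protection criterion.  Data are invented.
import numpy as np

current = np.array([0.0, 5.0, 10.0, 15.0, 20.0])            # applied current, A
potential = np.array([-0.55, -0.63, -0.70, -0.76, -0.83])   # V vs Cu/CuSO4

slope, intercept = np.polyfit(current, potential, 1)   # least-squares line
r = np.corrcoef(current, potential)[0, 1]              # correlation check

TARGET = -0.85                                         # common criterion for steel
demand = (TARGET - intercept) / slope
print(f"correlation r = {r:.3f}; estimated current demand = {demand:.1f} A")
```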

  9. Drug safety in pregnancy: utopia or achievable prospect? Risk information, risk research and advocacy in Teratology Information Services.

    PubMed

    Schaefer, Christof

    2011-03-01

    Even though the situation with drugs in pregnancy, from preclinical testing to drug risk labeling, has improved substantially since the thalidomide scandal, there is still a growing need to provide healthcare professionals and patients with updated, individualized risk information for clinical decision making. For the majority of drugs, clinical experience is still insufficient with respect to their safety in pregnancy, and there is often uncertainty in how to interpret the available scientific data. Based on 20 years of experience with Teratology Information Services (TIS) cooperating in the European Network of Teratology Information Services (ENTIS), methods of risk interpretation, the follow-up of exposed pregnancies through the consultation process, and their evaluation are discussed. Vitamin K antagonists, isotretinoin and angiotensin (AT) II-receptor-antagonists are presented as examples of misinterpretation of drug risks and as subjects of research based on observational clinical data recorded in TIS. As many TIS are poorly funded, advocacy is necessary: contacts must be established with decision makers in health politics and administration, informing them of the high return, in terms of health outcomes and cost savings, provided by TIS as reference institutions in clinical teratology.

  10. Flight critical system design guidelines and validation methods

    NASA Technical Reports Server (NTRS)

    Holt, H. M.; Lupton, A. O.; Holden, D. G.

    1984-01-01

    Efforts being expended at NASA-Langley to define a validation methodology, techniques for comparing advanced systems concepts, and design guidelines for characterizing fault tolerant digital avionics are described with an emphasis on the capabilities of AIRLAB, an environmentally controlled laboratory. AIRLAB has VAX 11/750 and 11/780 computers with an aggregate of 22 Mb memory and over 650 Mb storage, interconnected at 256 kbaud. An additional computer is programmed to emulate digital devices. Ongoing work is easily accessed at user stations by either chronological or key word indexing. The CARE III program aids in analyzing the capabilities of test systems to recover from faults. An additional code, the semi-Markov unreliability program (SURE) generates upper and lower reliability bounds. The AIRLAB facility is mainly dedicated to research on designs of digital flight-critical systems which must have acceptable reliability before incorporation into aircraft control systems. The digital systems would be too costly to submit to a full battery of flight tests and must be initially examined with the AIRLAB simulation capabilities.

  11. Computer control of large accelerators design concepts and methods

    SciTech Connect

    Beck, F.; Gormley, M.

    1984-05-01

    Unlike most of the specialities treated in this volume, control system design is still an art, not a science. These lectures are an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided. 19 references.

  12. On extracting design principles from biology: I. Method-General answers to high-level design questions for bioinspired robots.

    PubMed

    Haberland, M; Kim, S

    2015-02-02

    When millions of years of evolution suggest a particular design solution, we may be tempted to abandon traditional design methods and copy the biological example. However, biological solutions do not often translate directly into the engineering domain, and even when they do, copying eliminates the opportunity to improve. A better approach is to extract design principles relevant to the task of interest, incorporate them in engineering designs, and vet these candidates against others. This paper presents the first general framework for determining whether biologically inspired relationships between design input variables and output objectives and constraints are applicable to a variety of engineering systems. Using optimization and statistics to generalize the results beyond a particular system, the framework overcomes shortcomings observed of ad hoc methods, particularly those used in the challenging study of legged locomotion. The utility of the framework is demonstrated in a case study of the relative running efficiency of rotary-kneed and telescoping-legged robots.

  13. Fault self-diagnosis designing method of the automotive electronic control system

    NASA Astrophysics Data System (ADS)

    Ding, Yangyan; Yang, Zhigang; Fu, Xiaolin

    2005-12-01

    The fault self-diagnosis system is an important component of the automotive electronic control system. Designers of automotive electronic control systems urgently need a complete understanding of self-diagnosis design methods in order to apply them in practice. To address this need, self-diagnosis design methods for sensors, the electronic control unit (ECU), and actuators, the three main parts of automotive electronic control systems, are discussed in this paper. According to the fault types and characteristics of commonly used sensors, self-diagnosis design methods for sensors are presented, and fault diagnosis techniques based on signal detection and on analytical redundancy are analyzed and summarized from the viewpoint of self-diagnosis design. Problems in ECU failure self-diagnosis are also analyzed: for different ECU fault types, a circuit monitoring method and a hardware-circuit self-detection method are adopted, and together these provide a real-time, on-line technique for failure self-diagnosis; the failure self-diagnosis design methods for the ECU are then summarized. Finally, common actuator faults are analyzed and a general design method for the failure self-diagnosis system is presented. These self-diagnosis design methods offer a useful approach for designers addressing failures of automotive electronic control systems.
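
    As a small illustration of the signal-detection style of sensor self-diagnosis mentioned above, the following sketch applies range and rate-of-change plausibility checks to a sampled signal. The thresholds and the throttle-position example are assumptions, not values from the paper.

```python
# Minimal signal-based sensor self-diagnosis: range and rate-of-change
# plausibility checks of the kind used for out-of-range and jump faults.
def check_sensor(samples, lo, hi, max_step):
    """Return a list of (index, fault) findings for a sampled sensor signal."""
    faults = []
    for i, v in enumerate(samples):
        if not (lo <= v <= hi):
            faults.append((i, "out of range"))
        if i > 0 and abs(v - samples[i - 1]) > max_step:
            faults.append((i, "implausible jump"))
    return faults

# e.g. a throttle-position signal in volts (0.5-4.5 V valid band assumed)
signal = [1.0, 1.1, 1.2, 4.9, 1.3, 1.3]
print(check_sensor(signal, lo=0.5, hi=4.5, max_step=0.5))
```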

  14. Risk-Informed Safety Margin Characterization (RISMC): Integrated Treatment of Aleatory and Epistemic Uncertainty in Safety Analysis

    SciTech Connect

    R. W. Youngblood

    2010-10-01

    The concept of “margin” has a long history in nuclear licensing and in the codification of good engineering practices. However, some traditional applications of “margin” have been carried out for surrogate scenarios (such as design basis scenarios), without regard to the actual frequencies of those scenarios, and have been carried out in a systematically conservative fashion. This means that the effectiveness of the application of the margin concept is determined in part by the original choice of surrogates, and is limited in any case by the degree of conservatism imposed on the evaluation. In the RISMC project, which is part of the Department of Energy’s “Light Water Reactor Sustainability Program” (LWRSP), we are developing a risk-informed characterization of safety margin. Beginning with the traditional discussion of “margin” in terms of a “load” (a physical challenge to system or component function) and a “capacity” (the capability of that system or component to accommodate the challenge), we are developing the capability to characterize probabilistic load and capacity spectra, reflecting both aleatory and epistemic uncertainty in system response. For example, the probabilistic load spectrum will reflect the frequency of challenges of a particular severity. Such a characterization is required if decision-making is to be informed optimally. However, in order to enable the quantification of probabilistic load spectra, existing analysis capability needs to be extended. Accordingly, the INL is working on a next-generation safety analysis capability whose design will allow for much more efficient parameter uncertainty analysis, and will enable a much better integration of reliability-related and phenomenology-related aspects of margin.
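
    One way to realize probabilistic load and capacity spectra numerically is a two-loop Monte Carlo, with epistemic uncertainty sampled in an outer loop and aleatory variability in an inner loop. The sketch below uses invented distributions purely to illustrate the computation of P(load > capacity); it is not the RISMC toolchain.

```python
# Two-loop Monte Carlo sketch of the load-vs-capacity framing: the outer loop
# samples epistemic parameter uncertainty, the inner loop samples aleatory
# variability.  All distributions here are assumed.
import numpy as np

rng = np.random.default_rng(42)
N_EPISTEMIC, N_ALEATORY = 200, 5000

failure_probs = []
for _ in range(N_EPISTEMIC):
    # epistemic: uncertain medians of load and capacity
    load_median = rng.normal(100.0, 10.0)
    cap_median = rng.normal(150.0, 15.0)
    # aleatory: scenario-to-scenario variability (lognormal)
    load = load_median * rng.lognormal(0.0, 0.2, N_ALEATORY)
    capacity = cap_median * rng.lognormal(0.0, 0.1, N_ALEATORY)
    failure_probs.append(np.mean(load > capacity))

p = np.array(failure_probs)
print(f"mean P(load > capacity) = {p.mean():.4f}, "
      f"95th percentile = {np.percentile(p, 95):.4f}")
```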

  15. Power Analysis for Complex Mediational Designs Using Monte Carlo Methods

    ERIC Educational Resources Information Center

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2010-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation in these models is not yet available in the literature. We describe a general framework for power analyses for complex…
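
    Although the framework in the article covers latent variable and growth curve models, the Monte Carlo logic can be illustrated on the simplest single-mediator model: simulate data under assumed path coefficients, test the a and b paths, and count rejections. The coefficients, sample size, and joint-significance test below are illustrative choices, not the article's recommendations.

```python
# Monte Carlo power sketch for a single-mediator model (X -> M -> Y) using
# the joint-significance test of the a and b paths.  Values are assumed.
import numpy as np

rng = np.random.default_rng(7)
a, b, c_prime = 0.3, 0.3, 0.1       # assumed path coefficients
N, REPS, T_CRIT = 100, 2000, 1.96   # |t| > 1.96 ~ two-sided alpha = .05

def ols_t(y, X):
    """Return t statistics of OLS coefficients (X includes an intercept)."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta / se

hits = 0
for _ in range(REPS):
    x = rng.normal(size=N)
    m = a * x + rng.normal(size=N)
    y = b * m + c_prime * x + rng.normal(size=N)
    ones = np.ones(N)
    t_a = ols_t(m, np.column_stack([ones, x]))[1]         # a path
    t_b = ols_t(y, np.column_stack([ones, m, x]))[1]      # b path, X controlled
    hits += (abs(t_a) > T_CRIT) and (abs(t_b) > T_CRIT)

print(f"estimated power = {hits / REPS:.2f}")
```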

  16. The Use of Hermeneutics in a Mixed Methods Design

    ERIC Educational Resources Information Center

    von Zweck, Claudia; Paterson, Margo; Pentland, Wendy

    2008-01-01

    Combining methods in a single study is becoming a more common practice because of the limitations of using only one approach to fully address all aspects of a research question. Hermeneutics in this paper is discussed in relation to a large national study that investigated issues influencing the ability of international graduates to work as…

  17. Application of Six Sigma Method to EMS Design

    NASA Astrophysics Data System (ADS)

    Rusko, Miroslav; Králiková, Ružena

    2011-01-01

    The Six Sigma method is a complex and flexible system for achieving, maintaining and maximizing business success. Six Sigma is based mainly on understanding customer needs and expectations, the disciplined use of facts and statistical analysis, and a responsible approach to managing, improving and establishing new business, manufacturing and service processes.

  18. AI/OR computational model for integrating qualitative and quantitative design methods

    NASA Technical Reports Server (NTRS)

    Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor

    1990-01-01

    A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.

  19. The research progress on Hodograph Method of aerodynamic design at Tsinghua University

    NASA Technical Reports Server (NTRS)

    Chen, Zuoyi; Guo, Jingrong

    1991-01-01

    Progress in the use of the Hodograph method of aerodynamic design is discussed. It was found that there are some restricted conditions in the application of Hodograph design to transonic turbine and compressor cascades. The Hodograph method is suitable not only to the transonic turbine cascade but also to the transonic compressor cascade. The three dimensional Hodograph method will be developed after obtaining the basic equation for the three dimensional Hodograph method. As an example of the Hodograph method, the use of the method to design a transonic turbine and compressor cascade is discussed.

  20. Finite Element Method Applied to Fuse Protection Design

    NASA Astrophysics Data System (ADS)

    Li, Sen; Song, Zhiquan; Zhang, Ming; Xu, Liuwei; Li, Jinchao; Fu, Peng; Wang, Min; Dong, Lin

    2014-03-01

    In a poloidal field (PF) converter module, fuse protection is of great importance to ensure the safety of the thyristors. The fuse is pre-selected in a traditional way and then verified by finite element analysis. A 3D physical model is built by ANSYS software to solve the thermal-electric coupled problem of transient process in case of external fault. The result shows that this method is feasible.

  1. Best Estimate Method vs Evaluation Method: a comparison of two techniques in evaluating seismic analysis and design

    SciTech Connect

    Bumpus, S.E.; Johnson, J.J.; Smith, P.D.

    1980-05-01

    The concept of how two techniques, Best Estimate Method and Evaluation Method, may be applied to the traditional seismic analysis and design of a nuclear power plant is introduced. Only the four links of the seismic analysis and design methodology chain (SMC) - seismic input, soil-structure interaction, major structural response, and subsystem response - are considered. The objective is to evaluate the compounding of conservatisms in the seismic analysis and design of nuclear power plants, to provide guidance for judgments in the SMC, and to concentrate the evaluation on that part of the seismic analysis and design which is familiar to the engineering community. An example applies the effects of three-dimensional excitations on a model of a nuclear power plant structure. The example demonstrates how conservatisms accrue by coupling two links in the SMC and comparing those results to the effects of one link alone. The utility of employing the Best Estimate Method vs the Evaluation Method is also demonstrated.

  2. A New Automated Design Method Based on Machine Learning for CMOS Analog Circuits

    NASA Astrophysics Data System (ADS)

    Moradi, Behzad; Mirzaei, Abdolreza

    2016-11-01

    A new simulation-based automated CMOS analog circuit design method, which applies a multi-objective non-Darwinian evolutionary algorithm based on the Learnable Evolution Model (LEM), is proposed in this article. The multi-objective property is governed by a modified Strength Pareto Evolutionary Algorithm (SPEA) incorporated in the LEM algorithm presented here. LEM includes a machine learning method, such as decision trees, that distinguishes between high- and low-fitness areas of the design space. The learning process can detect promising directions of evolution and take large evolutionary steps, which shortens the evolution process and yields a remarkable reduction in the number of individual evaluations. The expert designer's circuit knowledge is applied in the design process to reduce the design space as well as the design time. Circuit evaluation is performed by the HSPICE simulator, and, to improve design accuracy, the bsim3v3 CMOS transistor model is adopted. The proposed method is tested on three different operational amplifier circuits, and its performance is verified by comparison with an evolutionary strategy algorithm and other similar methods.
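
    The LEM ingredient, a decision tree that separates high- from low-fitness designs and then guides where new candidates are generated, can be caricatured as follows. The fitness function stands in for the HSPICE-evaluated circuit objectives, and the rejection-sampling step is one simple way (not necessarily the authors') to exploit the learned classifier.

```python
# Stripped-down LEM-style loop: train a decision tree to separate high- from
# low-fitness designs, then bias new candidates toward the high-fitness region.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
DIM, POP, GENS = 4, 60, 15

def fitness(x):                       # hypothetical objective (to maximize)
    return -np.sum((x - 0.7) ** 2, axis=1)

pop = rng.uniform(0, 1, (POP, DIM))
for g in range(GENS):
    f = fitness(pop)
    labels = f >= np.median(f)        # high- vs low-fitness groups
    tree = DecisionTreeClassifier(max_depth=3).fit(pop, labels)
    # rejection-sample new candidates the tree classifies as high-fitness
    cand = rng.uniform(0, 1, (10 * POP, DIM))
    good = cand[tree.predict(cand)][:POP // 2]
    elite = pop[np.argsort(f)][-(POP - len(good)):]
    pop = np.vstack([elite, good])

print("best design:", pop[np.argmax(fitness(pop))])
```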

  3. Matching wind turbine rotors and loads: computational methods for designers

    SciTech Connect

    Seale, J.B.

    1983-04-01

    This report provides a comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications. The user must supply: (1) turbine aerodynamic efficiency as a function of tipspeed ratio; (2) mechanical load torque as a function of rotation speed; (3) useful delivered power as a function of incoming mechanical power; (4) site average windspeed and, for maximum accuracy, distribution data. The description of the data includes governing limits consistent with the capacities of components. The report develops a step-by-step method for converting the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) a decision is made as to how turbine power is to be governed (it may self-govern) to ensure safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics come into play to predict long-term energy output. Most systems can be approximated by a graph-and-calculator approach: computer-generated families of coefficient curves provide data for algebraic scaling formulas. The method leads not only to energy predictions, but also to insight into the processes being modeled. Direct use of a computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load-compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
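
    Step (1) of the procedure, combining a turbine efficiency curve Cp(λ) with a load torque curve to find the operating point at each windspeed, can be sketched as a torque-balance root-finding problem. The Cp curve, pump load law, and rotor data below are all assumed, not taken from the report.

```python
# Torque-balance sketch: combine an assumed turbine efficiency curve
# Cp(lambda) with a mechanical load torque curve to predict delivered power
# as a function of windspeed.
import numpy as np
from scipy.optimize import brentq

RHO, R = 1.225, 2.0                      # air density [kg/m^3], rotor radius [m]
AREA = np.pi * R ** 2

def cp(lam):                             # assumed efficiency curve, peak at lam = 7
    return np.maximum(0.0, 0.4 * (1 - ((lam - 7.0) / 5.0) ** 2))

def turbine_torque(omega, v):
    lam = omega * R / v
    power = 0.5 * RHO * AREA * cp(lam) * v ** 3
    return power / omega

def load_torque(omega):                  # e.g. centrifugal pump: T ~ omega^2
    return 0.05 * omega ** 2

for v in (4.0, 6.0, 8.0, 10.0):
    f = lambda w: turbine_torque(w, v) - load_torque(w)
    w_lo, w_hi = 5.0 * v / R, 12.0 * v / R   # bracket the stable branch
    try:
        w_op = brentq(f, w_lo, w_hi)         # operating point: torque balance
        print(f"v = {v:4.1f} m/s: omega = {w_op:5.2f} rad/s, "
              f"power = {turbine_torque(w_op, v) * w_op / 1000:.2f} kW")
    except ValueError:
        print(f"v = {v:4.1f} m/s: no torque balance in bracket")
```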

  4. Design of a Password-Based EAP Method

    NASA Astrophysics Data System (ADS)

    Manganaro, Andrea; Koblensky, Mingyur; Loreti, Michele

    In recent years, amendments to IEEE standards for wireless networks added support for authentication algorithms based on the Extensible Authentication Protocol (EAP). Available solutions generally use digital certificates or pre-shared keys but the management of the resulting implementations is complex or unlikely to be scalable. In this paper we present EAP-SRP-256, an authentication method proposal that relies on the SRP-6 protocol and provides a strong password-based authentication mechanism. It is intended to meet the IETF security and key management requirements for wireless networks.

  5. Defining Requirements and Related Methods for Designing Sensorized Garments

    PubMed Central

    Andreoni, Giuseppe; Standoli, Carlo Emilio; Perego, Paolo

    2016-01-01

    Designing smart garments has strong interdisciplinary implications, specifically related to user and technical requirements, but also because of the very different applications they have: medicine, sport and fitness, lifestyle monitoring, workplace and job conditions analysis, etc. This paper aims to discuss some user, textile, and technical issues to be faced in sensorized clothes development. In relation to the user, the main requirements are anthropometric, gender-related, and aesthetical. In terms of these requirements, the user’s age, the target application, and fashion trends cannot be ignored, because they determine the compliance with the wearable system. Regarding textile requirements, functional factors—also influencing user comfort—are elasticity and washability, while more technical properties are the stability of the chemical agents’ effects for preserving the sensors’ efficacy and reliability, and assuring the proper duration of the product for the complete life cycle. From the technical side, the physiological issues are the most important: skin conductance, tolerance, irritation, and the effect of sweat and perspiration are key factors for reliable sensing. Other technical features such as battery size and duration, and the form factor of the sensor collector, should be considered, as they affect aesthetical requirements, which have proven to be crucial, as well as comfort and wearability. PMID:27240361

  6. Simplified tornado depressurization design methods for nuclear power plants

    SciTech Connect

    Howard, N.M.; Krasnopoler, M.I.

    1983-05-01

    A simplified approach for the calculation of tornado depressurization effects on nuclear power plant structures and components is based on a generic computer depressurization analysis for an arbitrary single volume V connected to the atmosphere by an effective vent area A. For a given tornado depressurization transient, the maximum depressurization ΔP of the volume was found to depend on the parameter V/A. The relation between ΔP and V/A can be represented by a single monotonically increasing curve for each of the three design-basis tornadoes described in the U.S. Nuclear Regulatory Commission's Regulatory Guide 1.76. These curves can be applied to most multiple-volume nuclear power plant structures by considering each volume and its controlling vent area. Where several possible flow areas could be controlling, the maximum value of V/A can be used to estimate a conservative value for ΔP. This simplified approach was shown to yield reasonably conservative results when compared to detailed computer calculations of moderately complex geometries. Treatment of severely complicated geometries, heating and ventilation systems, and multiple blowout panel arrangements was found to be beyond the limitations of the simplified analysis.
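
    In code, the simplified method might reduce to interpolating a digitized ΔP-versus-V/A design curve and, where several flow areas could be controlling, taking the maximum V/A. The curve values below are invented placeholders, not data from Regulatory Guide 1.76 or from the paper.

```python
# Sketch of applying the simplified method: interpolate a (hypothetical)
# digitized dP-vs-(V/A) design curve and take the maximum V/A over candidate
# vent areas for a conservative estimate.
import numpy as np

# hypothetical digitization of one design-basis-tornado curve: (V/A [m], dP [kPa])
v_over_a = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
delta_p = np.array([0.2, 0.9, 1.6, 5.0, 7.5, 10.0])

def conservative_dp(volume_m3, candidate_vent_areas_m2):
    ratio = max(volume_m3 / a for a in candidate_vent_areas_m2)  # max V/A
    return np.interp(ratio, v_over_a, delta_p)

# e.g. a 2000 m^3 room that may vent through a 4 m^2 or a 10 m^2 opening
print(f"dP = {conservative_dp(2000.0, [4.0, 10.0]):.1f} kPa (illustrative)")
```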

  7. The ZInEP Epidemiology Survey: background, design and methods.

    PubMed

    Ajdacic-Gross, Vladeta; Müller, Mario; Rodgers, Stephanie; Warnke, Inge; Hengartner, Michael P; Landolt, Karin; Hagenmuller, Florence; Meier, Magali; Tse, Lee-Ting; Aleksandrowicz, Aleksandra; Passardi, Marco; Knöpfli, Daniel; Schönfelder, Herdis; Eisele, Jochen; Rüsch, Nicolas; Haker, Helene; Kawohl, Wolfram; Rössler, Wulf

    2014-12-01

    This article introduces the design, sampling, field procedures and instruments used in the ZInEP Epidemiology Survey. This survey is one of six ZInEP projects (Zürcher Impulsprogramm zur nachhaltigen Entwicklung der Psychiatrie, i.e. the "Zurich Program for Sustainable Development of Mental Health Services"). It parallels the longitudinal Zurich Study with a sample comparable in age and gender, and with similar methodology, including identical instruments. Thus, it is aimed at assessing the change of prevalence rates of common mental disorders and the use of professional help and psychiatric services. Moreover, the current survey widens the spectrum of topics by including sociopsychiatric questionnaires on stigma, stress-related biological measures such as load and cortisol levels, electroencephalographic (EEG) and near-infrared spectroscopy (NIRS) examinations with various paradigms, and sociophysiological tests. The structure of the ZInEP Epidemiology Survey entails four subprojects: a short telephone screening using the SCL-27 (n of nearly 10,000), a comprehensive face-to-face interview based on the SPIKE (Structured Psychopathological Interview and Rating of the Social Consequences for Epidemiology: the main instrument of the Zurich Study) with a stratified sample (n = 1500), tests in the Center for Neurophysiology and Sociophysiology (n = 227), and a prospective study with up to three follow-up interviews and further measures (n = 157). In sum, the four subprojects of the ZInEP Epidemiology Survey deliver a large interdisciplinary database. PMID:24942564

  8. Defining Requirements and Related Methods for Designing Sensorized Garments.

    PubMed

    Andreoni, Giuseppe; Standoli, Carlo Emilio; Perego, Paolo

    2016-01-01

    Designing smart garments has strong interdisciplinary implications, specifically related to user and technical requirements, but also because of the very different applications they have: medicine, sport and fitness, lifestyle monitoring, workplace and job conditions analysis, etc. This paper aims to discuss some user, textile, and technical issues to be faced in sensorized clothes development. In relation to the user, the main requirements are anthropometric, gender-related, and aesthetical. In terms of these requirements, the user's age, the target application, and fashion trends cannot be ignored, because they determine the compliance with the wearable system. Regarding textile requirements, functional factors-also influencing user comfort-are elasticity and washability, while more technical properties are the stability of the chemical agents' effects for preserving the sensors' efficacy and reliability, and assuring the proper duration of the product for the complete life cycle. From the technical side, the physiological issues are the most important: skin conductance, tolerance, irritation, and the effect of sweat and perspiration are key factors for reliable sensing. Other technical features such as battery size and duration, and the form factor of the sensor collector, should be considered, as they affect aesthetical requirements, which have proven to be crucial, as well as comfort and wearability. PMID:27240361

  9. NMR quantum computing: applying theoretical methods to designing enhanced systems.

    PubMed

    Mawhinney, Robert C; Schreckenbach, Georg

    2004-10-01

    Density functional theory results for chemical shifts and spin-spin coupling constants are presented for compounds currently used in NMR quantum computing experiments. Specific design criteria were examined and numerical guidelines were assessed. Using a field strength of 7.0 T, protons require a coupling constant of 4 Hz with a chemical shift separation of 0.3 ppm, whereas carbon needs a coupling constant of 25 Hz for a chemical shift difference of 10 ppm, based on the minimal coupling approximation. Using these guidelines, it was determined that 2,3-dibromothiophene is limited to only two qubits; the three qubit system bromotrifluoroethene could be expanded to five qubits and the three qubit system 2,3-dibromopropanoic acid could also be used as a six qubit system. An examination of substituent effects showed that judiciously choosing specific groups could increase the number of available qubits by removing rotational degeneracies in addition to introducing specific conformational preferences that could increase (or decrease) the magnitude of the couplings. The introduction of one site of unsaturation can lead to a marked improvement in spectroscopic properties, even increasing the number of active nuclei.
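
    The numerical guidelines quoted above translate directly into a screening check; the molecule data in the example below are invented placeholders rather than the paper's computed shifts and couplings.

```python
# Screening check built from the abstract's guidelines at 7.0 T: protons need
# J >= 4 Hz and a 0.3 ppm shift separation; carbons need J >= 25 Hz and a
# 10 ppm separation.  Molecule data are hypothetical.
from itertools import combinations

THRESHOLDS = {"1H": (4.0, 0.3), "13C": (25.0, 10.0)}  # (min J [Hz], min shift sep [ppm])

def usable_qubit_pairs(nuclei, shifts, couplings):
    """nuclei: isotope per spin; shifts: ppm; couplings: {(i, j): J in Hz}, i < j."""
    pairs = []
    for i, j in combinations(range(len(nuclei)), 2):
        if nuclei[i] != nuclei[j]:
            continue                  # apply the homonuclear criteria only
        j_min, d_min = THRESHOLDS[nuclei[i]]
        if couplings.get((i, j), 0.0) >= j_min and abs(shifts[i] - shifts[j]) >= d_min:
            pairs.append((i, j))
    return pairs

nuclei = ["1H", "1H", "13C"]
shifts = [6.8, 7.4, 112.0]            # ppm, hypothetical
couplings = {(0, 1): 5.6}             # Hz, hypothetical
print(usable_qubit_pairs(nuclei, shifts, couplings))   # -> [(0, 1)]
```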

  10. A Computational Method for Materials Design of New Interfaces

    NASA Astrophysics Data System (ADS)

    Kaminski, Jakub; Ratsch, Christian; Weber, Justin; Haverty, Michael; Shankar, Sadasivan

    2015-03-01

    We propose a novel computational approach to explore the broad configurational space of possible interfaces formed from known crystal structures to find new heterostructure materials with potentially interesting properties. In a series of steps with increasing complexity and accuracy, the vast number of possible combinations is narrowed down to a limited set of the most promising and chemically compatible candidates. This systematic screening encompasses (i) establishing the geometrical compatibility along multiple crystallographic orientations of two materials, (ii) simple functions eliminating configurations with unfavorable interatomic steric conflicts, (iii) application of empirical and semi-empirical potentials estimating approximate energetics and structures, (iv) use of DFT based quantum-chemical methods to ascertain the final optimal geometry and stability of the interface in question. For efficient high-throughput screening we have developed a new method to calculate surface energies, which allows for fast and systematic treatment of materials terminated with non-polar surfaces. We show that our approach leads to a maximum error around 3% from the exact reference. The representative results from our search protocol will be presented for selected materials including semiconductors and oxides.
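
    Screening step (i), geometrical compatibility, can be approximated by a coincidence-lattice mismatch check: search small integer multiples of the two lattice constants for a pairing whose strain falls under a tolerance. The tolerance and lattice constants in the example are illustrative assumptions, not the authors' settings.

```python
# Coincidence-lattice sketch of the geometrical-compatibility screen: find
# integer multiples (m, n) minimizing the strain between two lattice constants.
def best_match(a1, a2, max_mult=6, tol=0.03):
    """Return (m, n, strain) for the best pairing within tolerance, else None."""
    best = None
    for m in range(1, max_mult + 1):
        for n in range(1, max_mult + 1):
            strain = abs(m * a1 - n * a2) / (m * a1)
            if strain <= tol and (best is None or strain < best[2]):
                best = (m, n, strain)
    return best

# hypothetical lattice constants in angstroms: 3 x 3.25 ~ 2 x 4.90
print(best_match(3.25, 4.90))   # -> (3, 2, ~0.005)
```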

  11. A Comparison of Five Statistical Methods for Analyzing Pretest-Posttest Designs.

    ERIC Educational Resources Information Center

    Hendrix, Leland J.; And Others

    1978-01-01

    Five methods for analyzing data from pretest-post-test research designs are discussed. Analysis of gain scores, with pretests as a covariate, is indicated as a superior method when the assumptions underlying covariance analysis are met. (Author/GDC)

  12. Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale

    1997-01-01

    The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
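
    For contrast, the traditional fully stressed design rule that MFUD modifies is easy to state in code: size each member so its stress equals the allowable, then (naively) scale up if a displacement limit is violated, which is exactly the sort of overdesign MFUD aims to avoid. The truss data and the crude displacement surrogate below are invented, and this is not the MFUD algorithm itself.

```python
# Traditional fully-stressed-design (FSD) resizing on a statically determinate
# truss, followed by naive uniform scaling for a displacement limit.
import numpy as np

forces = np.array([50e3, -30e3, 20e3])     # member forces [N] (assumed constant)
lengths = np.array([1.0, 1.0, 1.4])        # member lengths [m]
E, sigma_allow, disp_limit = 70e9, 150e6, 2e-3

# FSD: size each member so |stress| equals the allowable
areas = np.abs(forces) / sigma_allow

# crude displacement surrogate: sum of member elongations along a load path
def tip_displacement(a):
    return np.sum(np.abs(forces) * lengths / (E * a))

d = tip_displacement(areas)
if d > disp_limit:                          # uniform scale-up -> overdesign
    areas *= d / disp_limit

print("areas [cm^2]:", areas * 1e4,
      "displacement [mm]:", tip_displacement(areas) * 1e3)
```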

  13. Design of a Variational Multiscale Method for Turbulent Compressible Flows

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    A spectral-element framework is presented for the simulation of subsonic compressible high-Reynolds-number flows. The focus of the work is maximizing the efficiency of the computational schemes to enable unsteady simulations with a large number of spatial and temporal degrees of freedom. A collocation scheme is combined with optimized computational kernels to provide a residual evaluation with computational cost independent of order of accuracy up to 16th order. The optimized residual routines are used to develop a low-memory implicit scheme based on a matrix-free Newton-Krylov method. A preconditioner based on the finite-difference diagonalized ADI scheme is developed which maintains the low memory of the matrix-free implicit solver, while providing improved convergence properties. Emphasis on low memory usage throughout the solver development is leveraged to implement a coupled space-time DG solver which may offer further efficiency gains through adaptivity in both space and time.

  14. The C8 Health Project: Design, Methods, and Participants

    PubMed Central

    Frisbee, Stephanie J.; Brooks, A. Paul; Maher, Arthur; Flensborg, Patsy; Arnold, Susan; Fletcher, Tony; Steenland, Kyle; Shankar, Anoop; Knox, Sarah S.; Pollard, Cecil; Halverson, Joel A.; Vieira, Verónica M.; Jin, Chuanfang; Leyden, Kevin M.; Ducatman, Alan M.

    2009-01-01

    Background The C8 Health Project was created, authorized, and funded as part of the settlement agreement reached in the case of Jack W. Leach, et al. v. E.I. du Pont de Nemours & Company (no. 01-C-608 W.Va., Wood County Circuit Court, filed 10 April 2002). The settlement stemmed from the perfluorooctanoic acid (PFOA, or C8) contamination of drinking water in six water districts in two states near the DuPont Washington Works facility near Parkersburg, West Virginia. Objectives This study reports on the methods and results from the C8 Health Project, a population study created to gather data that would allow class members to know their own PFOA levels and permit subsequent epidemiologic investigations. Methods Final study participation was 69,030, enrolled over a 13-month period in 2005–2006. Extensive data were collected, including demographic data, medical diagnoses (both self-report and medical records review), clinical laboratory testing, and determination of serum concentrations of 10 perfluorocarbons (PFCs). Here we describe the processes used to collect, validate, and store these health data. We also describe survey participants and their serum PFC levels. Results The population geometric mean for serum PFOA was 32.91 ng/mL, 500% higher than previously reported for a representative American population. Serum concentrations for perfluorohexane sulfonate and perfluorononanoic acid were elevated 39% and 73% respectively, whereas perfluorooctanesulfonate was present at levels similar to those in the U.S. population. Conclusions This largest known population study of community PFC exposure permits new evaluations of associations between PFOA, in particular, and a range of health parameters. These will contribute to understanding of the biology of PFC exposure. The C8 Health Project also represents an unprecedented effort to gather basic data on an exposed population; its achievements and limitations can inform future legal settlements for populations exposed to…

  15. Matching Learning Style Preferences with Suitable Delivery Methods on Textile Design Programmes

    ERIC Educational Resources Information Center

    Sayer, Kate; Studd, Rachel

    2006-01-01

    Textile design is a subject that encompasses both design and technology; aesthetically pleasing patterns and forms must be set within technical parameters to create successful fabrics. When considering education methods in design programmes, identifying the most relevant learning approach is key to creating future successes. Yet are the most…

  16. Assessing Adaptive Instructional Design Tools and Methods in ADAPT[IT].

    ERIC Educational Resources Information Center

    Eseryel, Deniz; Spector, J. Michael

    ADAPT[IT] (Advanced Design Approach for Personalized Training - Interactive Tools) is a European project within the Information Society Technologies program that is providing design methods and tools to guide a training designer according to the latest cognitive science and standardization principles. ADAPT[IT] addresses users in two significantly…

  17. Co-Designing and Co-Teaching Graduate Qualitative Methods: An Innovative Ethnographic Workshop Model

    ERIC Educational Resources Information Center

    Cordner, Alissa; Klein, Peter T.; Baiocchi, Gianpaolo

    2012-01-01

    This article describes an innovative collaboration between graduate students and a faculty member to co-design and co-teach a graduate-level workshop-style qualitative methods course. The goal of co-designing and co-teaching the course was to involve advanced graduate students in all aspects of designing a syllabus and leading class discussions in…

  18. Application of the MNA design method to a nonlinear turbofan engine. [multivariable Nyquist array method

    NASA Technical Reports Server (NTRS)

    Leininger, G. G.

    1981-01-01

    Using nonlinear digital simulation as a representative model of the dynamic operation of the QCSEE turbofan engine, a feedback control system is designed by variable frequency design techniques. Transfer functions are generated for each of five power level settings covering the range of operation from approach power to full throttle (62.5% to 100% full power). These transfer functions are then used by an interactive control system design synthesis program to provide a closed loop feedback control using the multivariable Nyquist array and extensions to multivariable Bode diagrams and Nichols charts.
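    As a small illustration of the frequency-domain machinery involved, the sketch below evaluates the Nyquist locus of a placeholder second-order transfer function with scipy.signal; the actual QCSEE transfer-function matrices are identified from the nonlinear simulation and are not reproduced here.

        import numpy as np
        from scipy import signal

        # Placeholder element of the engine transfer-function matrix at one
        # power setting; G(s) = 2 / (s^2 + 3s + 2) is purely illustrative.
        G = signal.TransferFunction([2.0], [1.0, 3.0, 2.0])

        w = np.logspace(-2, 2, 500)          # frequency grid, rad/s
        _, H = signal.freqresp(G, w)         # complex response G(jw)

        # The Nyquist locus is the curve (Re H, Im H); closed-loop stability
        # is judged from its encirclements of the critical point -1 + 0j.
        print("closest approach to -1:", np.min(np.abs(H + 1.0)))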

  19. The Use of Qsar and Computational Methods in Drug Design

    NASA Astrophysics Data System (ADS)

    Bajot, Fania

    The application of quantitative structure-activity relationships (QSARs) has significantly impacted the paradigm of drug discovery. Following the successful utilization of linear solvation free-energy relationships (LSERs), numerous 2D- and 3D-QSAR methods have been developed, most of them based on descriptors for hydrophobicity, polarizability, ionic interactions, and hydrogen bonding. QSAR models allow for the calculation of physicochemical properties (e.g., lipophilicity), the prediction of biological activity (or toxicity), as well as the evaluation of absorption, distribution, metabolism, and excretion (ADME). In pharmaceutical research, QSAR is of particular interest in the preclinical stages of drug discovery, where it can replace tedious and costly experimentation, filter large chemical databases, and select drug candidates. However, to be part of drug discovery and development strategies, QSARs need to meet different criteria (e.g., sufficient predictivity). This chapter describes the foundation of modern QSAR in drug discovery and presents some current challenges and applications for the discovery and optimization of drug candidates.

  20. Unique Method for Generating Design Earthquake Time History Seeds

    SciTech Connect

    R. E. Spears

    2008-07-01

    A method has been developed which takes a single seed earthquake time history and produces multiple similar seed earthquake time histories. These new time histories possess important frequency and cumulative energy attributes of the original while having a correlation less than 30% (per ASCE/SEI 43-05 Section 2.4 [1]). They are produced by taking the fast Fourier transform of the original seed. The averaged amplitudes are then paired with random phase angles, and the inverse fast Fourier transform is taken to produce a new time history. The average amplitude through time is then adjusted to encourage a similar cumulative energy curve. Next, the displacement is modified to approximate the original curve using Fourier techniques. Finally, the correlation is checked to ensure it is less than 30%. This process does not guarantee that the correlation will be less than 30% for every curve in a given set, but it provides a simple tool with which a few additional iterations should produce a set of seed earthquake time histories meeting the correlation criteria.
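    The core of the procedure -- amplitude-preserving randomization of the Fourier phases followed by the correlation check -- can be sketched in a few lines of Python. The cumulative-energy and displacement adjustments described above are omitted, and the accelerogram is a toy signal.

        import numpy as np

        def random_phase_seed(accel, rng, max_corr=0.30, max_tries=50):
            # Keep the Fourier amplitudes of the original record, draw random
            # phases, and retry until the correlation criterion is met.
            amp = np.abs(np.fft.rfft(accel))
            for _ in range(max_tries):
                phase = rng.uniform(0.0, 2.0 * np.pi, amp.size)
                phase[0] = 0.0     # keep the zero-frequency term real
                phase[-1] = 0.0    # ...and the Nyquist term (even lengths)
                new = np.fft.irfft(amp * np.exp(1j * phase), n=accel.size)
                if abs(np.corrcoef(accel, new)[0, 1]) < max_corr:
                    return new
            raise RuntimeError("correlation criterion not met")

        rng = np.random.default_rng(0)
        t = np.arange(0.0, 20.0, 0.01)
        original = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.1 * t)  # toy record
        new_seed = random_phase_seed(original, rng)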

  1. A Computational Method for Materials Design of Interfaces

    NASA Astrophysics Data System (ADS)

    Kaminski, Jakub; Ratsch, Christian; Shankar, Sadasivan

    2014-03-01

    In the present work we propose a novel computational approach to explore the broad configurational space of possible interfaces formed from known crystal structures to find new heterostructure materials with potentially interesting properties. In a series of subsequent steps with increasing complexity and accuracy, the vast number of possible combinations is narrowed down to a limited set of the most promising and chemically compatible candidates. This systematic screening encompasses (i) establishing the geometrical compatibility along multiple crystallographic orientations of two (or more) materials, (ii) simple functions eliminating configurations with unfavorable interatomic steric conflicts, (iii) application of empirical and semi-empirical potentials estimating approximate energetics and structures, and (iv) use of DFT-based quantum-chemical methods to ascertain the final optimal geometry and stability of the interface in question. We also demonstrate the flexibility and efficiency of our approach depending on the size of the investigated structures and the size of the search space. Representative results from our search protocol are presented for selected materials including semiconductors, transition metal systems, and oxides.
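    Step (i), the geometrical-compatibility screen, can be illustrated with a toy filter that searches integer supercell multiples of two in-plane lattice constants for the smallest mismatch; the 5% cutoff and the candidate pairs below are assumptions for the example, not values from the paper.

        def lattice_mismatch(a1, a2, max_mult=4):
            # Smallest relative mismatch between integer multiples of two
            # in-plane lattice constants (a toy supercell search).
            return min((abs(m * a1 - n * a2) / (0.5 * (m * a1 + n * a2)), m, n)
                       for m in range(1, max_mult + 1)
                       for n in range(1, max_mult + 1))

        candidates = {("Si", "GaAs"): (5.431, 5.653),      # angstroms
                      ("Si", "GaN"): (5.431, 3.189)}
        for pair, (a1, a2) in candidates.items():
            mis, m, n = lattice_mismatch(a1, a2)
            print(pair, f"best {m}x{n} match, mismatch {mis:.1%},",
                  "keep" if mis < 0.05 else "drop")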

  2. Mixture design and treatment methods for recycling contaminated sediment.

    PubMed

    Wang, Lei; Kwok, June S H; Tsang, Daniel C W; Poon, Chi-Sun

    2015-01-01

    Conventional marine disposal of contaminated sediment presents a significant financial and environmental burden. This study aimed to recycle contaminated sediment by assessing the roles and integration of binder formulation, sediment pretreatment, curing method, and waste inclusion in stabilization/solidification. The results demonstrated that sediment blocks produced with coal fly ash and lime partially replacing cement, at a binder-to-sediment ratio of 3:7, attained sufficient 28-d compressive strength to be used as fill material for construction. The X-ray diffraction analysis revealed that hydration products (calcium hydroxide) were difficult to form at high sediment content. Thermal pretreatment of the sediment removed 90% of the indigenous organic matter, significantly increased the compressive strength, and enabled reuse as non-load-bearing masonry units. In addition, 2-h CO2 curing accelerated early-stage carbonation inside the porous structure, sequestered 5.6% of CO2 (by weight) in the sediment blocks, and achieved strength comparable to 7-d curing. Thermogravimetric analysis indicated substantial weight loss corresponding to the decomposition of poorly and well crystalline calcium carbonate. Moreover, partial replacement of the contaminated sediment by various granular waste materials notably augmented the strength of the sediment blocks. The metal leachability of the sediment blocks was minimal and acceptable for reuse. These results suggest that contaminated sediment should be viewed as a useful resource.

  4. Design studies for the transmission simulator method of experimental dynamic substructuring.

    SciTech Connect

    Mayes, Randall Lee; Arviso, Michael

    2010-05-01

    In recent years, a successful method for generating experimental dynamic substructures has been developed using an instrumented fixture, the transmission simulator. The transmission simulator method solves many of the problems associated with experimental substructuring. These solutions effectively address: (1) rotation and moment estimation at connection points; (2) providing substructure Ritz vectors that adequately span the connection motion space; and (3) adequately addressing multiple and continuous attachment locations. However, the transmission simulator method may fail if the transmission simulator is poorly designed. Four areas of the design addressed here are: (1) designating response sensor locations; (2) designating force input locations; (3) physical design of the transmission simulator; and (4) modal test design. In addition to the transmission simulator design investigations, a review of the theory with an example problem is presented.

  5. Scenario-based design: A method for connecting information system design with public health operations and emergency management

    PubMed Central

    Reeder, Blaine; Turner, Anne M

    2011-01-01

    Responding to public health emergencies requires rapid and accurate assessment of workforce availability under adverse and changing circumstances. However, public health information systems to support resource management during both routine and emergency operations are currently lacking. We applied scenario-based design as an approach to engage public health practitioners in the creation and validation of an information design to support routine and emergency public health activities. Methods: Using semi-structured interviews we identified the information needs and activities of senior public health managers of a large municipal health department during routine and emergency operations. Results: Interview analysis identified twenty-five information needs for public health operations management. The identified information needs were used in conjunction with scenario-based design to create twenty-five scenarios of use and a public health manager persona. Scenarios of use and persona were validated and modified based on follow-up surveys with study participants. Scenarios were used to test and gain feedback on a pilot information system. Conclusion: The method of scenario-based design was applied to represent the resource management needs of senior-level public health managers under routine and disaster settings. Scenario-based design can be a useful tool for engaging public health practitioners in the design process and to validate an information system design. PMID:21807120

  6. Design of Aspirated Compressor Blades Using Three-dimensional Inverse Method

    NASA Technical Reports Server (NTRS)

    Dang, T. Q.; Rooij, M. Van; Larosiliere, L. M.

    2003-01-01

    A three-dimensional viscous inverse method is extended to allow blading design with full interaction between the prescribed pressure-loading distribution and a specified transpiration scheme. Transpiration on blade surfaces and endwalls is implemented as inflow/outflow boundary conditions, and the basic modifications to the method are outlined. This paper focuses on a discussion concerning an application of the method to the design and analysis of a supersonic rotor with aspiration. Results show that an optimum combination of pressure-loading tailoring with surface aspiration can lead to a minimization of the amount of sucked flow required for a net performance improvement at design and off-design operations.

  7. Launch Vehicle Design and Optimization Methods and Priority for the Advanced Engineering Environment

    NASA Technical Reports Server (NTRS)

    Rowell, Lawrence F.; Korte, John J.

    2003-01-01

    NASA's Advanced Engineering Environment (AEE) is a research and development program that will improve collaboration among design engineers for launch vehicle conceptual design and provide the infrastructure (methods and framework) necessary to enable that environment. In this paper, three major technical challenges facing the AEE program are identified, and three specific design problems are selected to demonstrate how advanced methods can improve current design activities. References are made to studies that demonstrate these design problems and methods, and these studies will provide the detailed information and check cases to support incorporation of these methods into the AEE. This paper provides background and terminology for discussing the launch vehicle conceptual design problem so that the diverse AEE user community can participate in prioritizing the AEE development effort.

  8. A method for designing fiberglass sucker-rod strings with API RP 11L

    SciTech Connect

    Jennings, J.W.; Laine, R.E.

    1991-02-01

    This paper presents a method for using the API recommended practice for the design of sucker-rod pumping systems with fiberglass composite rod strings. The API method is useful for obtaining quick, approximate, preliminary design calculations. Equations for calculating all the composite material factors needed in the API calculations are given.

  9. Application of Skeleton Method in Interconnection of Cae Programs Used in Vehicle Design

    NASA Astrophysics Data System (ADS)

    Bucha, Jozef; Gavačová, Jana; Milesich, Tomáš

    2014-12-01

    This paper deals with the application of the skeleton method as the main element of interconnection of CAE programs involved in the process of vehicle design. This article focuses on the utilization of the skeleton method for mutual connection of CATIA V5 and ADAMS/CAR. Both programs can be used simultaneously during various stages of vehicle design.

  10. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    SciTech Connect

    Ronald Laurids Boring

    2010-11-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  11. Method and tool for generating and managing image quality allocations through the design and development process

    NASA Astrophysics Data System (ADS)

    Sparks, Andrew W.; Olson, Craig; Theisen, Michael J.; Addiego, Chris J.; Hutchins, Tiffany G.; Goodman, Timothy D.

    2016-05-01

    Performance models for infrared imaging systems require image quality parameters; optical design engineers need image quality design goals; systems engineers develop image quality allocations to test imaging systems against. It is a challenge to maintain consistency and traceability amongst the various expressions of image quality. We present a method and parametric tool for generating and managing expressions of image quality during the system modeling, requirements specification, design, and testing phases of an imaging system design and development project.

  12. Fracture control methods for space vehicles. Volume 1: Fracture control design methods. [for space shuttle configuration planning

    NASA Technical Reports Server (NTRS)

    Liu, A. F.

    1974-01-01

    A systematic approach for applying methods for fracture control in the structural components of space vehicles consists of four major steps. The first step is to define the primary load-carrying structural elements and the type of load, environment, and design stress levels acting upon them. The second step is to identify the potential fracture-critical parts by means of a selection logic flow diagram. The third step is to evaluate the safe-life and fail-safe capabilities of the specified part. The last step in the sequence is to apply the control procedures that will prevent damage to the fracture-critical parts. The fracture control methods discussed include fatigue design and analysis methods, methods for preventing crack-like defects, fracture mechanics analysis methods, and nondestructive evaluation methods. An example problem is presented for evaluation of the safe-crack-growth capability of the space shuttle crew compartment skin structure.

  13. Segment and spline synthesis optimization method for LED based freeform total-internal-reflection lens design

    NASA Astrophysics Data System (ADS)

    Chen, Enguo; Zhuang, Zhenfeng; Cai, Jin; Liu, Yan; Yu, Feihong

    2012-10-01

    This paper presents a segment and spline synthesis optimization method (SSS method) for freeform total-internal-reflection (TIR) lens design. Before the optimization starts, a series of discrete control points is used to describe the TIR lens profile. For the initial optimization, the segment method is applied to optimize a linear-segmented TIR lens. The final optimization is then achieved by the spline optimization method, after which the cubic-spline TIR lens, with the characteristics of low cost and easy fabrication, satisfies the target illumination requirements. The detailed design principle and optimization process of the SSS method are analyzed and compared in the paper. Complementing each other, the segment and spline optimization stages together realize the prescribed design and greatly improve design efficiency. As an example, a specially designed polymethyl methacrylate (PMMA) freeform TIR lens for LED general lighting demonstrates the effectiveness of the method: the uniformity increases to 67% after the segment stage and to 88% after the spline stage, and a high light output efficiency (LOE) of 99.3% is achieved within the target illumination area for the final lens system. It is believed that the SSS method can be applied to the design of other freeform illumination optics.
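    A heavily simplified sketch of the spline stage, assuming a placeholder smoothness merit in place of the actual ray-traced uniformity evaluation: control points in (angle, radius) form define a cubic-spline profile whose radii are then optimized by a derivative-free search.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.optimize import minimize

        theta = np.linspace(0.0, np.pi / 3, 8)   # control-point polar angles

        def merit(r):
            # Placeholder merit: a real SSS design would trace rays through
            # the spline profile and score illuminance uniformity instead.
            profile = CubicSpline(theta, r)
            return np.var(profile(theta, 1))     # spread of profile slopes

        r0 = 10.0 + np.linspace(0.0, 1.0, theta.size)  # segment-stage radii
        res = minimize(merit, r0, method="Nelder-Mead")
        print("optimized control-point radii:", np.round(res.x, 3))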

  15. Risk-Informed Decision Making: Application to Technology Development Alternative Selection

    NASA Technical Reports Server (NTRS)

    Dezfuli, Homayoon; Maggio, Gaspare; Everett, Christopher

    2010-01-01

    NASA NPR 8000.4A, Agency Risk Management Procedural Requirements, defines risk management in terms of two complementary processes: Risk-informed Decision Making (RIDM) and Continuous Risk Management (CRM). The RIDM process is used to inform decision making by emphasizing proper use of risk analysis to make decisions that impact all mission execution domains (e.g., safety, technical, cost, and schedule) for program/projects and mission support organizations. The RIDM process supports the selection of an alternative prior to program commitment. The CRM process is used to manage risk associated with the implementation of the selected alternative. The two processes work together to foster proactive risk management at NASA. The Office of Safety and Mission Assurance at NASA Headquarters has developed a technical handbook to provide guidance for implementing the RIDM process in the context of NASA risk management and systems engineering. This paper summarizes the key concepts and procedures of the RIDM process as presented in the handbook, and also illustrates how the RIDM process can be applied to the selection of technology investments as NASA's new technology development programs are initiated.

  16. Supersonic Aerodynamic Design Improvements of an Arrow-Wing HSCT Configuration Using Nonlinear Point Design Methods

    NASA Technical Reports Server (NTRS)

    Unger, Eric R.; Hager, James O.; Agrawal, Shreekant

    1999-01-01

    This paper is a discussion of the supersonic nonlinear point design optimization efforts at McDonnell Douglas Aerospace under the High-Speed Research (HSR) program. The baseline for these optimization efforts has been the M2.4-7A configuration which represents an arrow-wing technology for the High-Speed Civil Transport (HSCT). Optimization work on this configuration began in early 1994 and continued into 1996. Initial work focused on optimization of the wing camber and twist on a wing/body configuration and reductions of 3.5 drag counts (Euler) were realized. The next phase of the optimization effort included fuselage camber along with the wing and a drag reduction of 5.0 counts was achieved. Including the effects of the nacelles and diverters into the optimization problem became the next focus where a reduction of 6.6 counts (Euler W/B/N/D) was eventually realized. The final two phases of the effort included a large set of constraints designed to make the final optimized configuration more realistic and they were successful albeit with a loss of performance.

  17. A Systematic Composite Service Design Modeling Method Using Graph-Based Theory

    PubMed Central

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using graph-based theory to facilitate the design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate the complexity and reusability of ComSDM. This also guides the future research towards design quality measurement, such as using the ComSDM method to measure the quality of composite service design in service-oriented software systems. PMID:25928358

  19. A new method for designing dual foil electron beam forming systems. II. Feasibility of practical implementation of the method

    NASA Astrophysics Data System (ADS)

    Adrich, Przemysław

    2016-05-01

    In Part I of this work a new method for designing dual foil electron beam forming systems was introduced. In this method, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of system performance as a function of its parameters. At each point of the scan, the Monte Carlo method is used to calculate the off-axis dose profile in water, taking into account the detailed and complete geometry of the system. The new method, while computationally intensive, minimizes the involvement of the designer. In this Part II paper, the feasibility of practical implementation of the new method is demonstrated. To this end, prototype software tools were developed and applied to solve a real-life design problem. It is demonstrated that system optimization can be completed within a few hours using rather moderate computing resources. It is also demonstrated that, perhaps for the first time, the designer can gain deep insight into system behavior, such that the construction can be simultaneously optimized with respect to a number of functional characteristics besides the flatness of the off-axis dose profile. In the presented example, the system is optimized with respect to both the flatness of the off-axis dose profile and the beam transmission. A number of practical issues related to application of the new method, as well as its possible extensions, are discussed.
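    A toy version of the systematic scan, with a cheap analytic stand-in for the per-point Monte Carlo dose calculation (all coefficients invented), showing how one configuration is selected against flatness and transmission simultaneously:

        import numpy as np

        def profile_metrics(t1, t2):
            # Stand-in for one Monte Carlo run: (flatness, transmission) for
            # primary/secondary foil thicknesses t1, t2 in mm.
            x = np.linspace(-10.0, 10.0, 81)         # off-axis position, cm
            sigma = 4.0 + 8.0 * t1 + 3.0 * t2        # toy scattering width
            dose = np.exp(-0.5 * (x / sigma) ** 2)
            flatness = dose[np.abs(x) <= 5.0].min() / dose.max()
            transmission = np.exp(-0.8 * t1 - 0.5 * t2)
            return flatness, transmission

        grid = [(t1, t2) for t1 in np.linspace(0.05, 0.5, 10)
                         for t2 in np.linspace(0.05, 0.5, 10)]
        scores = {p: min(profile_metrics(*p)) for p in grid}  # worst objective
        best = max(scores, key=scores.get)
        print("selected foil thicknesses (mm):", np.round(best, 3))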

  20. 10 CFR 50.69 - Risk-informed categorization and treatment of structures, systems and components for nuclear...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., systems and components for nuclear power reactors. 50.69 Section 50.69 Energy NUCLEAR REGULATORY..., systems and components for nuclear power reactors. (a) Definitions. Risk-Informed Safety Class (RISC)-1... holder of a license to operate a light water reactor (LWR) nuclear power plant under this part; a...

  1. Design predictions and diagnostic test methods for hydronic heating systems in ASHRAE standard 152P

    SciTech Connect

    Andrews, J.W.

    1996-04-01

    A new method of test for residential thermal distribution efficiency is currently being developed under the auspices of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). The initial version of this test method is expected to have two main approaches, or "pathways," designated Design and Diagnostic. The Design Pathway will use builder's information to predict thermal distribution efficiency in new construction. The Diagnostic Pathway will use simple tests to evaluate thermal distribution efficiency in a completed house. Both forced-air and hydronic systems are included in the test method. This report describes an approach to predicting and measuring thermal distribution efficiency for residential hydronic heating systems for use in the Design and Diagnostic Pathways of the test method. As written, it is designed for single-loop systems with any type of passive radiation/convection (baseboard or radiators). Multiloop capability may be added later.

  2. Free-form surface design method for nonaxial-symmetrical reflectors producing arbitrary image patterns

    NASA Astrophysics Data System (ADS)

    Tsai, Chung-Yu

    2016-07-01

    A free-form (FF) surface design method is proposed for a nonaxial-symmetrical projector system comprising an FF reflector and a light source. The profile of the reflector is designed using a nonaxial-symmetrical FF (NFF) surface construction method such that each incident ray is directed in such a way as to form a user-specified image pattern on the target region of the image plane. The light ray paths within the projection system are analyzed using an exact analytical model and a skew-ray tracing approach. The validity of the proposed NFF design method is demonstrated by means of ZEMAX simulations. It is shown that the image pattern formed on the target region of the image plane is in good agreement with that specified by the user. The NFF method is mathematically straightforward and easily implemented in computer code. As such, it provides a useful tool for the design and analysis stages of optical systems design.
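    The elementary operation inside such a skew-ray trace is specular reflection of a ray direction off the local surface normal, r = d - 2(d . n)n; a minimal sketch:

        import numpy as np

        def reflect(d, n):
            # Law of reflection in vector form, with d the incident unit
            # direction and n the unit surface normal.
            d = d / np.linalg.norm(d)
            n = n / np.linalg.norm(n)
            return d - 2.0 * np.dot(d, n) * n

        # A ray along +x meets a reflector patch whose normal is tilted 45
        # degrees; the reflected ray leaves along +y toward the image plane.
        d = np.array([1.0, 0.0, 0.0])
        n = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2.0)
        print(reflect(d, n))                 # -> [0. 1. 0.]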

  3. Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1989-01-01

    An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.

  4. A Method for the Constrained Design of Natural Laminar Flow Airfoils

    NASA Technical Reports Server (NTRS)

    Green, Bradford E.; Whitesides, John L.; Campbell, Richard L.; Mineck, Raymond E.

    1996-01-01

    A fully automated iterative design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. Drag reductions have been realized using the design method over a range of Mach numbers, Reynolds numbers and airfoil thicknesses. The thrusts of the method are its ability to calculate a target N-Factor distribution that forces the flow to undergo transition at the desired location; the target-pressure-N-Factor relationship that is used to reduce the N-Factors in order to prolong transition; and its ability to design airfoils to meet lift, pitching moment, thickness and leading-edge radius constraints while also being able to meet the natural laminar flow constraint. The method uses several existing CFD codes and can design a new airfoil in only a few days using a Silicon Graphics IRIS workstation.

  5. Optical design method of freeform lens for a high-power extended LED source

    NASA Astrophysics Data System (ADS)

    Wang, Hong; Du, Naifeng; Wu, Yuefeng; Huang, Huamao

    2012-10-01

    In view of the limitations of LED optical design methods that treat the source as an ideal point, a new uniform-illumination optical design method using a freeform lens for a high-power LED is presented in this paper. By establishing an energy correspondence between the extended LED source and the point illumination of the receiving surface, a freeform lens optical model achieving uniform illumination on the target plane is obtained. The simulated light-intensity curves of the model are compared with those of a lens designed by an approximate point-source method. The results show that the new method effectively overcomes the shortcomings of the point-source design and controls more accurately the correspondence between light energy and the outgoing direction of the light. The illumination uniformity of the freeform lens is greater than 75%, which meets the design requirements.

  6. A randomized trial of the clinical utility of genetic testing for obesity: Design and implementation considerations

    PubMed Central

    Wang, Catharine; Gordon, Erynn S.; Stack, Catharine B.; Liu, Ching-Ti; Norkunas, Tricia; Wawak, Lisa; Christman, Michael F.; Green, Robert C.; Bowen, Deborah J.

    2013-01-01

    Background Obesity rates in the United States have escalated in recent decades and present a major challenge in public health prevention efforts. Currently, testing to identify genetic risk for obesity is readily available through several direct-to-consumer companies. Despite the availability of this type of testing, there is a paucity of evidence as to whether providing people with personal genetic information on obesity risk will facilitate or impede desired behavioral responses. Purpose We describe the key issues in the design and implementation of a randomized controlled trial examining the clinical utility of providing genetic risk information for obesity. Methods Participants are being recruited from the Coriell Personalized Medicine Collaborative, an ongoing, longitudinal research cohort study designed to determine the utility of personal genome information in health management and clinical decision-making. The primary focus of the ancillary Obesity Risk Communication Study is to determine whether genetic risk information added value to traditional communication efforts for obesity, which are based on lifestyle risk factors. The trial employs a 2x2 factorial design in order to examine the effects of providing genetic risk information for obesity, alone or in combination with lifestyle risk information, on participants’ psychological responses, behavioral intentions, health behaviors, and weight. Results The factorial design generated four experimental arms based on communication of estimated risk to participants: 1) no risk feedback (control), 2) genetic risk only, 3) lifestyle risk only, 4) both genetic and lifestyle risk (combined). Key issues in study design pertained to the selection of algorithms to estimate lifestyle risk and determination of information to be provided to participants assigned to each experimental arm to achieve a balance between clinical standards and methodological rigor. Following the launch of the trial in September 2011

  7. A procedural method for the efficient implementation of full-custom VLSI designs

    NASA Technical Reports Server (NTRS)

    Belk, P.; Hickey, N.

    1987-01-01

    An imbedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that through the judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.

  8. Design of subwavelength binary micro-optics using a gradient optimization method

    NASA Astrophysics Data System (ADS)

    Nesterenko, Dmitry V.; Kotlyar, Victor V.

    2001-12-01

    Various rigorous methods have been developed for the efficient analysis of diffractive optical elements (DOEs). We apply a gradient synthesis algorithm to design two-dimensional DOEs for the diffraction of TE-polarized electromagnetic waves, using a hybrid finite element - boundary element method. The hybrid method is capable of modeling inhomogeneous DOEs in unbounded free space in a computationally efficient manner. In this paper we discuss the application of the gradient optimization method to the matrix notation of the hybrid method. Such an application makes it possible to analyze DOE profiles with a large number of features, overcoming the limitation of calculation time growing with the number of DOE modifications. We use the gradient method to design binary-phase lenses with subwavelength features. Although we have considered only binary-phase lenses, the gradient method presented is also suitable for designing continuous-relief DOEs.

  9. Design-based mask metrology hot spot classification and recipe making through random pattern recognition method

    NASA Astrophysics Data System (ADS)

    Cui, Ying; Baik, Kiho; Gleason, Bob; Tavassoli, Malahat

    2006-10-01

    Design Based Metrology (DBM) requires an integrated process from design to metrology, and the very first and key step of this integration is to translate design CD lists into metrology measurement recipes. Design CD lists can come from different sources, such as design rule check, OPC validation, or yield analysis. These design CD lists cannot be directly used to create metrology tool recipes, since tool recipe makers usually require specific information for each CD site, or a measurement matrix. The manual process of identifying the measurement matrix for each design CD site can be very difficult, especially when the list runs to hundreds of sites or more. This paper addresses this issue and proposes a method to automate Design CD Identification (DCDI), using a new CD Pattern Vector (CDPV) library.

  10. A Government/Industry Summary of the Design Analysis Methods for Vibrations (DAMVIBS) Program

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G. (Compiler)

    1993-01-01

    The NASA Langley Research Center in 1984 initiated a rotorcraft structural dynamics program, designated DAMVIBS (Design Analysis Methods for VIBrationS), with the objective of establishing the technology base needed by the rotorcraft industry for developing an advanced finite-element-based dynamics design analysis capability for vibrations. An assessment of the program showed that the DAMVIBS Program has resulted in notable technical achievements and major changes in industrial design practice, all of which have significantly advanced the industry's capability to use and rely on finite-element-based dynamics analyses during the design process.

  11. Comparison of measured efficiencies of nine turbine designs with efficiencies predicted by two empirical methods

    NASA Technical Reports Server (NTRS)

    English, Robert E; Cavicchi, Richard H

    1951-01-01

    Empirical methods of Ainley and of Kochendorfer and Nettles were used to predict the performance of nine turbine designs. Measured and predicted performances were compared. Appropriate values of the blade-loss parameter were determined for the method of Kochendorfer and Nettles. The measured design-point efficiencies were lower than predicted by as much as 0.09 (Ainley) and 0.07 (Kochendorfer and Nettles). For the method of Kochendorfer and Nettles, appropriate values of the blade-loss parameter ranged from 0.63 to 0.87, and the off-design performance was accurately predicted.

  12. Needs and Opportunities for Uncertainty-Based Multidisciplinary Design Methods for Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Hemsch, Michael J.; Hilburger, Mark W.; Kenny, Sean P; Luckring, James M.; Maghami, Peiman; Padula, Sharon L.; Stroud, W. Jefferson

    2002-01-01

    This report consists of a survey of the state of the art in uncertainty-based design together with recommendations for a base research activity in this area for the NASA Langley Research Center. This report identifies the needs and opportunities for computational and experimental methods that provide accurate, efficient solutions to nondeterministic multidisciplinary aerospace vehicle design problems. Barriers to the adoption of uncertainty-based design methods are identified, and the benefits of the use of such methods are explained. Particular research needs are listed.

  13. Layer-by-layer design method for soft-X-ray multilayers

    NASA Technical Reports Server (NTRS)

    Yamamoto, Masaki; Namioka, Takeshi

    1992-01-01

    A new design method effective for a nontransparent system has been developed for soft-X-ray multilayers with the aid of graphic representation of the complex amplitude reflectance in a Gaussian plane. The method provides an effective means of attaining the absolute maximum reflectance on a layer-by-layer basis and also gives clear insight into the evolution of the amplitude reflectance on a multilayer as it builds up. An optical criterion is derived for the selection of a proper pair of materials needed for designing a high-reflectance multilayer. Some examples are given to illustrate the usefulness of this design method.
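    The layer-by-layer evolution of the complex amplitude reflectance can be sketched with the standard normal-incidence recursion, in which each newly deposited layer combines its interface Fresnel coefficient with the reflectance of the stack beneath it. The Mo/Si-like optical constants below are illustrative, not data from the paper; complex indices make the stack absorbing, matching the nontransparent regime discussed above.

        import numpy as np

        def fresnel(n_i, n_t):
            # Normal-incidence amplitude reflection coefficient.
            return (n_i - n_t) / (n_i + n_t)

        def layer_by_layer_r(layers, n_sub, n_amb, lam):
            # layers: (complex index, thickness) pairs, substrate to surface.
            r = fresnel(layers[0][0], n_sub)    # first layer on the substrate
            for j, (n, t) in enumerate(layers):
                n_above = layers[j + 1][0] if j + 1 < len(layers) else n_amb
                phase = np.exp(4j * np.pi * n * t / lam)  # round trip in layer
                r_if = fresnel(n_above, n)
                r = (r_if + r * phase) / (1 + r_if * r * phase)
            return r

        # Twenty illustrative Mo/Si-like bilayers near 13.5 nm.
        stack = [(0.92 + 0.006j, 2.8), (0.999 + 0.002j, 4.1)] * 20
        r = layer_by_layer_r(stack, n_sub=0.97 + 0.01j, n_amb=1.0, lam=13.5)
        print("reflectance |r|^2 =", round(abs(r) ** 2, 3))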

  14. Key techniques and applications of adaptive growth method for stiffener layout design of plates and shells

    NASA Astrophysics Data System (ADS)

    Ding, Xiaohong; Ji, Xuerong; Ma, Man; Hou, Jianyun

    2013-11-01

    The application of the adaptive growth method has been limited because several key techniques in the design process need manual intervention by designers. Key techniques of the method, including ground structure construction and seed selection, are studied so as to improve the effectiveness and applicability of the adaptive growth method in stiffener layout design optimization of plates and shells. Three schemes of ground structures, composed of different shell and beam elements, are proposed. It is found that the main stiffener layouts resulting from different ground structures are almost the same, but the ground structure composed of 8-node shell elements and both 3-node and 2-node beam elements results in the clearest stiffener layout and has good adaptability and low computational cost. An automatic seed selection approach is proposed, based on the rules that seeds should be positioned where the structural strain energy is large (for the minimum compliance problem) and should satisfy the dispersancy requirement. The adaptive growth method with the suggested key techniques is integrated into an ANSYS-based program, which provides a design tool for stiffener layout design optimization of plates and shells. Typical design examples, including plate and shell structures designed for minimum compliance and maximum buckling stability, are illustrated. In addition, as a practical mechanical design example, the stiffener layout of an inlet structure for a large-scale electrostatic precipitator is also demonstrated. The design results show that the adaptive growth method integrated with the suggested key techniques can effectively and flexibly handle stiffener layout design problems for plates and shells with complex geometrical shapes and loading conditions, achieving various design objectives; it thus provides a new solution method for engineering structural topology design optimization.

  15. Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods

    PubMed Central

    Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.

    2013-01-01

    Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822
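    A one-dimensional, small-tip toy version of the interleaved scheme, with an invented target pattern and candidate grid and with off-resonance ignored: the greedy step picks the candidate spoke location that most reduces the residual, and the local step then polishes all chosen locations with a derivative-free search, re-fitting the complex weights by least squares at every evaluation.

        import numpy as np
        from scipy.optimize import minimize

        x = np.linspace(-0.1, 0.1, 15)                 # 1-D field of view, m
        target = np.where(np.abs(x) < 0.05, 1.0, 0.0)  # desired flip pattern

        def residual(ks):
            # Small-tip model: each spoke at k contributes e^{i x k}; the
            # best complex weights for these spokes come from least squares.
            A = np.exp(1j * np.outer(x, ks))
            w, *_ = np.linalg.lstsq(A, target.astype(complex), rcond=None)
            return np.linalg.norm(A @ w - target)

        chosen = []
        candidates = np.linspace(-300.0, 300.0, 61)    # kx grid, rad/m
        for _ in range(4):                             # place four spokes
            best = min(candidates,
                       key=lambda k: residual(np.array(chosen + [k])))
            chosen.append(best)                        # greedy step
            res = minimize(residual, chosen, method="Nelder-Mead")
            chosen = list(res.x)                       # local polish step
        print("spoke locations (rad/m):", np.round(chosen, 1))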

  16. Use of epidemiologic data in Integrated Risk Information System (IRIS) assessments

    SciTech Connect

    Persad, Amanda S.; Cooper, Glinda S.

    2008-11-15

    In human health risk assessment, information from epidemiologic studies is typically utilized in the hazard identification step of the risk assessment paradigm. However, in the assessment of many chemicals by the Integrated Risk Information System (IRIS), epidemiologic data, both observational and experimental, have also been used in the derivation of toxicological risk estimates (i.e., reference doses [RfD], reference concentrations [RfC], oral cancer slope factors [CSF] and inhalation unit risks [IUR]). Of the 545 health assessments posted on the IRIS database as of June 2007, 44 assessments derived non-cancer or cancer risk estimates based on human data. RfD and RfC calculations were based on a spectrum of endpoints from changes in enzyme activity to specific neurological or dermal effects. There are 12 assessments with IURs based on human data, two assessments that extrapolated human inhalation data to derive CSFs and one that used human data to directly derive a CSF. Lung or respiratory cancer is the most common endpoint for cancer assessments based on human data. To date, only one chemical, benzene, has utilized human data for derivation of all three quantitative risk estimates (i.e., RfC, RfD, and dose-response modeling for cancer assessment). Through examples from the IRIS database, this paper will demonstrate how epidemiologic data have been used in IRIS assessments for both adding to the body of evidence in the hazard identification process and in the quantification of risk estimates in the dose-response component of the risk assessment paradigm.

  17. Hybrid airfoil design methods for full-scale ice accretion simulation

    NASA Astrophysics Data System (ADS)

    Saeed, Farooq

    The objective of this thesis is to develop a design method together with a design philosophy that allows the design of "subscale" or "hybrid" airfoils that simulate fullscale ice accretions. These subscale or hybrid airfoils have full-scale leading edges and redesigned aft-sections. A preliminary study to help develop a design philosophy for the design of hybrid airfoils showed that hybrid airfoils could be designed to simulate full-scale airfoil droplet-impingement characteristics and, therefore, ice accretion. The study showed that the primary objective in such a design should be to determine the aft section profile that provides the circulation necessary for simulating full-scale airfoil droplet-impingement characteristics. The outcome of the study, therefore, reveals circulation control as the main design variable. To best utilize this fact, this thesis describes two innovative airfoil design methods for the design of hybrid airfoils. Of the two design methods, one uses a conventional flap system while the other only suggests the use of boundary-layer control through slot-suction on the airfoil upper surface as a possible alternative for circulation control. The formulation of each of the two design methods is described in detail, and the results from each method are validated using wind-tunnel test data. The thesis demonstrates the capabilities of each method with the help of specific design examples highlighting their application potential. In particular, the flap-system based hybrid airfoil design method is used to demonstrate the design of a half-scale hybrid model of a full-scale airfoil that simulates full-scale ice accretion at both the design and off-design conditions. The full-scale airfoil used is representative of a scaled modern business-jet main wing section. The study suggests some useful advantages of using hybrid airfoils as opposed to full-scale airfoils for a better understanding of the ice accretion process and the related issues. Results

  18. A New Method to Design Cam Used in Automobile Heating, Ventilating and Cooling System

    NASA Astrophysics Data System (ADS)

    Singh, B.; Singh, D.; Saini, J. S.

    2012-10-01

    With the automotive air-conditioning industry aiming at better levels of quality, cost effectiveness and short time to market, the need for simulation is at an all time high. In the present study, the airflow control mechanism of an automotive heating, ventilating and cooling module for opening various doors/dampers were kinematically analyzed. A new method for cam design was developed which is faster and simpler than the existing oscillating link method. The existing design was modified for the same output using the new cam design method. It is shown that the torque required in the modified design is lesser than that in the existing design, thus lowering the effort required to rotate the cam from the control panel.

  19. Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors

    SciTech Connect

    Sale, D.; Jonkman, J.; Musial, W.

    2009-08-01

    This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.
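    A skeletal version of the coupling, with an invented merit function standing in for the blade-element momentum evaluation and the cavitation constraint folded in as a penalty; only the GA scaffolding (truncation selection plus Gaussian mutation) reflects the structure of the report's optimizer.

        import numpy as np

        rng = np.random.default_rng(1)
        n_sta = 5                          # radial stations along the blade

        def merit(design):
            # Placeholder for the blade-element momentum code: rewards a
            # smooth, tapering chord and penalizes an overly aggressive tip
            # twist as a stand-in for the cavitation check.
            chord, twist = design[:n_sta], design[n_sta:]
            taper = -np.sum((chord - np.linspace(1.0, 0.3, n_sta)) ** 2)
            smooth = -np.sum(np.diff(twist) ** 2)
            cavitation = -10.0 * max(0.0, twist[-1] - 10.0)
            return taper + 0.01 * smooth + cavitation

        pop = rng.uniform(0.0, 20.0, (40, 2 * n_sta))
        for _ in range(200):
            fit = np.array([merit(d) for d in pop])
            parents = pop[np.argsort(fit)[-20:]]          # keep the best half
            children = parents + rng.normal(0.0, 0.5, parents.shape)
            pop = np.vstack([parents, children])          # next generation
        best = pop[np.argmax([merit(d) for d in pop])]
        print("chord:", np.round(best[:n_sta], 2),
              "twist:", np.round(best[n_sta:], 2))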

  20. Multidisciplinary Design Optimization for Aeropropulsion Engines and Solid Modeling/Animation via the Integrated Forced Methods

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The grant closure report is organized into the following chapters: Chapter 1 describes the two research areas, design optimization and solid mechanics. Ten journal publications are listed in Chapter 2. Five highlights are the subject matter of Chapter 3. CHAPTER 1. The Design Optimization Test Bed CometBoards. CHAPTER 2. Solid Mechanics: Integrated Force Method of Analysis. CHAPTER 3. Five Highlights: Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft; Neural Network and Regression Soft Model Extended for the PX-300 Aircraft Engine; Engine with Regression and Neural Network Approximators Designed; Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design; Neural Network and Regression Approximations Used in Aircraft Design.

  1. Development of a conceptual flight vehicle design weight estimation method library and documentation

    NASA Astrophysics Data System (ADS)

    Walker, Andrew S.

    The state of the art in estimating the volumetric size and mass of flight vehicles is held today by an elite group of engineers in the Aerospace Conceptual Design Industry. This is not a skill readily accessible or taught in academia. To estimate flight vehicle mass properties, many aerospace engineering students are encouraged to read the latest design textbooks, learn how to use a few basic statistical equations, and plunge into the details of parametric mass properties analysis. Specifications for and a prototype of a standardized engineering "tool-box" of conceptual and preliminary design weight estimation methods were developed to manage the growing and ever-changing body of weight estimation knowledge. This also bridges the gap in Mass Properties education for aerospace engineering students. The Weight Method Library will also be used as a living document for use by future aerospace students. This "tool-box" consists of a weight estimation method bibliography containing unclassified, open-source literature for conceptual and preliminary flight vehicle design phases. Transport aircraft validation cases have been applied to each entry in the AVD Weight Method Library in order to provide a sense of context and applicability to each method. The weight methodology validation results indicate consensus and agreement of the individual methods. This generic specification of a method library will be applicable for use by other disciplines within the AVD Lab, Post-Graduate design labs, or engineering design professionals.

  2. Best estimate method versus evaluation method: a comparison of two techniques in evaluating seismic analysis and design. Technical report

    SciTech Connect

    Bumpus, S.E.; Johnson, J.J.; Smith, P.D.

    1980-07-01

    The concept of how two techniques, the Best Estimate Method and the Evaluation Method, may be applied to the traditional seismic analysis and design of a nuclear power plant is introduced. Only the four links of the seismic analysis and design methodology chain (SMC)--seismic input, soil-structure interaction, major structural response, and subsystem response--are considered. The objective is to evaluate the compounding of conservatisms in the seismic analysis and design of nuclear power plants, to provide guidance for judgments in the SMC, and to concentrate the evaluation on that part of the seismic analysis and design which is familiar to the engineering community. An example applies the effects of three-dimensional excitations to the model of a nuclear power plant structure. The example demonstrates how conservatisms accrue by coupling two links in the SMC and comparing those results to the effects of one link alone. The utility of employing the Best Estimate Method vs. the Evaluation Method is also demonstrated.

  3. The Transformation Design Method and Metamaterials: Tools to Realize Invisibility and Other Interesting Effects

    SciTech Connect

    Schurig, David

    2007-02-21

    I will explain how the transformation design method can yield a material specification that possesses the same electromagnetic behavior as a fairly general set of imagined space-time topologies. This method has been used to design invisibility cloaks, but it is quite general and can be used to design a wide variety of interesting devices that guide, concentrate or shape electromagnetic fields in ways that would be difficult to manage with other design methodologies. Applications range from stealth to energy conversion and distribution to wireless communications to biomedical imaging. The drawback of the method is the complexity of the material specifications that it produces, which are in general anisotropic and inhomogeneous. Only with recent advances in the field of metamaterials can these specifications be realized. I will discuss how metamaterials accomplish this and what their limitations are (e.g., bandwidth, loss, and frequency range). I will discuss in detail the recent implementation of an invisibility cloak in the microwave spectrum.
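    The transformation design rule itself is compact: a coordinate map with Jacobian J turns a vacuum region into a medium with permittivity (and permeability) eps' = J J^T / det(J). A sketch for the in-plane block of the 2-D cylindrical cloak map, using a numerical Jacobian (the radii are arbitrary):

        import numpy as np

        R1, R2 = 1.0, 2.0                 # inner and outer cloak radii

        def cloak_map(p):
            # Radial map compressing the disk r < R2 into the annulus
            # R1 < r' < R2 (valid for 0 < r <= R2).
            r = np.linalg.norm(p)
            return p * (R1 / r + (R2 - R1) / R2)

        def transformed_eps(p, h=1e-6):
            # eps' = J eps J^T / det(J) with eps = I (vacuum); the tensor
            # applies at the image point cloak_map(p). In-plane block only.
            J = np.column_stack([(cloak_map(p + h * e) - cloak_map(p - h * e))
                                 / (2.0 * h) for e in np.eye(2)])
            return J @ J.T / np.linalg.det(J)

        print(np.round(transformed_eps(np.array([1.2, 0.4])), 3))

    The printed tensor is anisotropic and varies with position, which is exactly the complexity noted above that metamaterials are needed to realize.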

  4. The DDBD Method In The A-Seismic Design of Anchored Diaphragm Walls

    SciTech Connect

    Manuela, Cecconi; Vincenzo, Pane; Sara, Vecchietti

    2008-07-08

    The development of displacement-based approaches for earthquake engineering design appears to be very useful and capable of providing improved reliability by directly comparing computed response and expected structural performance. In particular, the design procedure known as the Direct Displacement Based Design (DDBD) method, which has been developed in structural engineering over the past ten years in an attempt to mitigate some of the deficiencies in current force-based design methods, has been shown to be very effective and promising ([1], [2]). The first attempts to apply the procedure to geotechnical engineering and, in particular, earth retaining structures are discussed in [3], [4] and [5]. In this field, however, the outcomes of the research need to be investigated further in many respects. The paper focuses on the application of the DDBD method to anchored diaphragm walls. The results of the DDBD method are discussed in detail in the paper and compared to those obtained from conventional pseudo-static analyses.

  5. A new statistical method for design and analyses of component tolerance

    NASA Astrophysics Data System (ADS)

    Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam

    2016-09-01

    Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers typically assume known distributions, most often the normal distribution. When the statistical distribution of the variable of interest is unknown, however, a different statistical method is needed to design tolerances. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerances, and we use the percentile method (PM) to estimate the distribution parameters. The findings indicate that, when the distribution of the component data is unknown, the proposed method can expedite the design of component tolerances. Moreover, in the case of assembled sets, a wider tolerance for each component can be used while meeting the same target performance.
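
    As a rough illustration of the approach, the sketch below (not the authors' code; the data, starting values, and the least-squares percentile match are our own assumptions) fits the Ramberg-Schmeyer form of the generalized lambda distribution to sample percentiles and reads natural tolerance limits off the fitted quantile function:

        import numpy as np
        from scipy.optimize import least_squares

        def gld_quantile(u, lam):
            """Ramberg-Schmeyer GLD quantile: Q(u) = l1 + (u**l3 - (1-u)**l4) / l2."""
            l1, l2, l3, l4 = lam
            return l1 + (u**l3 - (1.0 - u)**l4) / l2

        def fit_gld_percentiles(sample, probs=np.linspace(0.05, 0.95, 19)):
            """Least-squares match of the GLD quantile function to sample percentiles."""
            target = np.quantile(sample, probs)
            res = least_squares(lambda lam: gld_quantile(probs, lam) - target,
                                x0=[np.median(sample), 1.0 / np.std(sample), 0.1, 0.1],
                                bounds=([-np.inf, 1e-8, -0.9, -0.9],
                                        [np.inf, np.inf, 5.0, 5.0]))
            return res.x

        rng = np.random.default_rng(0)
        dims = 9.6 + rng.gamma(shape=8.0, scale=0.05, size=500)   # skewed "unknown" data
        lam = fit_gld_percentiles(dims)
        lo, hi = gld_quantile(np.array([0.00135, 0.99865]), lam)  # ~ +/-3 sigma coverage
        print(f"natural tolerance limits: [{lo:.4f}, {hi:.4f}]")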

  6. The ROM Design with Half Grouping Compression Method for Chip Area and Power Consumption Reduction

    NASA Astrophysics Data System (ADS)

    Jung, Ki-Sang; Kim, Kang-Jik; Kim, Young-Eun; Chung, Jin-Gyun; Pyun, Ki-Hyun; Lee, Jong-Yeol; Jeong, Hang-Geun; Cho, Seong-Ik

    In memory design, the key issues are small size and low power. Most of the power used in a ROM is consumed by line capacitances, such as those of the address lines, word lines, bit lines, and decoder. This paper presents a ROM design using a novel half-grouping (HG) compression method that reduces the parasitic capacitance of the bit lines and the area of the row decoder, thereby reducing power consumption and chip area. The ROM design result for a 512-point FFT block shows that the proposed method reduces area by 40.6%, power by 42.12%, and transistor count by 37.82% in comparison with the conventional method. The ROM designed with the proposed method is implemented in a 0.35µm CMOS process. It consumes 5.8mW at 100MHz with a single 3.3V power supply.

  7. The Importance of Adhering to Details of the Total Design Method (TDM) for Mail Surveys.

    ERIC Educational Resources Information Center

    Dillman, Don A.; And Others

    1984-01-01

    The empirical effects of adherence to details of the Total Design Method (TDM) approach to the design of mail surveys are discussed, based on the implementation of a common survey in 11 different states. The results suggest that greater adherence results in higher response, especially in the later stages of the TDM. (BW)

  8. Paragogy and Flipped Assessment: Experience of Designing and Running a MOOC on Research Methods

    ERIC Educational Resources Information Center

    Lee, Yenn; Rofe, J. Simon

    2016-01-01

    This study draws on the authors' first-hand experience of designing, developing and delivering (3Ds) a massive open online course (MOOC) entitled "Understanding Research Methods" since 2014, largely but not exclusively for learners in the humanities and social sciences. The greatest challenge facing us was to design an assessment…

  9. The Influence of Cognitive Domain Content Levels and Gender on Designer Judgments Regarding Useful Instructional Methods

    ERIC Educational Resources Information Center

    Honebein, Peter C.; Honebein, Cass H.

    2014-01-01

    Instructional theory is intended to guide instructional designers in selecting the best instructional methods for a given situation. There have been numerous qualitative investigations into how instructional designers make decisions and the alignment of those decisions with theoretical influences. The purpose of this research is to more…

  10. A Typology of Mixed Methods Sampling Designs in Social Science Research

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.

    2007-01-01

    This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…

  11. Rationale, Design, and Methods of the Preschool ADHD Treatment Study (PATS)

    ERIC Educational Resources Information Center

    Kollins, Scott; Greenhill, Laurence; Swanson, James; Wigal, Sharon; Abikoff, Howard; McCracken, James; Riddle, Mark; McGough, James; Vitiello, Benedetto; Wigal, Tim; Skrobala, Anne; Posner, Kelly; Ghuman, Jaswinder; Davies, Mark; Cunningham, Charles; Bauzo, Audrey

    2006-01-01

    Objective: To describe the rationale and design of the Preschool ADHD Treatment Study (PATS). Method: PATS was a National Institute of Mental Health-funded, multicenter, randomized, efficacy trial designed to evaluate the short-term (5 weeks) efficacy and long-term (40 weeks) safety of methylphenidate (MPH) in preschoolers with…

  12. Connecting Generations: Developing Co-Design Methods for Older Adults and Children

    ERIC Educational Resources Information Center

    Xie, Bo; Druin, Allison; Fails, Jerry; Massey, Sheri; Golub, Evan; Franckel, Sonia; Schneider, Kiki

    2012-01-01

    As new technologies emerge that can bring older adults together with children, little has been discussed by researchers concerning the design methods used to create these new technologies. Giving both children and older adults a voice in a shared design process comes with many challenges. This paper details an exploratory study focusing on…

  13. Curiosity and Pedagogy: A Mixed-Methods Study of Student Experiences in the Design Studio

    ERIC Educational Resources Information Center

    Smith, Korydon H.

    2010-01-01

    Curiosity is often considered the foundation of learning. There is, however, little understanding of how (or if) pedagogy in higher education affects student curiosity, especially in the studio setting of architecture, interior design, and landscape architecture. This study used mixed-methods to investigate curiosity among design students in the…

  14. An Empirical Comparison of Five Linear Equating Methods for the NEAT Design

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Mroch, Andrew A.; Kane, Michael T.; Ripkey, Douglas R.

    2009-01-01

    In this study, a data base containing the responses of 40,000 candidates to 90 multiple-choice questions was used to mimic data sets for 50-item tests under the "nonequivalent groups with anchor test" (NEAT) design. Using these smaller data sets, we evaluated the performance of five linear equating methods for the NEAT design with five levels of…
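
    For orientation, one member of the linear equating family evaluated in such studies, chained linear equating, can be sketched on synthetic data as follows (the scores, group sizes, and anchor model below are invented, not the study's data base):

        import numpy as np

        rng = np.random.default_rng(1)
        th1 = rng.normal(0.0, 1.0, 4000)            # group 1 ability (takes form X + anchor V)
        th2 = rng.normal(0.2, 1.0, 4000)            # group 2 ability (takes form Y + anchor V)
        x  = 30 + 6 * th1 + rng.normal(0, 2, 4000)  # form X scores
        v1 = 12 + 3 * th1 + rng.normal(0, 1, 4000)  # anchor scores, group 1
        y  = 32 + 6 * th2 + rng.normal(0, 2, 4000)  # form Y scores
        v2 = 12 + 3 * th2 + rng.normal(0, 1, 4000)  # anchor scores, group 2

        def link(mu_a, sd_a, mu_b, sd_b, s):
            """Linear (mean/sigma) transformation from scale a onto scale b."""
            return mu_b + (sd_b / sd_a) * (s - mu_a)

        scores = np.arange(0, 51)                                        # 50-item form X
        v_equiv = link(x.mean(), x.std(), v1.mean(), v1.std(), scores)   # X -> V (group 1)
        y_equiv = link(v2.mean(), v2.std(), y.mean(), y.std(), v_equiv)  # V -> Y (group 2)
        print("Y-equivalents of X = 20, 30, 40:", np.round(y_equiv[[20, 30, 40]], 2))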

  15. A Comparison of Diary Method Variations for Enlightening Form Generation in the Design Process

    ERIC Educational Resources Information Center

    Babapour, Maral; Rehammar, Bjorn; Rahe, Ulrike

    2012-01-01

    This paper presents two studies in which an empirical approach was taken to understand and explain form generation and decisions taken in the design process. In particular, the activities addressing aesthetic aspects when exteriorising form ideas in the design process have been the focus of the present study. Diary methods were the starting point…

  16. Treatment of Early-Onset Schizophrenia Spectrum Disorders (TEOSS): Rationale, Design, and Methods

    ERIC Educational Resources Information Center

    McClellan, Jon; Sikich, Linmarie; Findling, Robert L.; Frazier, Jean A.; Vitiello, Benedetto; Hlastala, Stefanie A.; Williams, Emily; Ambler, Denisse; Hunt-Harrison, Tyehimba; Maloney, Ann E.; Ritz, Louise; Anderson, Robert; Hamer, Robert M.; Lieberman, Jeffrey A.

    2007-01-01

    Objective: The Treatment of Early Onset Schizophrenia Spectrum Disorders Study is a publicly funded clinical trial designed to compare the therapeutic benefits, safety, and tolerability of risperidone, olanzapine, and molindone in youths with early-onset schizophrenia spectrum disorders. The rationale, design, and methods of the Treatment of Early…

  17. 77 FR 32632 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-01

    ... Hot Block Dilute Acid and Hydrogen Peroxide Filter Extraction'' In this method, total suspended... and nitric acid and two aliquots of hydrogen peroxide, for a total of two and a half hours extraction... Coupled Plasma Mass Spectrometry (ICP-MS) with Hot Block Dilute Acid and Hydrogen Peroxide...

  18. Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1989-01-01

    Progress has been made in the direct-inverse wing design method in curvilinear coordinates. This includes the remedying of a spanwise oscillation problem and the assessment of the effects of grid skewness, viscous interaction, and the initial airfoil section on the final design. It was found that, in response to the spanwise oscillation problem, designing at every other spanwise station produced the best results for the cases presented; that a smoothly varying grid is especially needed for accurate design at the wing tip; that the boundary layer displacement thicknesses must be included in a successful wing design; that the design of high and medium aspect ratio wings is possible with this code; and that the final airfoil section designed is fairly independent of the initial section.

  19. Compact lens design for LED chip array using supporting surface method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaohui; Chen, Chen

    2015-10-01

    Because of the low luminous flux of a single LED, LED chip arrays play an important role in achieving high luminous flux in many application fields, such as automotive lighting, street lighting, sensing and imaging. However, an LED chip array is an extended source rather than a point source like a conventional single LED, so lens design for LED chip arrays must be reconsidered to accommodate this difference. In recent years, with the development of illumination optics, several optical design methods for extended sources have been proposed and improved. When a point-source design method is adopted for an LED chip array requiring high flux and high uniformity, the resulting lens is so large that the advantage of the small LED chip is lost. The supporting surface method is effective and commonly used; however, it does not converge when solving the near-field refractor problem for a point source. Based on the properties of the Cartesian oval, a modified method is proposed and its convergence is verified by Monte Carlo ray tracing. The number of Cartesian ovals and the size of the lens can be kept firmly under control during the design, while in general the ratio between the sizes of the lens and the chip is greater than 5. Based on the modified supporting surface method, a compact lens design method for extended light sources is constructed. An LED illumination lens was designed and fabricated using this method, and the simulation results show that it achieves uniform illumination on the target surface.

  20. Supercritical blade design on stream surfaces of revolution with an inverse method

    NASA Technical Reports Server (NTRS)

    Schmidt, E.; Grein, H.-D.

    1991-01-01

    A method to solve the inverse problem of supercritical blade-to-blade flow on stream surfaces of revolution with variable radius and variable stream surface thickness in a relative system is described. Some aspects of shockless design and of leading edge resolution in the numerical procedure are depicted. Some supercritical compressor cascades were designed and their complete flow field results were compared with computations of two different analysis methods.

  1. Fast Numerical Methods for the Design of Layered Photonic Structures with Rough Interfaces

    NASA Technical Reports Server (NTRS)

    Komarevskiy, Nikolay; Braginsky, Leonid; Shklover, Valery; Hafner, Christian; Lawson, John

    2011-01-01

    Modified boundary conditions (MBC) and a multilayer approach (MA) are proposed as fast and efficient numerical methods for the design of 1D photonic structures with rough interfaces. These methods are applicable to structures composed of materials with arbitrary permittivity tensors. MBC and MA are numerically validated on different types of interface roughness and permittivities of the constituent materials. The proposed methods can be combined with the 4x4 scattering matrix method as a field solver and an evolutionary strategy as an optimizer. The resulting optimization procedure is fast, accurate, numerically stable, and can be used to design structures for various applications.
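
    For context, the kind of 1D multilayer field computation that such fast methods accelerate can be sketched with the standard characteristic-matrix (transfer matrix) method for a smooth stack at normal incidence (the mirror parameters below are assumed for illustration):

        import numpy as np

        def stack_reflectance(n_layers, d_layers, n_in, n_out, wavelengths):
            """Reflectance of a smooth 1D stack; d in the same units as wavelength."""
            R = []
            for lam in wavelengths:
                M = np.eye(2, dtype=complex)
                for n, d in zip(n_layers, d_layers):
                    delta = 2.0 * np.pi * n * d / lam        # phase thickness of the layer
                    L = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
                    M = M @ L
                B, C = M @ np.array([1.0, n_out])            # admittances, non-magnetic media
                r = (n_in * B - C) / (n_in * B + C)
                R.append(abs(r) ** 2)
            return np.array(R)

        # Quarter-wave Bragg mirror at 550 nm: 10 pairs of high/low index layers on glass.
        lam0, n_hi, n_lo, pairs = 550.0, 2.3, 1.45, 10
        n_layers = [n_hi, n_lo] * pairs
        d_layers = [lam0 / (4 * n_hi), lam0 / (4 * n_lo)] * pairs
        wl = np.linspace(400, 700, 7)
        print(np.round(stack_reflectance(n_layers, d_layers, 1.0, 1.52, wl), 3))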

  2. The value of value of information: best informing research design and prioritization using current methods.

    PubMed

    Eckermann, Simon; Karnon, Jon; Willan, Andrew R

    2010-01-01

    Value of information (VOI) methods have been proposed as a systematic approach to inform optimal research design and prioritization. Four related questions arise that VOI methods could address. (i) Is further research for a health technology assessment (HTA) potentially worthwhile? (ii) Is the cost of a given research design less than its expected value? (iii) What is the optimal research design for an HTA? (iv) How can research funding be best prioritized across alternative HTAs? Following Occam's razor, we consider the usefulness of VOI methods in informing questions 1-4 relative to their simplicity of use. Expected value of perfect information (EVPI) with current information, while simple to calculate, is shown to provide neither a necessary nor a sufficient condition to address question 1, given that what EVPI needs to exceed varies with the cost of the research design, which can range from very large down to negligible. Hence, for any given HTA, EVPI does not discriminate, as it can be large while further research is not worthwhile, or small while further research is worthwhile. In contrast, each of questions 1-4 is shown to be fully addressed (necessary and sufficient) where VOI methods are applied to maximize the expected value of sample information (EVSI) minus expected costs across designs. In comparing the complexity of VOI methods, applying the central limit theorem (CLT) simplifies analysis to enable easy estimation of EVSI and the optimal overall research design, and has been shown to outperform bootstrapping, particularly with small samples. Consequently, VOI methods applying the CLT to inform optimal overall research design satisfy Occam's razor in both improving decision making and reducing complexity. Furthermore, they enable consideration of relevant decision contexts, including option value and opportunity cost of delay, time, imperfect implementation and optimal design across jurisdictions. More complex VOI methods such as bootstrapping of the expected value of
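
    A minimal Monte Carlo sketch of the EVPI quantity discussed above, for a two-option adoption decision on the net-benefit scale (the posterior below is an assumed example, not from the paper):

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000
        # Incremental net benefit of the new treatment vs standard care, per patient,
        # under current (uncertain) evidence -- an assumed normal posterior.
        inb = rng.normal(loc=150.0, scale=400.0, size=n)

        nb_adopt = inb                  # net benefit if we adopt the new treatment
        nb_reject = np.zeros(n)         # net benefit of the status quo (reference = 0)

        # Value of deciding with perfect information minus value of the best decision now:
        ev_perfect = np.maximum(nb_adopt, nb_reject).mean()
        ev_current = max(nb_adopt.mean(), nb_reject.mean())
        print(f"EVPI per patient: {ev_perfect - ev_current:.1f}")
        # As the paper stresses, a large (or small) EVPI alone says nothing about whether
        # further research is worthwhile until it is weighed against research costs via EVSI.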

  3. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    SciTech Connect

    Dekeyser, W.; Reiter, D.; Baelmans, M.

    2014-12-01

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.

  4. Design in mind: eliciting service user and frontline staff perspectives on psychiatric ward design through participatory methods

    PubMed Central

    Csipke, Emese; Papoulias, Constantina; Vitoratou, Silia; Williams, Paul; Rose, Diana; Wykes, Til

    2016-01-01

    Background: Psychiatric ward design may make an important contribution to patient outcomes and well-being. However, research is hampered by an inability to assess its effects robustly. This paper reports on a study which deployed innovative methods to capture service user and staff perceptions of ward design. Method: User generated measures of the impact of ward design were developed and tested on four acute adult wards using participatory methodology. Additionally, inpatients took photographs to illustrate their experience of the space in two wards. Data were compared across wards. Results: Satisfactory reliability indices emerged based on both service user and staff responses. Black and minority ethnic (BME) service users and those with a psychosis spectrum diagnosis have more positive views of the ward layout and fixtures. Staff members have more positive views than service users, while priorities of staff and service users differ. Inpatient photographs prioritise hygiene, privacy and control and address symbolic aspects of the ward environment. Conclusions: Participatory and visual methodologies can provide robust tools for an evaluation of the impact of psychiatric ward design on users. PMID:26886239

  5. Validation of published Stirling engine design methods using engine characteristics from the literature

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1980-01-01

    Four fully disclosed reference engines and five design methods are discussed. So far, the agreement between theory and experiment is about as good for the simpler calculation methods as it is for the more complicated methods, that is, within 20%. For the simpler methods, a one number adjustable constant can be used to reduce the error in predicting power output and efficiency over the entire operating map to less than 10%.
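
    The "one number adjustable constant" idea for the simpler methods can be illustrated by calibrating a single Beale-type coefficient in the classic P = Bn·p·V·f power estimate against measured operating points (the data below are invented for illustration, not taken from the reference engines):

        import numpy as np

        # columns: mean pressure [bar], swept volume [cm^3], frequency [Hz], measured power [W]
        data = np.array([
            [41.0, 120.0, 25.0, 1650.0],
            [55.0, 120.0, 30.0, 2700.0],
            [70.0, 120.0, 35.0, 4000.0],
        ])
        p, v, f, w_meas = data.T
        w_raw = (p * 1e5) * (v * 1e-6) * f        # p*V*f in watts, uncalibrated

        beale = np.sum(w_raw * w_meas) / np.sum(w_raw**2)   # least-squares constant
        print(f"fitted Beale-type constant: {beale:.4f}")
        print("relative errors:", np.round((beale * w_raw - w_meas) / w_meas, 3))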

  6. Accuracy of the domain method for the material derivative approach to shape design sensitivities

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Botkin, M. E.

    1987-01-01

    Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.

  7. An application of performance goal based method for the design and evaluation of structures

    SciTech Connect

    Conrads, T.J.

    1996-10-15

    This paper describes an application of the U.S. Department of Energy's (DOE) performance goal based method for the design and evaluation of structures, systems, and components (SSCs) at Fluor Daniel Hanford, Inc. (FDH). The philosophy on which DOE's method is based has been employed to construct a graded approach to the minimum structural design and evaluation criteria used at the DOE Hanford Site that complies with DOE Order 5480.28, Natural Phenomena Hazards Mitigation. The FDH structural design and evaluation criteria apply to both nuclear and non-nuclear SSCs that are not covered by a reactor safety analysis report.

  8. A method of optimal design of single-sided linear induction motor for transit

    SciTech Connect

    Yoon, S.B.; Hur, J.; Hyun, D.S.

    1997-09-01

    An optimal design method for a single-sided linear induction motor (SLIM) for transit is described. The authors propose a method that determines the overall parameters of a SLIM for transit using only the rated mechanical output. When the optimization is carried out, the slot depth is used as the initial value, and the exact slot depth is calculated iteratively from the circuit equation. The optimization problem of SLIM design is approached using sequential quadratic programming (SQP). The influence of the design variables on the rated thrust and the rated velocity is analyzed.
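
    A toy sketch of the optimization setting only, using SciPy's SLSQP implementation of sequential quadratic programming on a stand-in mass/thrust model (not the authors' SLIM equivalent-circuit model):

        import numpy as np
        from scipy.optimize import minimize

        def primary_mass(x):
            slot_depth, stack_width = x                    # design variables [m]
            return 7800.0 * 2.0 * slot_depth * stack_width * 1.2   # crude iron+copper mass [kg]

        def thrust(x):
            slot_depth, stack_width = x                    # stand-in thrust model [N]
            return 9.0e5 * stack_width * slot_depth / (1.0 + 8.0 * slot_depth)

        res = minimize(primary_mass, x0=[0.08, 0.35], method="SLSQP",
                       bounds=[(0.01, 0.10), (0.10, 0.40)],
                       constraints=[{"type": "ineq",
                                     "fun": lambda x: thrust(x) - 12_000.0}])  # rated thrust
        print("slot depth, stack width:", np.round(res.x, 4),
              "mass [kg]:", round(res.fun, 1))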

  9. The 1995 forum on appropriate criteria and methods for seismic design of nuclear piping

    SciTech Connect

    Slagis, G.C.

    1996-12-01

    A record of the 1995 Forum on Appropriate Criteria and Methods for Seismic Design of Nuclear Piping is provided. The focus of the forum was the earthquake experience data base and whether the data base demonstrates that seismic inertia loads will not cause failure in ductile piping systems. This was a follow-up to the 1994 Forum when the use of earthquake experience data, including the recent Northridge earthquake, to justify a design-by-rule method was explored. Two possible topics for the next forum were identified--inspection after an earthquake and design for safe-shutdown earthquake only.

  10. Design method for a distributed Bragg resonator based evanescent field sensor

    NASA Astrophysics Data System (ADS)

    Bischof, David; Kehl, Florian; Michler, Markus

    2016-12-01

    This paper presents an analytic design method for a distributed Bragg resonator based evanescent field sensor. Such sensors can, for example, be used to measure changing refractive indices of the cover medium of a waveguide, as well as molecule adsorption at the sensor surface. For given starting conditions, the presented design method allows the analytical calculation of optimized sensor parameters for quantitative simulation and fabrication. The design process is based on the Fabry-Pérot resonator and analytical solutions of coupled mode theory.
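
    As a numeric companion, the standard coupled-mode results that such a design method builds on, the Bragg condition and the uniform-grating peak reflectivity R = tanh²(κL), can be evaluated for assumed grating parameters:

        import numpy as np

        n_eff, dn, lam_b = 1.50, 4e-4, 0.85e-6     # effective index, index modulation, Bragg wavelength [m]
        period = lam_b / (2.0 * n_eff)             # Bragg condition: lambda_B = 2 * n_eff * Lambda
        kappa = np.pi * dn / lam_b                 # coupling coefficient of a weak index grating [1/m]
        for L_mm in (1.0, 2.0, 5.0):
            L = L_mm * 1e-3
            R = np.tanh(kappa * L) ** 2            # peak reflectivity of a uniform grating
            print(f"grating period {period*1e9:.1f} nm, L = {L_mm} mm -> R = {R:.3f}")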

  11. A direct-inverse transonic wing-design method in curvilinear coordinates including viscous-interaction

    NASA Technical Reports Server (NTRS)

    Ratcliff, Robert R.; Carlson, Leland A.

    1989-01-01

    Progress in the direct-inverse wing design method in curvilinear coordinates has been made. A spanwise oscillation problem and proposed remedies are discussed. Test cases are presented which reveal the approximate limits on the wing's aspect ratio and leading edge wing sweep angle for a successful design, and which show the significance of spanwise grid skewness, grid refinement, viscous interaction, the initial airfoil section and Mach number-pressure distribution compatibility on the final design. Furthermore, preliminary results are shown which indicate that it is feasible to successfully design a region of the wing which begins aft of the leading edge and terminates prior to the trailing edge.

  12. An analytical sensitivity method for use in integrated aeroservoelastic aircraft design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    Interdisciplinary analysis capabilities have been developed for aeroservoelastic aircraft and large flexible spacecraft, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, enabling the use of LQG techniques in the hierarchal design methodology. The LQG sensitivity analysis method calculates the change in the optimal control law and resulting controlled system responses due to changes in fixed design integration parameters using analytical sensitivity equations. Numerical results of a LQG design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimal control law and aircraft response for various parameters such as wing bending natural frequency is determined. The sensitivity results computed from the analytical expressions are used to estimate changes in response resulting from changes in the parameters. Comparisons of the estimates with exact calculated responses show they are reasonably accurate for + or - 15 percent changes in the parameters. Evaluation of the analytical expressions is computationally faster than equivalent finite difference calculations.
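
    For intuition, the same kind of gain sensitivity can be approximated by finite differences on a made-up one-mode model (the paper derives analytical sensitivity equations instead; the model and weights below are assumptions):

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr_gain(omega):
            """LQR gain for a single lightly damped mode with natural frequency omega."""
            A = np.array([[0.0, 1.0], [-omega**2, -2 * 0.02 * omega]])
            B = np.array([[0.0], [1.0]])
            Q, R = np.eye(2), np.array([[1.0]])
            P = solve_continuous_are(A, B, Q, R)
            return np.linalg.solve(R, B.T @ P)     # K = R^-1 B^T P

        omega0, h = 12.0, 1e-4                      # nominal wing-bending frequency [rad/s]
        K0 = lqr_gain(omega0)
        dK = (lqr_gain(omega0 + h) - lqr_gain(omega0 - h)) / (2 * h)  # central difference
        print("nominal gain K:", np.round(K0, 4))
        print("estimated K at omega +15%:", np.round(K0 + dK * 0.15 * omega0, 4))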

  13. Computational Fluid Dynamics-Based Design Optimization Method for Archimedes Screw Blood Pumps.

    PubMed

    Yu, Hai; Janiga, Gábor; Thévenin, Dominique

    2016-04-01

    An optimization method suitable for improving the performance of Archimedes screw axial rotary blood pumps is described in the present article. In order to achieve a more robust design and to save computational resources, this method combines the advantages of the established pump design theory with modern computer-aided, computational fluid dynamics (CFD)-based design optimization (CFD-O) relying on evolutionary algorithms and computational fluid dynamics. The main purposes of this project are to: (i) integrate pump design theory within the already existing CFD-based optimization; (ii) demonstrate that the resulting procedure is suitable for optimizing an Archimedes screw blood pump in terms of efficiency. Results obtained in this study demonstrate that the developed tool is able to meet both objectives. Finally, the resulting level of hemolysis can be numerically assessed for the optimal design, as hemolysis is an issue of overwhelming importance for blood pumps. PMID:26526039
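
    A skeleton of the evolutionary optimization loop is sketched below, with a cheap analytic stand-in where the CFD efficiency evaluation would be called (in the study this evaluation is a CFD run, not a formula, and the variables below are invented):

        import numpy as np
        from scipy.optimize import differential_evolution

        def neg_efficiency(x):
            """Stand-in for a CFD run: pitch [mm], blade angle [deg] -> hydraulic efficiency."""
            pitch, angle = x
            eff = (0.55 - 0.002 * (pitch - 18.0) ** 2 - 0.001 * (angle - 35.0) ** 2
                   + 0.0005 * (pitch - 18.0) * (angle - 35.0))
            return -eff                             # minimize the negative -> maximize efficiency

        res = differential_evolution(neg_efficiency,
                                     bounds=[(10.0, 30.0), (20.0, 50.0)],
                                     seed=3, tol=1e-8)
        print("best pitch/angle:", np.round(res.x, 2),
              "efficiency:", round(-res.fun, 4))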

  15. Application of direct inverse analogy method (DIVA) and viscous design optimization techniques

    NASA Technical Reports Server (NTRS)

    Greff, E.; Forbrich, D.; Schwarten, H.

    1991-01-01

    A direct-inverse approach to the transonic design problem was presented in its initial state at the First International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES-1). Further applications of the direct inverse analogy (DIVA) method to the design of airfoils and incremental wing improvements, together with experimental verification, are reported. First results of a new viscous design code, also of the residual-correction type, with semi-inverse boundary layer coupling are compared with DIVA; this coupling may enhance the accuracy of trailing edge design for highly loaded airfoils. Finally, the capabilities of an optimization routine coupled with the two viscous full potential solvers are investigated in comparison to the inverse method.

  16. Weighing health benefit and health risk information when consuming sport-caught fish.

    PubMed

    Knuth, Barbara A; A Connelly, Nancy; Sheeshka, Judy; Patterson, Jacqueline

    2003-12-01

    Fish consumers may incur benefits and risks from eating fish. Health advisories issued by states, tribes, and other entities typically include advice about how to limit fish consumption or change other behaviors (e.g., fish cleaning or cooking) to reduce health risks from exposure to contaminants. Eating fish, however, may provide health benefits. Risk communicators and fish consumers have suggested the importance of including risk comparison information, as well as health risk-benefit comparisons, in health advisory communications. To improve understanding about how anglers fishing in waters affected by health advisories may respond to such risk-risk or risk-benefit information, we surveyed Lake Ontario (NY, USA) anglers. We interviewed by telephone 4,750 anglers, 2,593 of whom had fished Lake Ontario in the past 12 months and were sent a detailed mail questionnaire (1,245 responded). We posed questions varying the magnitude of health risks and health benefits to be gained by fish consumption, and varied the population affected by these risks and benefits (anglers, children, women of childbearing age, and unborn children). Respondents were influenced by health benefit and health risk information. When risks were high, most respondents would eat less fish regardless of the benefit level. When risks were low, the magnitude of change in fish consumption was related to level of benefit. Responses differed depending on the question wording order, that is, whether "risks" were posed before "benefits." For a given risk-benefit level, respondents would give different advice to women of childbearing age versus children, with more conservative advice (eat less fish) provided to women of childbearing age. Respondents appeared to be influenced more strongly by risk-risk comparisons (e.g., risks from other foods vs. risks from fish) than by risk-benefit comparisons (e.g., risks from fish vs. benefits from fish). Risk analysts and risk communicators should improve efforts to

  17. Double freeform surfaces design for laser beam shaping with Monge-Ampère equation method

    NASA Astrophysics Data System (ADS)

    Zhang, Yaqin; Wu, Rengmao; Liu, Peng; Zheng, Zhenrong; Li, Haifeng; Liu, Xu

    2014-11-01

    This paper presents a method for designing double freeform surfaces to simultaneously control the intensity distribution and phase profile of a laser beam. Based on Snell's law, the law of conservation of energy, and the constraint imposed on the optical path length between the input and output wavefronts, the double-surface design is converted into an elliptic Monge-Ampère (MA) equation with a nonlinear boundary problem. A generalized approach is introduced to find the numerical solution of the design model. Two different layouts of the beam shaping system are introduced and detailed comparisons are made between them. Design examples are given, and the results indicate that good matching is achieved by the MA method with more than 98% energy efficiency. The MA method proposed in this paper provides a reasonably good means for laser beam shaping.
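
    A one-dimensional ray-mapping analogue of the energy-conservation idea behind such beam shaping (far simpler than the paper's Monge-Ampère solver, with assumed beam profiles) is sketched below: rays are redirected so that cumulative input energy matches cumulative target energy:

        import numpy as np

        x_in = np.linspace(-3.0, 3.0, 801)                 # input beam coordinate [mm]
        I_in = np.exp(-x_in**2)                            # Gaussian input intensity
        cum_in = np.cumsum(I_in); cum_in /= cum_in[-1]     # normalized cumulative energy

        x_out = np.linspace(-2.0, 2.0, 801)                # target coordinate [mm]
        I_out = np.ones_like(x_out)                        # uniform (flat-top) target
        cum_out = np.cumsum(I_out); cum_out /= cum_out[-1]

        # Ray map: send the ray at x_in to the x_out carrying the same cumulative energy.
        x_map = np.interp(cum_in, cum_out, x_out)
        for xi in (-2.0, -1.0, 0.0, 1.0, 2.0):
            print(f"ray at x_in = {xi:+.1f} mm -> x_out = {np.interp(xi, x_in, x_map):+.3f} mm")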

  18. Development of a turbomachinery design optimization procedure using a multiple-parameter nonlinear perturbation method

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.

    1984-01-01

    An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.

  19. Two-Step Design Method of Engine Control System Based on Generalized Predictive Control

    NASA Astrophysics Data System (ADS)

    Hashimoto, Seiji; Okuda, Hiroyuki; Okada, Yasushi; Adachi, Shuichi; Niwa, Shinji; Kajitani, Mitsunobu

    Conservation of the environment has become critical to the automotive industry. Recently, requirements for on-board diagnostic and engine control systems have been strictly enforced. In the present paper, in order to meet the requirements for a low-emissions vehicle, a novel construction method for the air-fuel ratio (A/F) control system is proposed. The construction of the system is divided into two steps. The first step is to design the A/F control system for the engine based on an open-loop design. The second step is to design the A/F control system for the catalyst system. The design method is based on generalized predictive control in order to ensure robustness to the open-loop design as well as to model uncertainty. The effectiveness of the proposed A/F control system is verified through experiments using full-scale products.

  20. Estimation of design sea ice thickness with maximum entropy distribution by particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Tao, Shanshan; Dong, Sheng; Wang, Zhifeng; Jiang, Wensheng

    2016-06-01

    The maximum entropy distribution, which encompasses various recognized theoretical distributions, provides a better fit for estimating the design thickness of sea ice. The method of moments and the empirical curve-fitting method are the commonly used parameter estimation methods for the maximum entropy distribution. In this study, we propose the particle swarm optimization method as a new parameter estimation method for the maximum entropy distribution, which has the advantage of avoiding the deviations introduced by the simplifications made in the other methods. We conducted a case study fitting the hindcast thickness of the sea ice in the Liaodong Bay of the Bohai Sea using these three parameter estimation methods for the maximum entropy distribution. All methods implemented in this study pass the K-S tests at the 0.05 significance level. In terms of the average sum of squared deviations, the empirical curve-fitting method provides the best fit to the original data, while the method of moments provides the worst. Among the three methods, the particle swarm optimization method predicts the largest sea ice thickness for the same return period. As a result, we recommend using the particle swarm optimization method for the maximum entropy distribution for offshore structures mainly influenced by sea ice in winter, but using the empirical curve-fitting method to reduce cost in the design of temporary and economic buildings.
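
    A bare-bones particle swarm optimizer for distribution fitting by minimizing squared CDF deviations can be sketched as follows; a two-parameter Weibull on synthetic data stands in for the paper's maximum entropy distribution and hindcast thicknesses:

        import numpy as np

        rng = np.random.default_rng(7)
        ice = rng.weibull(2.2, 300) * 35.0 + 5.0          # synthetic ice thickness [cm]
        xs = np.sort(ice)
        F_emp = (np.arange(1, xs.size + 1) - 0.44) / (xs.size + 0.12)  # plotting position

        def sse(theta):
            """Sum of squared deviations between model CDF and empirical CDF."""
            scale, shape = theta
            F_mod = 1.0 - np.exp(-(np.maximum(xs - 5.0, 0.0) / scale) ** shape)
            return np.sum((F_mod - F_emp) ** 2)

        # particle swarm: positions th, velocities v, personal and global bests
        n_p, w, c1, c2 = 30, 0.7, 1.5, 1.5
        lo, hi = np.array([5.0, 0.5]), np.array([80.0, 6.0])
        th = rng.uniform(lo, hi, (n_p, 2)); v = np.zeros((n_p, 2))
        pbest, pval = th.copy(), np.array([sse(t) for t in th])
        for _ in range(200):
            gbest = pbest[pval.argmin()]
            v = (w * v + c1 * rng.random((n_p, 2)) * (pbest - th)
                       + c2 * rng.random((n_p, 2)) * (gbest - th))
            th = np.clip(th + v, lo, hi)
            val = np.array([sse(t) for t in th])
            better = val < pval
            pbest[better], pval[better] = th[better], val[better]
        print("fitted (scale, shape):", np.round(pbest[pval.argmin()], 3))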

  1. Rock riprap design methods and their applicability to long-term protection of uranium mill tailings impoundments

    SciTech Connect

    Walters, W.H.

    1982-08-01

    This report reviews the more accepted or recommended riprap design methods currently used to design rock riprap protection against soil erosion by flowing water. The basic theories used to develop the various methods are presented. The Riprap Design with Safety Factors Method is identified as the logical choice for uranium mill tailings impoundments. This method is compared to the other methods and its applicability to the protection requirements of tailings impoundments is discussed. Other design problems are identified and investigative studies recommended.

  2. Hierarchical development of three direct-design methods for two-dimensional axial-turbomachinery cascades

    SciTech Connect

    Korakianitis, T.

    1993-04-01

    The direct and inverse blade-design iterations for the selection of isolated airfoils and gas turbine blade cascades are enormously reduced if the initial blade shape has performance characteristics near the desirable ones. This paper presents the hierarchical development of three direct blade-design methods of increasing utility for generating two-dimensional blade shapes. The methods can be used to generate inputs to the direct- or inverse-blade-design sequences for subsonic or supersonic airfoils for compressors and turbines, or isolated airfoils. The first method specifies the airfoil shapes with analytical polynomials. It shows that continuous curvature and continuous slope of curvature are necessary conditions to minimize the possibility of flow separation, and to lead to improved blade designs. The second method specifies the airfoil shapes with parametric fourth-order polynomials, which result in continuous-slope-of-curvature airfoils, with smooth Mach number and pressure distributions. This method is time consuming. The third method specifies the airfoil shapes by using a mixture of analytical polynomials and mapping the airfoil surfaces from a desirable curvature distribution. The third method provides blade surfaces with desirable performance in very few direct-design iterations. In all methods the geometry near the leading edge is specified by a thickness distribution added to a construction line, which eliminates the leading edge overspeed and laminar-separation regions. The blade-design methods presented in this paper can be used to improve the aerodynamic and heat transfer performance of turbomachinery cascades, and they can result in high-performance airfoils in very few iterations.

  3. Case Study for Enhanced Accident Tolerance Design Changes

    SciTech Connect

    Prescott, Steven; Smith, Curtis; Koonce, Tony

    2014-06-01

    The ability to better characterize and quantify safety margin is important to improved decision making about Light Water Reactor (LWR) design, operation, and plant life extension. A systematic approach to the characterization of safety margins and the subsequent margin management options represents a vital input to the licensee and regulatory analysis and decision making that will be involved. In addition, as research and development in the LWR Sustainability (LWRS) Program and other collaborative efforts yield new data, sensors, and improved scientific understanding of the physical processes that govern the aging and degradation of plant systems, structures, and components (SSCs), needs and opportunities to better optimize plant safety and performance will become known. To support decision making related to economics, reliability, and safety, the Risk Informed Safety Margin Characterization (RISMC) Pathway provides methods and tools that enable mitigation options known as risk informed margins management (RIMM) strategies.

  4. The Research of Computer Aided Farm Machinery Designing Method Based on Ergonomics

    NASA Astrophysics Data System (ADS)

    Gao, Xiyin; Li, Xinling; Song, Qiang; Zheng, Ying

    With the development of the agricultural economy, farm machinery product types are gradually increasing, and ergonomics questions are becoming more and more prominent. The widespread application of computer-aided machinery design makes intuitive, flexible and convenient farm machinery design possible. At present, existing computer-aided ergonomics software lacks a human body database suitable for farm machinery design in China, so ergonomics analyses of farm machinery designs deviate from reality. This article proposes using the open database interface in CATIA to establish a human body database aimed at farm machinery design; reading the human body data into the ergonomics module of CATIA produces a virtual body for practical application, and the human posture analysis and human activity analysis modules can then be used to analyze the ergonomics of farm machinery. In this way, a computer-aided farm machinery design method based on ergonomics can be realized.

  5. Co-design of RAD and ETHICS methodologies: a combination of information system development methods

    NASA Astrophysics Data System (ADS)

    Nasehi, Arezo; Shahriyari, Salman

    2011-12-01

    Co-design is a new trend in the social world which tries to capture different ideas in order to use the most appropriate features for a system. In this paper, the co-design of two information system methodologies is considered: rapid application development (RAD) and effective technical and human implementation of computer-based systems (ETHICS). We consider the characteristics of these methodologies to assess the possibility of co-designing or combining them for developing an information system. To this end, four different aspects are analyzed: social or technical approach, user participation and user involvement, job satisfaction, and overcoming resistance to change. Finally, a case study using a quantitative method is analyzed in order to examine the possibility of co-design using these factors. The paper concludes that RAD and ETHICS are appropriate for co-design and offers some suggestions for the co-design.

  6. Processor and method for developing a set of admissible fixture designs for a workpiece

    DOEpatents

    Brost, Randolph C.; Goldberg, Kenneth Y.; Canny, John; Wallack, Aaron S.

    1999-01-01

    Methods and apparatus are provided for developing a complete set of all admissible Type I and Type II fixture designs for a workpiece. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vise is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs.

  7. Processor and method for developing a set of admissible fixture designs for a workpiece

    DOEpatents

    Brost, Randolph C.; Goldberg, Kenneth Y.; Wallack, Aaron S.; Canny, John

    1996-01-01

    A fixture process and method is provided for developing a complete set of all admissible fixture designs for a workpiece which prevents the workpiece from translating or rotating. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vise is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs.

  8. Processor and method for developing a set of admissible fixture designs for a workpiece

    DOEpatents

    Brost, R.C.; Goldberg, K.Y.; Canny, J.; Wallack, A.S.

    1999-01-05

    Methods and apparatus are provided for developing a complete set of all admissible Type 1 and Type 2 fixture designs for a workpiece. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vise is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs. 44 figs.

  9. Processor and method for developing a set of admissible fixture designs for a workpiece

    DOEpatents

    Brost, R.C.; Goldberg, K.Y.; Wallack, A.S.; Canny, J.

    1996-08-13

    A fixture process and method is provided for developing a complete set of all admissible fixture designs for a workpiece which prevents the workpiece from translating or rotating. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vise is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs. 27 figs.

  10. A Modified Frequency Estimation Equating Method for the Common-Item Nonequivalent Groups Design

    ERIC Educational Resources Information Center

    Wang, Tianyou; Brennan, Robert L.

    2009-01-01

    Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…

  11. Are standard wastewater treatment plant design methods suitable for any municipal wastewater?

    PubMed

    Insel, G; Güder, B; Güneş, G; Ubay Cokgor, E

    2012-01-01

    The design and operational parameters of an activated sludge system treating municipal wastewater in Istanbul were analyzed. The ATV131 and Metcalf & Eddy design methods, together with model simulations, were compared with actual plant operational data. The activated sludge model parameters were determined using three months of dynamic data for the biological nutrient removal plant. The ATV131 method yielded sludge production, total oxygen requirement, and effluent nitrogen levels closer to those of the real plant after adopting the correct influent chemical oxygen demand (COD) fractionation. Enhanced biological phosphorus removal (EBPR) could not easily be predicted with the ATV131 method due to the low volatile fatty acid (VFA) potential.

  12. Design, installation and operational methods of implementing horizontal wells for in situ groundwater and soil remediation

    SciTech Connect

    Larson, R.B.

    1996-12-31

    The design and installation of horizontal wells is the primary factor in the efficiency of the remedial actions. Often, inadequacies in the design and installation of remediation systems are not identified until remedial actions have commenced, at which time, required modifications of operational methods can be costly. The parameters required for designing a horizontal well remediation system include spatial variations in contaminant concentrations and lithology, achievable injection and/or extraction rates, area of influence from injection and/or extraction processes, and limitations of installation methods. As with vertical wells, there are several different methods for the installation of horizontal wells. This paper will summarize four installation methods for horizontal wells, including four sites where horizontal wells have been utilized for in-situ groundwater and soil remediation.

  13. General boundary mapping method and its application in designing an arbitrarily shaped perfect electric conductor reshaper.

    PubMed

    Guan, Jianguo; Li, Wei; Wang, Wei; Fu, Zhengyi

    2011-09-26

    A general boundary mapping method is proposed to enable the designing of various transformation devices with arbitrary shapes by reducing the traditional space-to-space mapping to boundary-to-boundary mapping. The method also makes the designing of complex-shaped transformation devices more feasible and flexible. Using the boundary mapping method, an arbitrarily shaped perfect electric conductor (PEC) reshaping device, which is called a "PEC reshaper," is demonstrated to visually reshape a PEC with an arbitrary shape to another arbitrary one. Unlike the previously reported simple PEC reshaping devices, the arbitrarily shaped PEC reshaper designed here does not need to share a common domain. Moreover, the flexibilities of the boundary mapping method are expected to inspire some novel PEC reshapers with attractive new functionalities.

  14. The transfer function method for gear system dynamics applied to conventional and minimum excitation gearing designs

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1982-01-01

    A transfer function method for predicting the dynamic responses of gear systems with more than one gear mesh is developed and applied to the NASA Lewis four-square gear fatigue test apparatus. Methods for computing bearing-support force spectra and temporal histories of the total force transmitted by a gear mesh, the force transmitted by a single pair of teeth, and the maximum root stress in a single tooth are developed. Dynamic effects arising from other gear meshes in the system are included. A profile modification design method to minimize the vibration excitation arising from a pair of meshing gears is reviewed and extended. Families of tooth loading functions required for such designs are developed and examined for potential excitation of individual tooth vibrations. The profile modification design method is applied to a pair of test gears.

  15. Robust design optimization method for centrifugal impellers under surface roughness uncertainties due to blade fouling

    NASA Astrophysics Data System (ADS)

    Ju, Yaping; Zhang, Chuhua

    2016-03-01

    Blade fouling has been proved to be a great threat to compressor performance in the operating stage. Current research on the fouling-induced performance degradation of centrifugal compressors is based mainly on simplified roughness models that do not take into account realistic factors such as the spatial non-uniformity and randomness of the fouling-induced surface roughness. Moreover, little attention has been paid to the robust design optimization of centrifugal compressor impellers with consideration of blade fouling. In this paper, a multi-objective robust design optimization method is developed for centrifugal impellers under surface roughness uncertainties due to blade fouling. A three-dimensional surface roughness map is proposed to describe the non-uniformity and randomness of realistic fouling accumulations on blades. To lower the computational cost of robust design optimization, the support vector regression (SVR) metamodel is combined with the Monte Carlo simulation (MCS) method to conduct the uncertainty analysis of fouled impeller performance. The results show that the critical fouled region associated with impeller performance degradation lies at the leading edge of the blade tip. The SVR metamodel is proved to be an efficient and accurate means of detecting impeller performance variations caused by roughness uncertainties. After design optimization, the robust optimal design is found to be more efficient and less sensitive to fouling uncertainties while maintaining good impeller performance in the clean condition. This research proposes a systematic design optimization method for centrifugal compressors with consideration of blade fouling, providing practical guidance for the design of advanced centrifugal compressors.
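
    The SVR-plus-MCS uncertainty loop described above can be condensed into a sketch like the following, where an invented analytic function stands in for the expensive CFD evaluation of a fouled impeller:

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(11)

        def cfd_efficiency(ks):
            """Stand-in for a CFD evaluation: efficiency vs. sand-grain roughness [um]."""
            return 0.88 - 0.012 * np.sqrt(ks) - 0.0004 * ks

        # Design of experiments: a few "expensive" samples to train the metamodel.
        ks_train = np.linspace(0.0, 120.0, 12)
        eff_train = cfd_efficiency(ks_train) + rng.normal(0, 1e-3, ks_train.size)
        svr = SVR(kernel="rbf", C=100.0, epsilon=1e-3).fit(ks_train[:, None], eff_train)

        # Monte Carlo simulation through the cheap metamodel (lognormal fouling roughness).
        ks_mc = rng.lognormal(mean=3.2, sigma=0.5, size=20_000)
        eff_mc = svr.predict(np.clip(ks_mc, 0.0, 120.0)[:, None])
        print(f"mean efficiency {eff_mc.mean():.4f}, std {eff_mc.std():.4f}, "
              f"5th pct {np.percentile(eff_mc, 5):.4f}")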

  16. Why does Japan use the probability method to set design flood?

    NASA Astrophysics Data System (ADS)

    Nakamura, S.; Oki, T.

    2015-12-01

    A design flood is a hypothetical flood used to formulate a flood prevention plan. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: for the Tone River, the largest river in Japan, it is 1 in 200 years; for the Shinano River, 1 in 150 years; and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. Methods for setting the design flood vary among countries. The probability method is also used in the Netherlands, but there the base data are water levels or discharges, and the probability is 1 in 1,250 years (in the freshwater section). By contrast, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases lead to the questions: why do the methods vary among countries, and why does Japan use the probability method? The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were imported by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set based on the historical maximum method. The historical maximum method was used until World War 2, but it was then changed to the probability method because of its limitations under the specific socio-economic situation: (1) budgets were limited due to the war and the GHQ occupation, and (2) historical floods such as the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, and the Ione typhoon in 1948 struck Japan, broke the records of historical maximum discharge in the main rivers, and caused flood disasters that made the flood prevention projects difficult to complete. Japanese hydrologists then imported hydrological probability statistics from the West to take account of

  17. A Proposal for the use of the Consortium Method in the Design-build system

    NASA Astrophysics Data System (ADS)

    Miyatake, Ichiro; Kudo, Masataka; Kawamata, Hiroyuki; Fueta, Toshiharu

    In view of the necessity for efficient implementation of public works projects, the advanced technical skills of private firms are expected to be utilized for the purposes of reducing project costs, improving the performance and functions of constructed objects, and shortening work periods. The design-build system is a method of ordering design and construction as a single contract, including the design of structural forms and the main specifications of the constructed object. It is a system in which the advanced techniques of private firms can be utilized as a means of ensuring the quality of design and construction, rational design, and project efficiency. The objective of this study is to examine the use of a method of forming a consortium of civil engineering consultants and construction companies, which is an issue related to the implementation of the design-build method. Furthermore, by studying various forms of consortium to be introduced in the future, it proposes the procedural items required to utilize this method during bidding and after signing a contract, such as the submission of estimates by the civil engineering consultants.

  18. Designing patient-specific 3D printed craniofacial implants using a novel topology optimization method.

    PubMed

    Sutradhar, Alok; Park, Jaejong; Carrau, Diana; Nguyen, Tam H; Miller, Michael J; Paulino, Glaucio H

    2016-07-01

    Large craniofacial defects require efficient bone replacements which should not only provide good aesthetics but also possess stable structural function. The proposed work uses a novel multiresolution topology optimization method to achieve the task. Using a compliance minimization objective, patient-specific bone replacement shapes can be designed for different clinical cases that ensure revival of efficient load transfer mechanisms in the mid-face. In this work, four clinical cases are introduced and their respective patient-specific designs are obtained using the proposed method. The optimized designs are then virtually inserted into the defect to visually inspect the viability of the design. Further, once the design is verified by the reconstructive surgeon, prototypes are fabricated using a 3D printer for validation. The robustness of the designs is mechanically tested by subjecting them to a physiological loading condition which mimics masticatory activity. The full-field strain results from 3D image correlation and the finite element analysis imply that the solution can survive a maximum mastication load of 120 lb. The designs also have the potential to restore the buttress system and provide structural integrity. Using the topology optimization framework in designing the bone replacement shapes would give surgeons new alternatives for otherwise complicated mid-face reconstruction. PMID:26660897
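    Compliance-minimization topology optimization, the core of the method above, can be illustrated with a deliberately tiny one-dimensional analogue: SIMP-penalized densities on a loaded bar updated with the classical optimality-criteria scheme. This toy sketch is not the authors' multiresolution 3D framework; the loads and material data are arbitrary.

```python
# 1D SIMP compliance minimization with an optimality-criteria update (toy).
import numpy as np

n, p, volfrac = 20, 3.0, 0.4      # elements, SIMP penalty, volume fraction
E, A, Le = 1.0, 1.0, 1.0          # modulus, area, element length (arbitrary)
loads = np.zeros(n); loads[n // 2] = 1.0; loads[-1] = 1.0  # mid + tip loads
N = np.cumsum(loads[::-1])[::-1]  # internal axial force in each element

rho = np.full(n, volfrac)         # uniform initial design
for _ in range(100):
    dC = -p * N**2 * Le / (E * A * rho**(p + 1))  # compliance sensitivities
    low, high = 1e-9, 1e9         # bisection on the volume multiplier
    while high - low > 1e-12 * high:
        lam = 0.5 * (low + high)
        rho_new = np.clip(rho * np.sqrt(-dC / lam), 1e-3, 1.0)  # OC update
        if rho_new.mean() > volfrac:
            low = lam             # too much material: tighten the multiplier
        else:
            high = lam
    if np.abs(rho_new - rho).max() < 1e-6:
        rho = rho_new
        break
    rho = rho_new

print(np.round(rho, 3))           # more material where axial force is higher
```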

  20. A Comparison of Functional Models for Use in the Function-Failure Design Method

    NASA Technical Reports Server (NTRS)

    Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.

    2006-01-01

    When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur in products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns. High level and more detailed functional descriptions are derived for each failed component based

  1. Limitations of the method of characteristics when applied to axisymmetric hypersonic nozzle design

    NASA Technical Reports Server (NTRS)

    Edwards, Anne C.; Perkins, John N.; Benton, James R.

    1990-01-01

    A design study of axisymmetric hypersonic wind tunnel nozzles was initiated by NASA Langley Research Center with the objective of improving the flow quality of their ground test facilities. Nozzles for Mach 6 air, Mach 13.5 nitrogen, and Mach 17 nitrogen were designed using the Method of Characteristics/Boundary Layer (MOC/BL) approach and were analyzed with a Navier-Stokes solver. Results of the analysis agreed well with the design for the Mach 6 case, but revealed oblique shock waves of increasing strength originating from near the inflection point of the Mach 13.5 and Mach 17 nozzles. The findings indicate that the MOC/BL design method has a fundamental limitation that occurs at some Mach number between 6 and 13.5. In order to define the limitation more exactly and attempt to discover the cause, a parametric study of hypersonic ideal air nozzles designed with the current MOC/BL method was done. Results of this study indicate that, while stagnation conditions have a moderate effect on the upper limit of the method, the method fails at Mach numbers above 8.0.
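    The backbone of MOC nozzle design is the Prandtl-Meyer function: the contour turns the flow and then cancels the expansion waves that accelerate it to the exit Mach number. A minimal sketch for a calorically perfect gas (not the Langley design code itself):

```python
# Prandtl-Meyer function and its numerical inverse.
import numpy as np
from scipy.optimize import brentq

def prandtl_meyer(M, gamma=1.4):
    g = (gamma + 1.0) / (gamma - 1.0)
    return (np.sqrt(g) * np.arctan(np.sqrt((M**2 - 1.0) / g))
            - np.arctan(np.sqrt(M**2 - 1.0)))

def mach_from_nu(nu, gamma=1.4):
    # Invert nu(M) numerically; valid for supersonic Mach numbers.
    return brentq(lambda M: prandtl_meyer(M, gamma) - nu, 1.0 + 1e-10, 50.0)

nu6 = prandtl_meyer(6.0)
print(np.degrees(nu6))    # about 84.96 deg for M = 6, gamma = 1.4
print(mach_from_nu(nu6))  # recovers M = 6
```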

  2. Does incentivising pill-taking 'crowd out' risk-information processing? Evidence from a web-based experiment.

    PubMed

    Mantzari, Eleni; Vogt, Florian; Marteau, Theresa M

    2014-04-01

    The use of financial incentives for changing health-related behaviours raises concerns regarding their potential to undermine the processing of risks associated with incentivised behaviours. Uncertainty remains about the validity of such concerns. This web-based experiment assessed the impact of financial incentives on i) willingness to take a pill with side-effects; ii) the time spent viewing risk-information and iii) risk-information processing, assessed by perceived-risk of taking the pill and knowledge of its side-effects. It further assesses whether effects are moderated by limiting cognitive capacity. Two-hundred and seventy-five UK-based university staff and students were recruited online under the pretext of being screened for a fictitious drug-trial. Participants were randomised to the offer of different compensation levels for taking a fictitious pill (£0; £25; £1000) and the presence or absence of a cognitive load task (presentation of five digits for later recall). Willingness to take the pill increased with the offer of £1000 (84% vs. 67%; OR 3.66, CI 95% 1.27-10.6), but not with the offer of £25 (79% vs. 67%; OR 1.68, CI 95% 0.71-4.01). Risk-information processing was unaffected by the offer of incentives. The time spent viewing the risk-information was affected by the offer of incentives, an effect moderated by cognitive load: Without load, time increased with the value of incentives (£1000: M = 304.4sec vs. £0: M = 37.8sec, p < 0.001; £25: M = 66.6sec vs. £0: M = 37.8sec, p < 0.001). Under load, time decreased with the offer of incentives (£1000: M = 48.9sec vs. £0: M = 132.7sec, p < 0.001; £25: M = 60.9sec vs. £0: M = 132.7sec, p < 0.001), but did not differ between the two incentivised groups (p = 1.00). This study finds no evidence to suggest incentives "crowd out" risk-information processing. On the contrary, incentives appear to signal risk, an effect, however, which disappears under cognitive load. Although these findings require
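    The reported odds ratios are the kind produced by a logistic regression of willingness on incentive group. The sketch below shows the mechanics with synthetic data shaped like the reported proportions; it is not the study's dataset or analysis script.

```python
# Odds ratios and 95% CIs from a logistic regression (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 275
group = rng.choice(["0", "25", "1000"], size=n)
p_willing = {"0": 0.67, "25": 0.79, "1000": 0.84}  # reported proportions
willing = (rng.random(n) < np.vectorize(p_willing.get)(group)).astype(int)

# Dummy-code the two incentive groups against the £0 baseline.
X = sm.add_constant(np.column_stack([(group == "25").astype(float),
                                     (group == "1000").astype(float)]))
fit = sm.Logit(willing, X).fit(disp=0)
print(np.exp(fit.params[1:]))      # odds ratios: £25, £1000 vs. £0
print(np.exp(fit.conf_int()[1:]))  # 95% confidence intervals
```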

  3. Research on design method of the full form ship with minimum thrust deduction factor

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Miao, Ai-qin; Zhang, Zhu-xin

    2015-04-01

    In the preliminary design stage of full form ships, in order to obtain a hull form with low resistance and maximum propulsion efficiency, an optimization design program for a full form ship with the minimum thrust deduction factor has been developed, which combines potential flow theory and boundary layer theory with optimization techniques. In the optimization process, the Sequential Unconstrained Minimization Technique (SUMT) interior point method of Nonlinear Programming (NLP) was employed with the minimum thrust deduction factor as the objective function. An appropriate displacement is the basic constraint condition, and avoidance of boundary layer separation is an additional one. The parameters of the hull form modification function are used as design variables. Finally, a numerical optimization example for the after-body lines of a 50000 DWT product oil tanker is provided, which indicates that the propulsion efficiency is distinctly improved by this optimal design method.
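    The SUMT interior-point idea named above solves a sequence of unconstrained problems in which constraints enter through a logarithmic barrier whose weight is driven to zero. A toy sketch (not the hull-form code): minimize x² subject to x ≥ 1, whose solution lies on the constraint boundary.

```python
# Sequential Unconstrained Minimization Technique with a log barrier (toy).
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0]**2          # objective

def g(x):
    return x[0] - 1.0       # constraint g(x) >= 0

x, mu = np.array([3.0]), 1.0   # strictly feasible starting point
for _ in range(12):
    penalized = lambda z: f(z) - mu * np.log(max(g(z), 1e-300))
    x = minimize(penalized, x, method="Nelder-Mead").x
    mu *= 0.2                   # tighten the barrier each outer iteration
print(x)                        # approaches [1.0], the constrained optimum
```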

  4. Three dimensional finite element methods: Their role in the design of DC accelerator systems

    SciTech Connect

    Podaru, Nicolae C.; Gottdang, A.; Mous, D. J. W.

    2013-04-19

    High Voltage Engineering has designed, built and tested a 2 MV dual irradiation system that will be applied for radiation damage studies and ion beam material modification. The system consists of two independent accelerators which support simultaneous proton and electron irradiation (energy range 100 keV - 2 MeV) of target sizes of up to 300 × 300 mm². Three dimensional finite element methods were used in the design of various parts of the system. The electrostatic solver was used to quantify essential parameters of the solid-state power supply generating the DC high voltage. The magnetostatic solver and ray tracing were used to optimize the electron/ion beam transport. Close agreement between design and measurements of the accelerator characteristics as well as beam performance indicate the usefulness of three dimensional finite element methods during accelerator system design.

  5. A new method for the design optimization of three-phase induction motors

    SciTech Connect

    Daidone, A.; Parasiliti, F.; Villani, M.; Lucidi, S.

    1998-09-01

    The paper deals with the optimization problem of induction motor design. In particular, a new global minimization algorithm is described that tries to take into account all the features of this class of problems. A first numerical comparison between the new algorithm and a method widely used in the design optimization of induction motors has been performed. The results show that the proposed approach is promising.

  6. Comprehensive Design Method for LOX/Liquid-Methane Regenerative Cooling Combustor with Coaxial Injector

    NASA Astrophysics Data System (ADS)

    Yatsuyanagi, Nobuyuki

    A comprehensive design method for a LOX/Liquid-Methane (L-CH4) rocket engine combustor with a coaxial injector and the preliminary design of a regenerative cooling combustor with 100-kN vacuum thrust at a combustion pressure of 3.43 MPa are presented. Reasonable dimensions for the combustor that satisfy the targeted C* efficiency of more than 98% and combustion stability are obtained.
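    The C* bookkeeping behind such a design reduces to simple relations: the characteristic velocity c* = p_c A_t / mdot links chamber pressure, throat area and propellant flow, and C* efficiency is the delivered-to-theoretical ratio. In the sketch below, only the 100 kN thrust, 3.43 MPa pressure and 98% target come from the abstract; the Isp and theoretical c* are illustrative assumptions for LOX/CH4.

```python
# Back-of-envelope combustor sizing from c* relations.
g0 = 9.80665       # m/s^2
F_vac = 100e3      # N, vacuum thrust (from the abstract)
pc = 3.43e6        # Pa, combustion pressure (from the abstract)
Isp_vac = 350.0    # s, assumed vacuum specific impulse for LOX/CH4
cstar_th = 1850.0  # m/s, assumed theoretical characteristic velocity

mdot = F_vac / (g0 * Isp_vac)      # required propellant mass flow, kg/s
At = cstar_th * mdot / pc          # throat area from c* = pc * At / mdot
cstar_delivered = 0.98 * cstar_th  # the paper's 98% C* efficiency target
print(f"mdot = {mdot:.1f} kg/s, throat area = {At * 1e4:.0f} cm^2, "
      f"delivered c* = {cstar_delivered:.0f} m/s")
```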

  7. In silico methods to assist drug developers in acetylcholinesterase inhibitor design.

    PubMed

    Bermúdez-Lugo, J A; Rosales-Hernández, M C; Deeb, O; Trujillo-Ferrara, J; Correa-Basurto, J

    2011-01-01

    Alzheimer's disease (AD) is a neurodegenerative disease characterized by a low acetylcholine (ACh) concentration in the hippocampus and cortex. ACh is a neurotransmitter hydrolyzed by acetylcholinesterase (AChE). Therefore, it is not surprising that AChE inhibitors (AChEIs) have shown better results in the treatment of AD than any other strategy. To improve the treatment of AD, many researchers have focused on designing and testing new AChEIs. One of the principal strategies has been the use of computational methods (structural bioinformatics or in silico methods). In this review, we summarize the in silico methods used to enhance the understanding of AChE, particularly at the binding site, and to design new AChEIs. Several computational methods have been used, such as docking approaches, molecular dynamics studies, quantum mechanical studies, electronic properties, hindrance effects, partition coefficients (Log P) and molecular electrostatic potential surfaces, among other physicochemical methods that exhibit quantitative structure-activity relationships.

  8. What can formal methods offer to digital flight control systems design

    NASA Technical Reports Server (NTRS)

    Good, Donald I.

    1990-01-01

    Formal methods research is beginning to produce methods that enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional, continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of the extensive mathematical modeling that is common in other parts of flight system engineering. Formal methods research shows that by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.

  9. Aerodynamic aircraft design methods and their notable applications: Survey of the activity in Japan

    NASA Technical Reports Server (NTRS)

    Fujii, Kozo; Takanashi, Susumu

    1991-01-01

    An overview of aerodynamic aircraft design methods and their recent applications in Japan is presented. A design code which was developed at the National Aerospace Laboratory (NAL) and is in use now is discussed; hence, most of the examples are the result of collaborative work between heavy industry and the National Aerospace Laboratory. A wide variety of applications in transonic to supersonic flow regimes are presented. Although the design of aircraft elements for external flows is the main focus, some internal flow applications are also presented. Recent applications of the design code, using the Navier-Stokes and Euler equations in the analysis mode, include the design of HOPE (a space vehicle) and Upper Surface Blowing (USB) aircraft configurations.

  10. Performance-based plastic design method for steel concentric braced frames

    NASA Astrophysics Data System (ADS)

    Banihashemi, M. R.; Mirzagoltabar, A. R.; Tavakoli, H. R.

    2015-09-01

    This paper presents a performance-based plastic design (PBPD) methodology for the design of steel concentric braced frames. The design base shear is obtained from an energy-work balance equation using a pre-selected target drift and yield mechanism. To achieve the intended yield mechanism and behavior, plastic design is applied to detail the frame members. For validation, three baseline frames (3-, 6-, and 9-story) are designed according to AISC (Seismic Provisions for Structural Steel Buildings, American Institute of Steel Construction, Chicago, 2005) seismic provisions. The frames are then redesigned based on the PBPD method and subjected to extensive nonlinear dynamic time-history analyses. The results show that the PBPD frames meet all the intended performance objectives in terms of yield mechanisms and target drifts, whereas the baseline frames show very poor response due to premature brace fractures leading to unacceptably large drifts and instability.

  11. A new method for designing a compliant mechanism based displacement amplifier

    NASA Astrophysics Data System (ADS)

    Bharanidaran, R.; Aswin Srikanth, Sai

    2016-09-01

    With the advancement of precision industries, displacement amplifying devices are essential for producing precise, long-range motion from micro-actuators. A compliant-mechanism-based displacement amplifier (DA) is well suited to attaining high-precision motion. A compliant mechanism utilizes the elastic nature of the material to achieve the required motion. In this research paper, a compliant mechanism design is developed using topology optimization. The output of the topologically optimized design is impossible to fabricate as-is due to the presence of impractical regions. Hence, this optimized design is treated as a primary design of the compliant mechanism, which provides the configuration of the kinematic linkages as well as the geometrical locations of the flexure hinges. Selection of appropriate geometrical parameters for the flexure hinges is another critical task in the design process, and a parameterization technique is used to determine the flexure hinge parameters. The structural performance of the mechanical amplifier is confirmed using the finite element method (FEM).

  12. On the Use of Parametric-CAD Systems and Cartesian Methods for Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2004-01-01

    Automated, high-fidelity tools for aerodynamic design face critical issues in attempting to optimize real-life geometry and in permitting radical design changes. Success in these areas promises not only significantly shorter design-cycle times, but also superior and unconventional designs. To address these issues, we investigate the use of a parametric-CAD system in conjunction with an embedded-boundary Cartesian method. Our goal is to combine the modeling capabilities of feature-based CAD with the robustness and flexibility of component-based Cartesian volume-mesh generation for complex geometry problems. We present the development of an automated optimization framework with a focus on the deployment of such a CAD-based design approach in a heterogeneous parallel computing environment.

  13. Efficient design of a truss beam by applying first order optimization method

    NASA Astrophysics Data System (ADS)

    Fedorik, Filip

    2013-10-01

    The application of optimization procedures in structural design is a widely discussed problem, driven by the still-increasing demands placed on structures. The use of optimization methods for efficient design is developing rapidly, especially in repetitive production, where even small savings can lead to a considerable reduction in total costs. The presented paper deals with the application and analysis of the first-order optimization technique, implemented in the Design Optimization module of the multiphysics FEM program ANSYS, to steel truss-beam design. The design constraints are stated by EN 1993 Eurocode 3, covering uniform compression in compression members and tensile resistance in tension members. Furthermore, a minimum frequency of the first natural mode shape of the structure is imposed. The aim of the solution is to minimize the weight of the structure by changing the members' cross-section properties.
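    In the same spirit as the paper's first-order (gradient-based) approach, a minimal sizing example can be run with SciPy's SLSQP instead of the ANSYS Design Optimization module: minimize the weight of a symmetric two-bar truss subject to a stress limit. The load, geometry and 355 MPa limit are illustrative assumptions, not the paper's data.

```python
# Gradient-based sizing of a symmetric two-bar truss (illustrative).
import numpy as np
from scipy.optimize import minimize

rho, L, theta = 7850.0, 2.0, np.radians(30.0)  # steel density, member length, angle
P, sigma_allow = 200e3, 355e6                  # load [N], stress limit [Pa]

def weight(A):          # total mass of both members, kg
    return rho * L * (A[0] + A[1])

def stress_margin(A):   # sigma_allow - N/A >= 0 for each member
    N = P / (2.0 * np.cos(theta))              # axial force per member
    return np.array([sigma_allow - N / A[0], sigma_allow - N / A[1]])

res = minimize(weight, x0=[1e-3, 1e-3], method="SLSQP",
               bounds=[(1e-6, None)] * 2,
               constraints=[{"type": "ineq", "fun": stress_margin}])
print(res.x)  # both areas shrink to N / sigma_allow, about 3.25e-4 m^2
```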

  14. Differential assessment of designations of wetland status using two delineation methods.

    PubMed

    Wu, Meiyin; Kalma, Dennis; Treadwell-Steitz, Carol

    2014-07-01

    Two different methods are commonly used to delineate and characterize wetlands. The U.S. Army Corps of Engineers (ACOE) delineation method uses field observation of hydrology, soils, and vegetation. The U.S. Fish and Wildlife Service's National Wetland Inventory Program (NWI) relies on remote sensing and photointerpretation. This study compared designations of wetland status at selected study sites using both methods. Twenty wetlands from the Wetland Boundaries Map of the Ausable-Boquet River Basin (created using the revised NWI method) in the Ausable River watershed in Essex and Clinton Counties, NY, were selected for this study. Sampling sites within and beyond the NWI wetland boundaries were selected. During the summers of 2008 and 2009, wetland hydrology, soils, and vegetation were examined for wetland indicators following the methods described in the ACOE delineation manual. The study shows that the two methods agree at 78 % of the sampling sites and disagree at 22 % of the sites. Ninety percent of the sampling locations within the wetland boundaries on the NWI maps were categorized as ACOE wetlands with all three ACOE wetland indicators present. A binary logistic regression model was used to analyze the relationship between the designations of the two methods. The outcome of the model indicates that the two wetland designation methods agree 83 % of the time. When discrepancies are found, it is the presence or absence of wetland hydrology and vegetation that causes the differences in delineation.
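    Agreement between two designation methods of this kind is commonly summarized by raw percent agreement and a chance-corrected statistic such as Cohen's kappa. The counts below are placeholders consistent with roughly 78% agreement, not the study's tallies.

```python
# Raw agreement and Cohen's kappa for paired wetland designations.
from sklearn.metrics import cohen_kappa_score

# Paired site designations: 1 = wetland, 0 = non-wetland (illustrative).
nwi  = [1] * 45 + [1] * 5 + [0] * 17 + [0] * 33  # NWI map designation
acoe = [1] * 45 + [0] * 5 + [1] * 17 + [0] * 33  # ACOE field designation

agreement = sum(a == b for a, b in zip(nwi, acoe)) / len(nwi)
kappa = cohen_kappa_score(nwi, acoe)
print(f"raw agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```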

  15. Predictive Array Design. A method for sampling combinatorial chemistry library space.

    PubMed

    Lipkin, M J; Rose, V S; Wood, J

    2002-01-01

    A method, Predictive Array Design, is presented for sampling combinatorial chemistry space and selecting a subarray for synthesis based on the experimental design method of Latin Squares. The method is appropriate for libraries with three sites of variation. Libraries with four sites of variation can be designed using the Graeco-Latin Square. Simulated annealing is used to optimise the physicochemical property profile of the sub-array. The sub-array can be used to make predictions of the activity of compounds in the all combinations array if we assume each monomer has a relatively constant contribution to activity and that the activity of a compound is composed of the sum of the activities of its constitutive monomers.
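    The Latin-square construction and the additive-monomer assumption above can be sketched directly: an n x n x n library is sampled by n² compounds whose third monomer follows a cyclic Latin square, and a least-squares additive model fitted on the sub-array predicts untested combinations. All activities below are synthetic.

```python
# Latin-square sub-array sampling of a 3-site library plus additive model.
import numpy as np

rng = np.random.default_rng(3)
n = 6
a, b, c = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)

def activity(i, j, k):  # assumed additive monomer contributions + noise
    return a[i] + b[j] + c[k] + rng.normal(scale=0.05)

# Cyclic Latin square: the site-3 monomer is k = (i + j) mod n.
pairs = [(i, j) for i in range(n) for j in range(n)]
ks = {(i, j): (i + j) % n for i, j in pairs}
y = np.array([activity(i, j, ks[i, j]) for i, j in pairs])

# Fit the additive model by least squares (one indicator per monomer).
X = np.zeros((n * n, 3 * n))
for row, (i, j) in enumerate(pairs):
    X[row, i] = X[row, n + j] = X[row, 2 * n + ks[i, j]] = 1.0
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict an untested combination from the full n^3 array.
i, j, k = 0, 1, 2
pred = coef[i] + coef[n + j] + coef[2 * n + k]
print(f"predicted {pred:.2f} vs. true {a[i] + b[j] + c[k]:.2f}")
```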

  16. Development of a neutronics calculation method for designing commercial type Japanese sodium-cooled fast reactor

    SciTech Connect

    Takeda, T.; Shimazu, Y.; Hibi, K.; Fujimura, K.

    2012-07-01

    Under the R&D project to improve modeling accuracy for the design of fast breeder reactors, the authors are developing a neutronics calculation method for designing a large commercial type sodium-cooled fast reactor. The calculation method is established by taking into account the special features of the reactor, such as the use of annular fuel pellets, inner duct tubes in large fuel assemblies, and a large core. The verification and validation, and uncertainty quantification (V&V and UQ) of the calculation method is being performed using measured data from the prototype FBR Monju. The results of this project will be used in the design and analysis of the commercial type demonstration FBR, known as the Japan Sodium-cooled Fast Reactor (JSFR). (authors)

  17. Prospects of Applying Enhanced Semi-Empirical QM Methods for Virtual Drug Design.

    PubMed

    Yilmazer, Nusret Duygu; Korth, Martin

    2016-01-01

    The last five years have seen a renaissance of semiempirical quantum mechanical (SQM) methods in the field of virtual drug design, largely due to the increased accuracy of so-called enhanced SQM approaches. These methods make use of additional terms for treating dispersion (D) and hydrogen bond (H) interactions with an accuracy comparable to dispersion-corrected density functional theory (DFT-D). DFT-D in turn was shown to provide an accuracy comparable to the most sophisticated QM approaches when it comes to non-covalent intermolecular forces, which usually dominate the protein/ligand interactions that are central to virtual drug design. Enhanced SQM methods thus offer a very promising way to improve upon the current state of the art in the field of virtual drug design. PMID:27183985
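    The "D" part of these enhanced schemes is typically a damped pairwise -C6/r^6 term added to the electronic energy. The sketch below uses a Grimme-D2-style functional form with placeholder parameters; it is not a published parameterization.

```python
# Damped pairwise dispersion correction (D2-style form, toy values).
import numpy as np

def e_dispersion(coords, C6, R0, s6=1.0, d=20.0):
    """Sum of damped -C6/r^6 pair energies (atomic units throughout)."""
    E = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(C6[i] * C6[j])    # geometric-mean combining rule
            fdmp = 1.0 / (1.0 + np.exp(-d * (r / (R0[i] + R0[j]) - 1.0)))
            E -= s6 * c6ij / r**6 * fdmp
    return E

# Two "atoms" 4 Bohr apart; C6 and vdW radii are placeholder values.
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 4.0]])
print(e_dispersion(coords, C6=[30.0, 30.0], R0=[1.5, 1.5]))
```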

  18. Design of Intelligent Hydraulic Excavator Control System Based on PID Method

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Jiao, Shengjie; Liao, Xiaoming; Yin, Penglong; Wang, Yulin; Si, Kuimao; Zhang, Yi; Gu, Hairong

    Most domestically designed hydraulic excavators adopt the constant power design method and set 85%~90% of engine power as the hydraulic system absorption power, which causes high energy loss due to power mismatch between the engine and the pump. Since the variation of the engine's rotational speed reflects the power shift of the load, it provides a new way to adjust the power matching between engine and pump through engine speed. Based on a negative flux hydraulic system, an intelligent hydraulic excavator control system was designed using the rotational speed sensing method to improve energy efficiency. The control system consists of an engine control module, a pump power adjustment module, an engine idle module and a system fault diagnosis module. A special PLC with a CAN bus was used to acquire the sensor signals and adjust the pump absorption power according to load variation. Four energy-saving control strategies combined with the constant power method were employed to improve fuel utilization. Three power modes (H, S and L) were designed to meet different working conditions; an auto idle function was employed to save energy through two pressure switches that detect work status, with 1300 rpm set as the idle speed according to the engine fuel consumption curve. A transient overload function was designed for deep digging within a short time without spending extra fuel. An incremental PID method was employed to realize power matching between engine and pump; the variation of the rotational speed was taken as the PID algorithm's input, and the current of the proportional valve of the variable displacement pump was the PID's output. The results indicated that auto idle could decrease fuel consumption by 33.33% compared to working at the maximum speed of H mode, and that the PID control method could make full use of the maximum engine power in each power mode while keeping the engine speed within a stable range. Application of the rotational speed sensing method provides a reliable way to improve the excavator's energy efficiency and
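    The incremental (velocity-form) PID named above computes only the change in output each cycle, which suits slow actuators such as a pump's proportional valve. The sketch below pairs it with a toy engine model; the gains and plant numbers are illustrative assumptions, not the paper's controller.

```python
# Incremental PID: du_k = Kp*(e_k - e_k1) + Ki*e_k + Kd*(e_k - 2*e_k1 + e_k2).
def make_incremental_pid(kp, ki, kd, u0=0.0):
    e1 = e2 = 0.0
    u = u0
    def step(error):
        nonlocal e1, e2, u
        du = kp * (error - e1) + ki * error + kd * (error - 2.0 * e1 + e2)
        u += du                      # only the increment is computed per cycle
        e2, e1 = e1, error
        return u
    return step

pid = make_incremental_pid(kp=2.0, ki=0.5, kd=0.1, u0=500.0)  # valve current, mA
ref, speed, load = 1800.0, 1800.0, 0.0                        # engine speed, rpm
for k in range(400):
    if k == 100:
        load = 100.0                 # a sudden digging load appears
    current = pid(speed - ref)       # overspeed -> absorb more pump power
    # Toy engine: net acceleration from engine torque - pump torque - load.
    speed += 0.02 * (1000.0 - current - load)
print(round(speed, 1), round(current, 1))  # speed settles back near 1800 rpm
```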

  19. Use of experimental data in testing methods for design against uncertainty

    NASA Astrophysics Data System (ADS)

    Rosca, Raluca Ioana

    Modern design methods take into consideration the fact that uncertainty is present in everyday life, whether in the form of variable loads (the strongest wind that would affect a building), the material properties of an alloy, or the future demand for a product or the cost of labor. Moreover, the Japanese example showed that it may be more cost-effective to design while taking the existence of uncertainty into account rather than planning to eliminate or greatly reduce it. The dissertation starts by comparing the theoretical bases of two methods for design against uncertainty, namely probability theory and possibility theory. A two-variable design problem is then used to show the differences. It is concluded that for design problems with two or more failure cases of very different magnitude (such as a car stopping due to lack of gas versus motor failure), probability theory divides existing resources in a more intuitive way than possibility theory. The dissertation continues with the description of simple experiments (building towers of dominoes) and then presents a methodology to increase the amount of information that can be drawn from a given data set. The methodology is demonstrated on the Bidder-Challenger problem, a simulation of the problem faced by a company that makes microchips in setting a target speed for its next microchip. The simulations use the domino experimental data. It is demonstrated that important insights into methods of probability- and possibility-based design can be gained from experiments.

  20. Study on the rotor design method for a small propeller-type wind turbine

    NASA Astrophysics Data System (ADS)

    Nishi, Yasuyuki; Yamashita, Yusuke; Inagaki, Terumi

    2016-08-01

    Small propeller-type wind turbines operate at low Reynolds numbers, limiting the choice of usable airfoils. Thus, their design method is not sufficiently established, and their performance is often low. The ultimate goal of this research is to establish high-performance design guidelines and design methods for small propeller-type wind turbines. To that end, we designed two rotors: Rotor A, based on the optimum rotor design method from blade element momentum theory, and Rotor B, in which the chord length of the tip is extended and the chord length distribution is linearized. We examined the performance characteristics and flow fields of the two rotors through wind tunnel experiments and numerical analysis. Our results revealed that the maximum-output tip speed ratio of Rotor B shifted lower than that of Rotor A, but the maximum output coefficient increased by approximately 38.7%. Rotors A and B experienced a large-scale separation on the hub side, which extended to the mean radius in Rotor A. This difference in separation contributed to the significant decrease in Rotor A's output compared to the design value and the increase in Rotor B's output compared to Rotor A.
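    Rotor A's starting point, the optimum rotor from blade element momentum theory, fixes the inflow angle and chord at each radius from the local speed ratio. A minimal sketch of those classical relations follows (blade count, design lift coefficient and radius are illustrative, not the paper's rotor):

```python
# Optimum-rotor chord and twist distributions from BEM theory.
import numpy as np

B, R, tsr = 3, 0.6, 5.0             # blades, tip radius [m], design tip speed ratio
Cl_design, alpha_design = 0.9, 5.0  # design lift coefficient and AoA [deg]

r = np.linspace(0.15, 1.0, 6) * R   # radial stations
lam_r = tsr * r / R                 # local speed ratio
phi = (2.0 / 3.0) * np.degrees(np.arctan(1.0 / lam_r))  # inflow angle [deg]
chord = 8.0 * np.pi * r * (1.0 - np.cos(np.radians(phi))) / (B * Cl_design)
twist = phi - alpha_design          # blade twist distribution [deg]

for ri, ci, ti in zip(r, chord, twist):
    print(f"r = {ri:.2f} m  chord = {ci * 1000:5.1f} mm  twist = {ti:5.1f} deg")
```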