An Integrated Magnetic Circuit Model and Finite Element Model Approach to Magnetic Bearing Design
NASA Technical Reports Server (NTRS)
Provenza, Andrew J.; Kenny, Andrew; Palazzolo, Alan B.
2003-01-01
A code for designing magnetic bearings is described. The code generates curves from magnetic circuit equations relating important bearing performance parameters. Bearing parameters selected from the curves by a designer to meet the requirements of a particular application are input directly by the code into a three-dimensional finite element analysis preprocessor. This means that a three-dimensional computer model of the bearing being developed is immediately available for viewing. The finite element model solution can be used to show areas of magnetic saturation and make more accurate predictions of the bearing load capacity, current stiffness, position stiffness, and inductance than the magnetic circuit equations did at the start of the design process. In summary, the code combines one-dimensional and three-dimensional modeling methods for designing magnetic bearings.
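As an illustration of the magnetic-circuit (one-dimensional) stage of such a design process, the sketch below evaluates textbook horseshoe-actuator relations for gap flux density, load capacity, current stiffness, and position stiffness while sweeping the air gap, the kind of parametric curve a designer would read values from. All names and values are generic assumptions for illustration, not parameters of the NASA code.

```python
# Hedged sketch of a 1-D magnetic-circuit design stage (ideal iron,
# two pole faces).  Symbols: turns N, coil current I, gap g, pole area A.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def circuit_estimates(N, I, g, A):
    """Ideal-iron magnetic-circuit estimates for one horseshoe actuator."""
    B = MU0 * N * I / (2.0 * g)      # air-gap flux density [T]
    F = B**2 * A / MU0               # force from two pole faces [N]
    k_i = 2.0 * F / I                # current stiffness dF/dI [N/A]
    k_x = 2.0 * F / g                # (destabilizing) position stiffness [N/m]
    return B, F, k_i, k_x

# Sweep the gap to generate the kind of design curves described above.
for g in (0.3e-3, 0.5e-3, 1.0e-3):
    B, F, k_i, k_x = circuit_estimates(N=200, I=3.0, g=g, A=6e-4)
    print(f"g={g*1e3:.1f} mm: B={B:.2f} T, F={F:.0f} N, ki={k_i:.0f} N/A")
```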
Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry.
Sohrabpour, M; Hassanzadeh, M; Shahriari, M; Sharifzadeh, M
2002-10-01
The Monte Carlo transport code MCNP has been applied to simulate the dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose ratios, and efficiencies have been simulated as functions of product density. Simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The simulated system design data were also found to agree favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling various dose-mapping exercises for gamma irradiators.
Acceptance criteria for welds in ASTM A106 grade B steel pipe and plate
NASA Technical Reports Server (NTRS)
Hudson, C. M.; Wright, D. B., Jr.; Leis, B. N.
1986-01-01
Based on the RECERT Program findings, NASA-Langley funded a fatigue study of code-unacceptable welds. Usage curves based on the structural integrity of the welds were developed; the details of this study are presented in NASA CR-178114. The information presented here is a condensation and reinterpretation of that report, generating usage curves for welds having: (1) indications 0.20-inch deep by 0.40-inch long, and (2) indications 0.195-inch deep by 8.4-inches long. These curves were developed using the procedures used in formulating the design curves in Section VIII, Division 2 of the American Society of Mechanical Engineers Boiler and Pressure Vessel Code.
Grid Generation Techniques Utilizing the Volume Grid Manipulator
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
1998-01-01
This paper presents grid generation techniques available in the Volume Grid Manipulation (VGM) code. The VGM code is designed to manipulate existing line, surface and volume grids to improve the quality of the data. It embodies an easy-to-read, rich command language that enables such alterations as topology changes, grid adaptation and smoothing. Additionally, the VGM code can be used to construct simplified straight lines, splines, and conic sections, which are common curves used in the generation and manipulation of points, lines, surfaces and volumes (i.e., grid data). These simple geometric curves are essential in the construction of domain discretizations for computational fluid dynamic simulations. By comparison to previously established methods of generating these curves interactively, the VGM code provides control of slope continuity and grid point-to-point stretching, as well as quick changes in the controlling parameters. The VGM code offers the capability to couple the generation of these geometries with an extensive manipulation methodology in a scripting language. The scripting language allows parametric studies of a vehicle geometry to be performed efficiently to evaluate favorable trends in the design process. As examples of the capabilities of the VGM code, a wake flow field domain is appended to an existing X33 Venturestar volume grid; negative volumes resulting from grid expansions to enable flow field capture on a simple geometry are corrected; and geometrical changes to a vehicle component of the X33 Venturestar are shown.
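As an illustration of the point-to-point stretching control mentioned above, the sketch below implements a generic one-sided tanh clustering function of the kind widely used in grid generation. It is a minimal example under assumed conventions (function name and clustering parameter beta are ours), not VGM's command language or algorithm.

```python
# Illustrative one-sided tanh stretching for grid point clustering.
import numpy as np

def tanh_cluster(n, beta):
    """n points on [0, 1], clustered toward s = 0; beta > 0 sets strength."""
    eta = np.linspace(0.0, 1.0, n)
    return 1.0 + np.tanh(beta * (eta - 1.0)) / np.tanh(beta)

s = tanh_cluster(n=11, beta=3.0)
print(np.round(s, 4))            # point locations
print(np.round(np.diff(s), 4))   # spacing grows smoothly away from s = 0
```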
Effect of Light Water Reactor Water Environments on the Fatigue Life of Reactor Materials
Chopra, O. K.; Stevens, G. L.; Tregoning, R.; ...
2017-10-06
The American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (Code) provides rules for the design of Class 1 components of nuclear power plants. Figures I-9.1 through I-9.6 of Appendix I to Section III of the Code specify fatigue design curves for applicable structural materials. However, the Code design curves do not explicitly address the effects of light water reactor (LWR) water environments. Existing fatigue strain-vs.-life (ε-N) laboratory data illustrate potentially significant effects of LWR water environments on the fatigue resistance of pressure vessel and piping steels. Extensive studies have been conducted at Argonne National Laboratory and elsewhere since 1990 to investigate the effects of LWR environments on the fatigue life of piping and pressure vessel steels. This article summarizes the results of these studies. Existing fatigue ε-N data were evaluated to identify the various material, environmental, and loading conditions that influence fatigue crack initiation; a methodology for estimating fatigue lives as a function of these parameters was developed. The effects were incorporated into the ASME Code Section III fatigue evaluations in terms of an environmental correction factor, Fen, which is defined as the ratio of fatigue life in air at room temperature to the fatigue life in the LWR water environment at reactor operating temperatures. Available fatigue data were used to develop fatigue design curves for carbon and low-alloy steels, austenitic stainless steels, and nickel-chromium-iron (Ni-Cr-Fe) alloys and their weld metals in air at room temperature. A review of the Code Section III fatigue adjustment factors of 2 on strain and 20 on life is also presented, and the possible conservatism inherent in the choice of these adjustment factors is evaluated. A brief description of potential effects of neutron irradiation on fatigue crack initiation for these structural materials is also presented.
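As a concrete illustration of how Fen enters a fatigue evaluation, the sketch below divides the allowable cycles from the air design curve by an assumed Fen before accumulating usage. The cycle count, allowable life, and Fen value are hypothetical placeholders, not data from the article.

```python
# Minimal sketch of applying an environmental correction factor Fen
# in a Section III cumulative-usage calculation.
def adjusted_usage(cycles_applied, N_air, F_en):
    """Partial usage factor for one load pair, with environmental penalty.

    N_air : allowable cycles from the air design curve at this strain range
    F_en  : fatigue life in RT air / fatigue life in LWR water (>= 1)
    """
    N_water = N_air / F_en           # environmentally adjusted allowable life
    return cycles_applied / N_water  # partial usage factor for this pair

# Example: 500 transients, air curve allows 2.0e4 cycles, assumed Fen = 6.
print(adjusted_usage(500, 2.0e4, 6.0))  # 0.15 instead of 0.025 in air
```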
Complexity of Curved Glass Structures
NASA Astrophysics Data System (ADS)
Kosić, T.; Svetel, I.; Cekić, Z.
2017-11-01
Despite the growing body of research on curvilinear architectural structures and the technological and practical improvements in glass production observed over recent years, there is still a lack of comprehensive codes and standards, recommendations and experience data linked to real-life applications of curved glass structures regarding design, manufacture, use, performance and economy. However, more and more complex buildings and structures with large areas of geometrically complex glass envelopes are built every year. The aim of the presented research is to collect data on the existing design philosophy from curved glass structure cases. The investigation includes a survey of how architects and engineers deal with different design aspects of curved glass structures, with a special focus on the design and construction process, glass types and structural and fixing systems. The current paper gives a brief overview of the survey findings.
2009-03-01
[List-of-figures residue; recoverable captions: Figure 4-1, applied voltage versus deflection curve for a Poly1/Poly2 stacked 300-μm single hot-arm actuator; Figure 4-2, applied voltage versus deflection curve for a Poly1/Poly2 stacked 300-μm double hot-arm actuator; Figure 4-5, deflection vs. power curves for an individual wedge.]
Aerodynamic shape optimization of Airfoils in 2-D incompressible flow
NASA Astrophysics Data System (ADS)
Rangasamy, Srinivethan; Upadhyay, Harshal; Somasekaran, Sandeep; Raghunath, Sreekanth
2010-11-01
An optimization framework was developed for maximizing the region of a 2-D airfoil immersed in laminar flow while enhancing aerodynamic performance. It uses a genetic algorithm with a population of 125, run for 1000 generations, to optimize the airfoil. On a stand-alone computer, a run takes about an hour to obtain a converged solution. The airfoil geometry was generated using two Bezier curves: one to represent the thickness and the other the camber of the airfoil. The airfoil profile was generated by adding the thickness curve to, and subtracting it from, the camber curve. The coefficients of lift and drag were computed using the potential-flow velocity distribution obtained from a panel code, and a boundary-layer transition prediction code was used to predict the location of the onset of transition. The objective function of a particular design is evaluated as the weighted average of aerodynamic characteristics at various angles of attack. Optimization was carried out for several objective functions and the airfoil designs obtained were analyzed.
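A minimal sketch of the geometry construction described above: evaluate two Bezier curves, one for the camber (mean) line and one for the half-thickness, and form the upper and lower surfaces as their sum and difference. The control points are arbitrary illustrative values, not the optimizer's design variables.

```python
# Airfoil from two Bezier curves: surfaces = camber +/- half-thickness.
import numpy as np
from math import comb

def bezier(ctrl, t):
    """Evaluate a 1-D Bezier curve (Bernstein form) at parameter values t."""
    ctrl = np.asarray(ctrl, dtype=float)
    n = len(ctrl) - 1
    basis = np.array([comb(n, k) * t**k * (1.0 - t)**(n - k)
                      for k in range(n + 1)])
    return basis.T @ ctrl

t = np.linspace(0.0, 1.0, 101)                       # chordwise parameter
camber         = bezier([0.0, 0.04, 0.05, 0.0], t)   # mean line (y/c)
half_thickness = bezier([0.0, 0.10, 0.08, 0.0], t)   # thickness distribution
upper = camber + half_thickness                      # upper surface
lower = camber - half_thickness                      # lower surface
print(f"max thickness ~ {2 * half_thickness.max():.3f} chord")
```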
Grison, Claire M; Robin, Sylvie; Aitken, David J
2015-11-21
The de novo design of a β/γ-peptidic foldamer motif has led to the discovery of an unprecedented 9/8-ribbon featuring an uninterrupted alternating C9/C8 hydrogen-bonding network. The ribbons adopt partially curved topologies determined jointly by the β-residue configuration and the γ-residue conformation.
NASA Astrophysics Data System (ADS)
Gorash, Yevgen; Comlekci, Tugrul; MacKenzie, Donald
2017-05-01
This study investigates the effects of fatigue material data and finite element types on the accuracy of residual life assessments under high-cycle fatigue. The bending of cross-beam connections is simulated in ANSYS Workbench for different combinations of structural member shapes made of a typical structural steel. The stress analysis of weldments with specific dimensions and loading applied is implemented using solid and shell elements. The stress results are transferred to the fatigue code nCode DesignLife for the residual life prediction. Considering the effects of mean stress using the FKM approach, and of bending and thickness according to BS 7608:2014, fatigue life is predicted using the Volvo method and stress integration rules from the ASME Boiler & Pressure Vessel Code. Three different pairs of S-N curves are considered in this work, including generic seam weld curves and curves for the equivalent Japanese steel JIS G3106-SM490B. The S-N curve parameters for the steel are identified using the experimental data available from NIMS fatigue data sheets, employing the least-squares method and considering thickness and mean stress corrections. The numerical predictions are compared to the available experimental results, indicating the most preferable fatigue data input, range of applicability and FE-model formulation to achieve the best accuracy.
Software Measurement Guidebook. Version 02.00.02
1992-12-01
[List-of-figures and index residue; recoverable captions: Compatibility Testing Process; Development Effort Planning Curve.]
NASA Technical Reports Server (NTRS)
Demoss, J. F. (Compiler)
1971-01-01
Calibration curves for the Apollo 16 command service module pulse code modulation downlink and onboard display are presented. Subjects discussed are: (1) measurement calibration curve format, (2) measurement identification, (3) multi-mode calibration data summary, (4) pulse code modulation bilevel events listing, and (5) calibration curves for instrumentation downlink and meter link.
Generalized Bezout's Theorem and its applications in coding theory
NASA Technical Reports Server (NTRS)
Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.
1996-01-01
This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound on the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determining a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2^3) is also constructed.
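For context, the classical theorem that the paper generalizes can be stated as follows (a standard result; the paper's generalization tightens the bound on distinct intersection points for a restricted class of curves):

```latex
% Classical Bezout theorem, the result generalized in the paper above.
Let $C_1, C_2 \subset \mathbb{P}^2$ be plane curves over an algebraically
closed field, of degrees $d_1$ and $d_2$, with no common component. Then
\[
  \#\bigl(C_1 \cap C_2\bigr) \;\le\; d_1 d_2 ,
\]
with equality when intersection points are counted with multiplicity.
```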
FY17 Status Report on the Initial EPP Finite Element Analysis of Grade 91 Steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, M. C.; Sham, T. -L.
This report describes a modification to the elastic-perfectly plastic (EPP) strain limits design method to account for cyclic softening in Gr. 91 steel. The report demonstrates that the unmodified EPP strain limits method described in the current ASME code case is not conservative for materials with substantial cyclic softening behavior like Gr. 91 steel. However, the EPP strain limits method can be modified to be conservative for softening materials by using softened isochronous stress-strain curves in place of the standard curves developed from unsoftened creep experiments. The report provides softened curves derived from inelastic material simulations and factors describing the transformation of unsoftened curves to a softened state. Furthermore, the report outlines a method for deriving these factors directly from creep/fatigue tests. If the material softening saturates, the proposed EPP strain limits method can be further simplified, providing a methodology based on temperature-dependent softening factors that could be implemented in an ASME code case allowing the use of the EPP strain limits method with Gr. 91. Finally, the report demonstrates the conservatism of the modified method when applied to inelastic simulation results and two-bar experiments.
New method to design stellarator coils without the winding surface
NASA Astrophysics Data System (ADS)
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi
2018-01-01
Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. Conventional approaches assume a toroidal 'winding' surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that includes both physical requirements and engineering constraints is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Applications to a simple stellarator configuration, W7-X and LHD vacuum fields are presented.
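One natural way to parameterize such closed space curves is by truncated Fourier series in a curve parameter t; the sketch below builds one such curve and measures its length, the kind of engineering quantity a target function can penalize. This is an illustrative toy with made-up coefficients, not FOCUS's actual representation or target function.

```python
# A closed 3-D space curve from truncated Fourier series.
import numpy as np

def closed_curve(t, xc, xs, yc, ys, zc, zs):
    """x(t) = sum_m xc[m] cos(m t) + xs[m] sin(m t), likewise for y, z."""
    m = np.arange(len(xc))[:, None]        # mode numbers 0..M as a column
    c, s = np.cos(m * t), np.sin(m * t)
    return np.stack([xc @ c + xs @ s, yc @ c + ys @ s, zc @ c + zs @ s])

t = np.linspace(0.0, 2.0 * np.pi, 257)     # t[0] and t[-1] coincide (closed)
z3 = np.zeros(3)
xyz = closed_curve(t,
                   xc=np.array([0.0, 1.0, 0.0]), xs=z3,       # unit circle, x
                   yc=z3, ys=np.array([0.0, 1.0, 0.0]),       # unit circle, y
                   zc=np.array([0.5, 0.0, 0.1]), zs=z3)       # m=2 z-wiggle
length = np.linalg.norm(np.diff(xyz, axis=1), axis=0).sum()
print(f"coil length ~ {length:.3f}  (2*pi for the undistorted circle)")
```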
Monte Carlo simulation of β-γ coincidence system using plastic scintillators in 4π geometry
NASA Astrophysics Data System (ADS)
Dias, M. S.; Piuvezam-Filho, H.; Baccarelli, A. M.; Takeda, M. N.; Koskinas, M. F.
2007-09-01
A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory at IPEN, São Paulo, Brazil, has been applied to simulating a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons were calculated previously with the PENELOPE code and applied as input data to Esquema. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve of the observed disintegration rate as a function of the beta efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without need of conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented and showed results in good agreement with a conventional proportional counter 4πβ(PC)-γ coincidence system.
Design for progressive fracture in composite shell structures
NASA Technical Reports Server (NTRS)
Minnetyan, Levon; Murthy, Pappu L. N.
1992-01-01
The load carrying capability and structural behavior of composite shell structures and stiffened curved panels are investigated to provide accurate early design loads. An integrated computer code is utilized for the computational simulation of composite structural degradation under practical loading for realistic design. Damage initiation, growth, accumulation, and propagation to structural fracture are included in the simulation. Progressive fracture investigations providing design insight for several classes of composite shells are presented. Results demonstrate the significance of local defects, interfacial regions, and stress concentrations on the structural durability of composite shells.
NASA Astrophysics Data System (ADS)
Couvreur, A.
2009-05-01
The theory of algebraic-geometric codes was developed in the early 1980s following a paper by V.D. Goppa. Given a smooth projective algebraic curve X over a finite field, there are two different constructions of error-correcting codes. The first one, called "functional", uses some rational functions on X, and the second one, called "differential", involves some rational 1-forms on this curve. Hundreds of papers are devoted to the study of such codes. In addition, a generalization of the functional construction for algebraic varieties of arbitrary dimension was given by Y. Manin in a 1984 article. A few papers about such codes have been published, but nothing has been done concerning a generalization of the differential construction to the higher-dimensional case. In this thesis, we propose a differential construction of codes on algebraic surfaces. Afterwards, we study the properties of these codes and particularly their relations with functional codes. A rather surprising fact is that a major difference from the case of curves appears. Indeed, while in the case of curves a differential code is always the orthogonal of a functional one, this assertion generally fails for surfaces. This last observation motivates the study of codes which are the orthogonal of some functional code on a surface. We prove that, under some condition on the surface, these codes can be realized as sums of differential codes. Moreover, we show that answers to some open problems "à la Bertini" could give very interesting information on the parameters of these codes.
Development of Yield and Tensile Strength Design Curves for Alloy 617
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lybeck, Nancy; Sham, T. -L.
2013-10-01
The U.S. Department of Energy Very High Temperature Reactor Program is acquiring data in preparation for developing an Alloy 617 Code Case for inclusion in the nuclear section of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel (B&PV) Code. A draft code case was previously developed, but the effort was suspended before acceptance by ASME. As part of the draft code case effort, a database was compiled of yield and tensile strength data from tests performed in air. Yield strength and tensile strength at temperature are used to set the time-independent allowable stress for construction materials in B&PV Code, Section III, Subsection NH. The yield and tensile strength data used for the draft code case have been augmented with additional data generated by Idaho National Laboratory and Oak Ridge National Laboratory in the U.S. and CEA in France. The standard ASME Section II procedure for generating yield and tensile strength at temperature is presented, along with alternate methods that accommodate the change in temperature trends seen at high temperatures, resulting in a more consistent design margin over the temperature range of interest.
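The sketch below illustrates the general shape of such a procedure: fit a smooth yield-strength ratio trend Ry(T) to elevated-temperature data by least squares and scale the specified minimum room-temperature yield by it. The data points, polynomial order, and minimum yield value are invented for illustration and are not the Alloy 617 database values.

```python
# Illustrative ratio-trend step for yield strength at temperature.
import numpy as np

T     = np.array([ 25., 200., 400., 600., 800., 950.])   # deg C
Sy    = np.array([350., 310., 280., 255., 215., 160.])   # MPa, measured
ratio = Sy / Sy[0]                                       # Ry(T) = Sy(T)/Sy(RT)

coef = np.polyfit(T, ratio, deg=3)        # smooth trend curve
Sy_min_RT = 240.0                         # specified minimum RT yield [MPa]

for temp in (300.0, 700.0):
    Ry = np.polyval(coef, temp)
    print(f"T={temp:.0f} C: Ry={Ry:.3f}, Sy_min={Ry * Sy_min_RT:.0f} MPa")
```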
Light curves for bump Cepheids computed with a dynamically zoned pulsation code
NASA Technical Reports Server (NTRS)
Adams, T. F.; Castor, J. I.; Davis, C. G.
1980-01-01
The dynamically zoned pulsation code developed by Castor, Davis, and Davison was used to recalculate the Goddard model and to calculate three other Cepheid models with the same period (9.8 days). This family of models shows how the bumps and other features of the light and velocity curves change as the mass is varied at constant period. The use of a code that is capable of producing reliable light curves demonstrates that the light and velocity curves for 9.8 day Cepheid models with standard homogeneous compositions do not show bumps like those that are observed unless the mass is significantly lower than the 'evolutionary mass.' The light and velocity curves for the Goddard model presented here are similar to those computed independently by Fischel, Sparks, and Karp. They should be useful as standards for future investigators.
Sensitivity study of the monogroove with screen heat pipe design
NASA Technical Reports Server (NTRS)
Evans, Austin L.; Joyce, Martin
1988-01-01
The present sensitivity study of design variable effects on the performance of a monogroove-with-screen heat pipe obtains performance curves for maximum heat-transfer rates vs. operating temperatures by means of a computer code; performance projections for both 1-g and zero-g conditions are obtainable. The variables in question were liquid and vapor channel design, wall groove design, and the number of feed lines in the evaporator and condenser. The effect on performance of three different working fluids, namely ammonia, methanol, and water, were also determined. Greatest sensitivity was to changes in liquid and vapor channel diameters.
Supernova Light Curves and Spectra from Two Different Codes: Supernu and Phoenix
NASA Astrophysics Data System (ADS)
Van Rossum, Daniel R; Wollaeger, Ryan T
2014-08-01
The observed similarities between light curve shapes from Type Ia supernovae, and in particular the correlation of light curve shape and brightness, have been actively studied for more than two decades. In recent years, hydrodynamic simulations of white dwarf explosions have advanced greatly, and multiple mechanisms that could potentially produce Type Ia supernovae have been explored in detail. The question of which of the proposed mechanisms is (or are) realized in nature remains challenging to answer, but detailed synthetic light curves and spectra from explosion simulations are very helpful and important guidelines towards answering it. We present results from a newly developed radiation transport code, Supernu. Supernu solves the supernova radiation transfer problem using a novel technique based on a hybrid between Implicit Monte Carlo and Discrete Diffusion Monte Carlo. This technique enhances efficiency with respect to traditional Implicit Monte Carlo codes and thus lends itself well to multi-dimensional simulations. We show direct comparisons of light curves and spectra from Type Ia simulations with Supernu versus the legacy Phoenix code.
The Los Alamos Supernova Light Curve Project: Current Projects and Future Directions
NASA Astrophysics Data System (ADS)
Wiggins, Brandon Kerry; Los Alamos Supernovae Research Group
2015-01-01
The Los Alamos Supernova Light Curve Project models supernovae in the ancient and modern universe to determine the luminosities and observability of certain supernova events and to explore the physics of supernovae in the local universe. The project utilizes RAGE, Los Alamos' radiation hydrodynamics code, to evolve the explosions of progenitors prepared in well-established stellar evolution codes. RAGE allows us to capture events such as shock breakout and collisions of ejecta with shells of material, which cannot be modeled well in other codes. RAGE's dumps are then ported to LANL's SPECTRUM code, which uses LANL's OPLIB opacities database to calculate light curves and spectra. In this paper, we summarize our recent work in modeling supernovae.
Athermalization of infrared dual field optical system based on wavefront coding
NASA Astrophysics Data System (ADS)
Jiang, Kai; Jiang, Bo; Liu, Kai; Yan, Peipei; Duan, Jing; Shan, Qiu-sha
2017-02-01
Wavefront coding is a technology that combines optical design and digital image processing. By inserting a phase mask close to the pupil plane of the optical system, the wavefront of the system is re-modulated, and the depth of focus is extended consequently. The idea is essentially the same as the athermalization theory of infrared optical systems. In this paper, an uncooled infrared dual-field optical system with effective focal lengths of 38 mm/19 mm, an F-number of 1.2 at both focal lengths, and an operating wavelength range of 8 μm to 12 μm was designed. A cubic phase mask was used at the pupil plane to re-modulate the wavefront. The performance of the infrared system was then simulated in CODE V as the environment temperature varied from -40° to 60°. MTF curves of the optical system with the phase mask are compared with those obtained before using the phase mask. The results show that wavefront coding technology can make the system insensitive to thermal defocus, and thus realize the athermal design of the infrared optical system.
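A minimal Fourier-optics sketch of the principle: a cubic pupil phase makes the MTF nearly invariant to defocus (the optical analogue of thermal defocus), at the cost of a lower but restorable MTF. The 1-D scalar model, the mask strength alpha, and the defocus values in waves are illustrative assumptions, not the parameters of the system in the paper.

```python
# Toy 1-D model: MTF of a pupil carrying cubic phase alpha*u^3 plus
# defocus w20*u^2 (both in waves).
import numpy as np

def mtf_1d(alpha, w20, n=512):
    u = np.linspace(-1.0, 1.0, n)                 # normalized pupil coordinate
    pupil = np.exp(2j * np.pi * (alpha * u**3 + w20 * u**2))
    psf = np.abs(np.fft.fft(pupil, 8 * n))**2     # zero-padded finite aperture
    otf = np.abs(np.fft.fft(psf))                 # MTF = |FFT(PSF)|
    return otf[:n] / otf[0]

for w20 in (0.0, 1.0, 2.0):                       # waves of defocus
    plain = mtf_1d(alpha=0.0,  w20=w20)[40]
    coded = mtf_1d(alpha=20.0, w20=w20)[40]
    print(f"W20={w20:.0f}: clear pupil MTF={plain:.3f}, cubic mask MTF={coded:.3f}")
```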
NASA Astrophysics Data System (ADS)
Jiménez-Varona, J.; Ponsin Roca, J.
2015-06-01
Under a contract with AIRBUS MILITARY (AI-M), an exercise to analyze the potential of optimization techniques to improve wing performance at cruise conditions has been carried out using an in-house design code. The original wing was provided by AI-M and several constraints were posed for the redesign. To maximize the aerodynamic efficiency at cruise, optimizations were performed using the design techniques developed internally at INTA under a research program (Programa de Termofluidodinámica). The code is a gradient-based optimization code, which uses a classical finite-differences approach for gradient computations. Several techniques for search direction computation are implemented for unconstrained and constrained problems. Techniques for geometry modification are based on different approaches, which include perturbation functions for the thickness and/or mean line distributions and others based on fitting Bézier curves of a certain degree. It is very important to address a real design, which involves several constraints that significantly reduce the feasible design space, and assessment of the code is needed in order to check its capabilities and possible drawbacks. Lessons learnt will help in the development of future enhancements. In addition, the validation of the results was done using the well-known TAU flow solver and a far-field drag method in order to determine accurately the improvement in terms of drag counts.
Optimal design of composite hip implants using NASA technology
NASA Technical Reports Server (NTRS)
Blake, T. A.; Saravanos, D. A.; Davy, D. T.; Waters, S. A.; Hopkins, D. A.
1993-01-01
Using an adaptation of NASA software, we have investigated the use of numerical optimization techniques for the shape and material optimization of fiber composite hip implants. The NASA in-house codes were originally developed for the optimization of aerospace structures. The adapted code, called OPORIM, couples numerical optimization algorithms with finite element analysis and composite laminate theory to perform design optimization using both shape and material design variables. The external and internal geometry of the implant and the surrounding bone is described with quintic spline curves. This geometric representation is then used to create an equivalent 2-D finite element model of the structure. Using laminate theory and the 3-D geometric information, equivalent stiffnesses are generated for each element of the 2-D finite element model, so that the 3-D stiffness of the structure can be approximated. The geometric information to construct the model of the femur was obtained from a CT scan. A variety of test cases were examined, incorporating several implant constructions and design variable sets. Typically the code was able to produce optimized shape and/or material parameters which substantially reduced stress concentrations in the bone adjacent to the implant. The results indicate that this technology can provide meaningful insight into the design of fiber composite hip implants.
TOOKUIL: A case study in user interface development for safety code application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, D.L.; Harkins, C.K.; Hoole, J.G.
1997-07-01
Traditionally, there has been a very high learning curve associated with using nuclear power plant (NPP) analysis codes. Even for seasoned plant analysts and engineers, the process of building or modifying an input model for present-day NPP analysis codes is tedious, error prone, and time consuming. Current cost constraints and performance demands place an additional burden on today's safety analysis community. Advances in graphical user interface (GUI) technology have been applied to obtain significant productivity and quality assurance improvements for Transient Reactor Analysis Code (TRAC) input model development. KAPL Inc. has developed an X Windows-based graphical user interface named TOOKUIL which supports the design and analysis process, acting as a preprocessor, runtime editor, help system, and post processor for TRAC. This paper summarizes the objectives of the project, the GUI development process and experiences, and the resulting end product, TOOKUIL.
NASA Technical Reports Server (NTRS)
Evans, Austin Lewis
1987-01-01
A computer code to model the steady-state performance of a monogroove heat pipe for the NASA Space Station is presented, including the effects on heat pipe performance of a screen in the evaporator section which deals with transient surges in the heat input. Errors in a previous code have been corrected, and the new code adds additional loss terms in order to model several different working fluids. Good agreement with existing performance curves is obtained. From a preliminary evaluation of several of the radiator design parameters it is found that an optimum fin width could be achieved but that structural considerations limit the thickness of the fin to a value above optimum.
Recognition of Protein-coding Genes Based on Z-curve Algorithms
Guo, Feng-Biao; Lin, Yan; Chen, Ling-Ling
2014-01-01
Recognition of protein-coding genes, a classical bioinformatics issue, is an absolutely needed step for annotating newly sequenced genomes. The Z-curve algorithm, as one of the most effective methods on this issue, has been successfully applied in annotating or re-annotating many genomes, including those of bacteria, archaea and viruses. Two Z-curve based ab initio gene-finding programs have been developed: ZCURVE (for bacteria and archaea) and ZCURVE_V (for viruses and phages). ZCURVE_C (for 57 bacteria) and Zfisher (for any bacterium) are web servers for re-annotation of bacterial and archaeal genomes. The above four tools can be used for genome annotation or re-annotation, either independently or combined with the other gene-finding programs. In addition to recognizing protein-coding genes and exons, Z-curve algorithms are also effective in recognizing promoters and translation start sites. Here, we summarize the applications of Z-curve algorithms in gene finding and genome annotation. PMID:24822027
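The Z-curve transform itself is compact enough to state in a few lines; the sketch below computes the three standard cumulative components (purine vs. pyrimidine, amino vs. keto, weak vs. strong hydrogen bonding) for a DNA string. The feature-extraction and classification layers of programs like ZCURVE are not reproduced here.

```python
# The Z-curve transform of a DNA sequence (standard definition).
import numpy as np

def z_curve(seq):
    s = np.frombuffer(seq.upper().encode(), dtype=np.uint8)
    a = np.cumsum(s == ord("A"))
    c = np.cumsum(s == ord("C"))
    g = np.cumsum(s == ord("G"))
    t = np.cumsum(s == ord("T"))
    x = (a + g) - (c + t)   # purine vs. pyrimidine
    y = (a + c) - (g + t)   # amino vs. keto
    z = (a + t) - (g + c)   # weak vs. strong hydrogen bonding
    return x, y, z

x, y, z = z_curve("ATGGCGTACGTTAGC")
print(x[-1], y[-1], z[-1])
```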
Evaluation of Aeroelastically Tailored Small Wind Turbine Blades Final Project Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, Dayton A.
2005-09-29
Evaluation of Aeroelastically Tailored Small Wind Turbine Blades Final Report. Global Energy Concepts, LLC (GEC) has performed a conceptual design study concerning aeroelastic tailoring of small wind turbine blades. The primary objectives were to evaluate ways that blade/rotor geometry could be used to enable cost-of-energy reductions by enhancing energy capture while constraining or mitigating blade costs, system loads, and related component costs. This work builds on insights developed in ongoing adaptive-blade programs but with a focus on application to small turbine systems with isotropic blade material properties and with combined blade sweep and pre-bending/pre-curving to achieve the desired twist coupling. Specific goals of this project are to: (A) Evaluate and quantify the extent to which rotor geometry can be used to realize load-mitigating small wind turbine rotors. Primary aspects of the load mitigation are: (1) Improved overspeed safety effected by blades twisting toward stall in response to speed increases. (2) Reduced fatigue loading effected by blade twisting toward feather in response to turbulent gusts. (B) Illustrate trade-offs and design sensitivities for this concept. (C) Provide the technical basis for small wind turbine manufacturers to evaluate this concept and commercialize it if the technology appears favorable. The SolidWorks code was used to rapidly develop solid models of blades with varying shapes and material properties. Finite element analyses (FEA) were performed using the COSMOS code, modeling tip loads and centripetal accelerations. This tool set was used to investigate the potential for aeroelastic tailoring with combined planform sweep and pre-curve. An extensive matrix of design variables was investigated, including aerodynamic design, magnitude and shape of planform sweep, magnitude and shape of blade pre-curve, material stiffness, and rotor diameter. The FEA simulations resulted in substantial insights into the structural response of these blades. The trends were used to identify geometries and rotor configurations that showed the greatest promise for achieving beneficial aeroelastic response. The ADAMS code was used to perform complete aeroelastic simulations of selected rotor configurations; however, the results of these simulations were not satisfactory. This report documents the challenges encountered with the ADAMS simulations and presents recommendations for further development of this concept for aeroelastically tailored small wind turbine blades.
NASA Technical Reports Server (NTRS)
McGowan, David M.; Anderson, Melvin S.
1998-01-01
The analytical formulation of curved-plate non-linear equilibrium equations that include transverse-shear-deformation effects is presented. A unified set of non-linear strains that contains terms from both physical and tensorial strain measures is used. Using several simplifying assumptions, linearized stability equations are derived that describe the response of the plate just after bifurcation buckling occurs. These equations are then modified to allow the plate reference surface to be located a distance z_c from the centroid surface, which is convenient for modeling stiffened-plate assemblies. The implementation of the new theory into the VICONOPT buckling and vibration analysis and optimum design program code is described. Either classical plate theory (CPT) or first-order shear-deformation plate theory (SDPT) may be selected in VICONOPT. Comparisons of numerical results for several example problems with different loading states are made. Results from the new curved-plate analysis compare well with closed-form solution results and with results from known example problems in the literature. Finally, a design-optimization study of two different cylindrical shells subject to uniform axial compression is presented.
Młynarski, Wiktor
2015-05-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral position, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. Obtained tuning curves match well tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
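As a toy illustration of the sparse-coding principle invoked above, the sketch below runs ISTA inference for a fixed random dictionary: find coefficients a minimizing 0.5*||x - Da||^2 + lam*||a||_1. The paper's model is hierarchical and complex-valued and learns its dictionary from natural binaural sounds; none of that is reproduced here.

```python
# Sparse-coding inference by ISTA with a fixed dictionary D.
import numpy as np

def ista(x, D, lam=0.05, n_iter=300):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 over coefficients a."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L      # gradient step on the data term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(128)
a_true[[3, 40, 99]] = [1.0, -0.5, 2.0]     # sparse "cause" of the signal
a_hat = ista(D @ a_true, D)
print("recovered support:", np.flatnonzero(np.abs(a_hat) > 0.1))
```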
SPIDERMAN: Fast code to simulate secondary transits and phase curves
NASA Astrophysics Data System (ADS)
Louden, Tom; Kreidberg, Laura
2017-11-01
SPIDERMAN calculates exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. The code uses a geometrical algorithm to solve exactly the area of sections of the disc of the planet that are occulted by the star. Approximately 1000 models can be generated per second in typical use, which makes Markov Chain Monte Carlo analyses practicable. The code is modular and allows comparison of the effect of multiple different brightness distributions for a dataset.
NASA Technical Reports Server (NTRS)
Hairr, John W.; Huang, Jui-Ten; Ingram, J. Edward; Shah, Bharat M.
1992-01-01
The ISPAN (Interactive Stiffened Panel Analysis) program is an interactive design tool that is intended to provide a means of performing simple and self-contained preliminary analysis of aircraft primary structures made of composite materials. The program combines a series of modules with the finite element code DIAL as its backbone. Four ISPAN modules were developed and are documented: (1) flat stiffened panel; (2) curved stiffened panel; (3) flat tubular panel; and (4) curved geodesic panel. Users input geometric and material properties, load information and the type of analysis (linear, bifurcation buckling, or post-buckling) interactively. Using this information, the program generates the finite element mesh and performs the analysis. The output, in the form of summary tables of stress or margins of safety, contour plots of loads or stress, and deflected shape plots, may be generated and used to evaluate a specific design.
Optimization of 3D Field Design
NASA Astrophysics Data System (ADS)
Logan, Nikolas; Zhu, Caoxiang
2017-10-01
Recent progress in 3D tokamak modeling is now leveraged to create a conceptual design of new external 3D field coils for the DIII-D tokamak. Using the IPEC dominant mode as a target spectrum, the Finding Optimized Coils Using Space-curves (FOCUS) code optimizes the currents and 3D geometry of multiple coils to maximize the total set's resonant coupling. The optimized coils are individually distorted in space, creating toroidal 'arrays' containing a variety of shapes that often wrap around a significant poloidal extent of the machine. The generalized perturbed equilibrium code (GPEC) is used to determine optimally efficient spectra for driving total, core, and edge neoclassical toroidal viscosity (NTV) torque, and these too provide targets for the optimization of 3D coil designs. These conceptual designs represent a fundamentally new approach to 3D coil design for tokamaks targeting desired plasma physics phenomena. Optimized coil sets based on plasma response theory will be relevant to designs for future reactors or on any active machine. External coils, in particular, must be optimized for reliable and efficient fusion reactor designs. Work supported by the US Department of Energy under DE-AC02-09CH11466.
Composite blade structural analyzer (COBSTRAN) user's manual
NASA Technical Reports Server (NTRS)
Aiello, Robert A.
1989-01-01
The installation and use of the COBSTRAN (COmposite Blade STRuctural ANalyzer) computer code, developed for the design and analysis of composite turbofan and turboprop blades and also for composite wind turbine blades, are described. This code combines composite mechanics and laminate theory with an internal database of fiber and matrix properties. Inputs to the code are constituent fiber and matrix material properties, factors reflecting the fabrication process, composite geometry and blade geometry. COBSTRAN performs the micromechanics, macromechanics and laminate analyses of these fiber composites. COBSTRAN generates a NASTRAN model with equivalent anisotropic homogeneous material properties. Stress output from NASTRAN is used to calculate individual ply stresses, strains, interply stresses, through-the-thickness stresses and failure margins. Curved panel structures may be modeled provided the curvature of a cross-section is defined by a single-valued function. COBSTRAN is written in FORTRAN 77.
NASA-VOF3D: A three-dimensional computer program for incompressible flows with free surfaces
NASA Astrophysics Data System (ADS)
Torrey, M. D.; Mjolsness, R. C.; Stein, L. R.
1987-07-01
Presented is the NASA-VOF3D three-dimensional, transient, free-surface hydrodynamics program. This three-dimensional extension of NASA-VOF2D will, in principle, permit treatment in full three-dimensional generality of the wide variety of applications that could be treated by NASA-VOF2D only within the two-dimensional idealization. In particular, it, like NASA-VOF2D, is specifically designed to calculate confined flows in a low g environment. The code is presently restricted to cylindrical geometry. The code is based on the fractional volume-of-fluid method and allows multiple free surfaces with surface tension and wall adhesion. It also has a partial cell treatment that allows curved boundaries and internal obstacles. This report provides a brief discussion of the numerical method, a code listing, and some sample problems.
Reliability Analysis of a Green Roof Under Different Storm Scenarios
NASA Astrophysics Data System (ADS)
William, R. K.; Stillwell, A. S.
2015-12-01
Urban environments continue to face the challenges of localized flooding and decreased water quality brought on by the increasing amount of impervious area in the built environment. Green infrastructure provides an alternative to conventional storm sewer design by using natural processes to filter and store stormwater at its source. However, there are currently few consistent standards available in North America to ensure that installed green infrastructure is performing as expected. This analysis offers a method for characterizing green roof failure using a visual aid commonly used in earthquake engineering: fragility curves. We adapted the concept of the fragility curve based on the efficiency in runoff reduction provided by a green roof compared to a conventional roof under different storm scenarios. We then used the 2D distributed surface water-groundwater coupled model MIKE SHE to model the impact that a real green roof might have on runoff in different storm events. We then employed a multiple regression analysis to generate an algebraic demand model that was input into the Matlab-based reliability analysis model FERUM, which was then used to calculate the probability of failure. The use of reliability analysis as a part of green infrastructure design code can provide insights into green roof weaknesses and areas for improvement. It also supports the design of code that is more resilient than current standards and is easily testable for failure. Finally, the understanding of reliability of a single green roof module under different scenarios can support holistic testing of system reliability.
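The sketch below shows the skeleton of such a fragility analysis: for each storm depth, sample uncertain roof parameters, evaluate a demand model, and estimate the probability that runoff-reduction efficiency falls below a target. The toy demand model, parameter distributions, and target value are invented stand-ins for the MIKE SHE regression model and the FERUM reliability analysis.

```python
# Monte Carlo estimate of a green-roof fragility curve.
import numpy as np

rng = np.random.default_rng(1)

def efficiency(depth_mm, storage_mm):
    """Toy demand model: efficiency falls as storm depth exceeds storage."""
    return np.clip(storage_mm / depth_mm, 0.0, 1.0)

target = 0.5                                   # required runoff reduction
for depth in range(10, 101, 10):               # storm depths [mm]
    storage = rng.normal(30.0, 8.0, 10_000)    # uncertain retention capacity
    eff = efficiency(depth, np.clip(storage, 1.0, None))
    pf = np.mean(eff < target)                 # probability of "failure"
    print(f"{depth:3d} mm storm: P(failure) = {pf:.3f}")
```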
Benchmarking of Computational Models for NDE and SHM of Composites
NASA Technical Reports Server (NTRS)
Wheeler, Kevin; Leckey, Cara; Hafiychuk, Vasyl; Juarez, Peter; Timucin, Dogan; Schuet, Stefan; Hafiychuk, Halyna
2016-01-01
Ultrasonic wave phenomena constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials such as carbon-fiber-reinforced polymer (CFRP) laminates. Computational models of ultrasonic guided-wave excitation, propagation, scattering, and detection in quasi-isotropic laminates can be extremely valuable in designing practically realizable NDE and SHM hardware and software with desired accuracy, reliability, efficiency, and coverage. This paper presents comparisons of guided-wave simulations for CFRP composites implemented using three different simulation codes: two commercial finite-element analysis packages, COMSOL and ABAQUS, and a custom code implementing the Elastodynamic Finite Integration Technique (EFIT). Comparisons are also made to experimental laser Doppler vibrometry data and theoretical dispersion curves.
NASA Astrophysics Data System (ADS)
Ivanov, A. S.; Rusinkevich, A. A.; Taran, M. D.
2018-01-01
The FP Kinetics computer code [1], designed for calculating fission product release from HTGR coated fuel particles, was modified to allow consideration of chemical bonding, effects of limited solubility, and component concentration jumps at the interfaces between coating layers. Curves of Cs release from coated particles calculated with the FP Kinetics and PARFUME [2] codes were compared. It has been found that consideration of concentration jumps at the silicon carbide layer interfaces explains some experimental data on Cs release obtained from post-irradiation heating tests. The need to perform experiments to measure solubility limits in coating materials was noted.
Open ISEmeter: An open hardware high-impedance interface for potentiometric detection.
Salvador, C; Mesa, M S; Durán, E; Alvarez, J L; Carbajo, J; Mozo, J D
2016-05-01
In this work, a new open hardware interface based on Arduino to read the electromotive force (emf) from potentiometric detectors is presented. The interface has been fully designed with the open-code philosophy and all documentation will be accessible on the web. The paper describes a comprehensive project including the electronic design, the firmware loaded on the Arduino, and the Java-coded graphical user interface used to load data onto a computer (PC or Mac) for processing. The prototype was tested by measuring the calibration curve of a detector. As the detection element, an active poly(vinyl chloride)-based membrane was used, doped with cetyltrimethylammonium dodecylsulphate (CTA(+)-DS(-)). The experimental emf measurements indicate Nernstian behaviour with the CTA(+) content of the test solutions, as described in the literature, proving the validity of the developed prototype. A comparative analysis of performance was made by using the same chemical detector but changing the measurement instrumentation.
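The calibration-curve processing such an interface supports reduces to fitting the Nernst relation emf = E0 + S*log10(C); the sketch below does this with a least-squares line and compares the slope with the ideal value of about 59.2 mV/decade for a monovalent ion at 25 °C. The emf readings below are invented for illustration.

```python
# Least-squares fit of a Nernstian calibration curve.
import numpy as np

conc = np.array([1e-5, 1e-4, 1e-3, 1e-2, 1e-1])      # CTA+ molar concentration
emf  = np.array([-12.0, 46.5, 105.8, 163.9, 222.1])  # mV (illustrative)

S, E0 = np.polyfit(np.log10(conc), emf, 1)
print(f"slope = {S:.1f} mV/decade (ideal ~ +59.2), E0 = {E0:.1f} mV")
```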
The numerical design of a spherical baroclinic experiment for Spacelab flights
NASA Technical Reports Server (NTRS)
Fowlis, W. W.; Roberts, G. O.
1982-01-01
The near-zero-g environment of Spacelab is the basis of a true spherical experimental model of synoptic-scale baroclinic atmospheric processes, using a radial dielectric body force analogous to gravity over a volume of liquid contained between two concentric spheres. The baroclinic motions are generated by corotating the spheres and imposing thermal boundary conditions such that the liquid is subjected to a stable radial temperature gradient and a latitudinal temperature gradient. Owing to mathematical difficulties associated with the spherical geometry, quantitative design criteria can be acquired only by means of numerical models. The procedure adopted required the development of two computer codes based on the Navier-Stokes equations. The codes, of which the first calculates axisymmetric steady flow solutions and the second determines the growth or decay rates of linear wave perturbations with different wave numbers, are combined to generate marginal stability curves.
Heat Transfer and Fluid Dynamics Measurements in the Expansion Space of a Stirling Cycle Engine
NASA Technical Reports Server (NTRS)
Jiang, Nan; Simon, Terrence W.
2006-01-01
The heater (or acceptor) of a Stirling engine, where most of the thermal energy is accepted into the engine by heat transfer, is the hottest part of the engine. Almost as hot is the adjacent expansion space of the engine. In the expansion space, the flow is oscillatory, impinging on a two-dimensional concavely-curved surface. Knowing the heat transfer on the inside surface of the engine head is critical to the engine design for efficiency and reliability. However, the flow in this region is not well understood and support is required to develop the CFD codes needed to design modern Stirling engines of high efficiency and power output. The present project is to experimentally investigate the flow and heat transfer in the heater head region. Flow fields and heat transfer coefficients are measured to characterize the oscillatory flow as well as to supply experimental validation for the CFD Stirling engine design codes. Presented also is a discussion of how these results might be used for heater head and acceptor region design calculations.
Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model)
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee; Cucinotta, Francis A.
2010-01-01
The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear energy transfer (LET), range (R), and absorption in tissue-equivalent material for a given charge (Z), mass number (A), and kinetic energy (E) of an ion. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from the primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes, by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERM code for application to thick target experiments. The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their experiments, including the ability to model the beam line, the shielding of samples and sample holders, and estimates of the basic physical and biological outputs of the designed experiments. We present an overview of the GERM code GUI, as well as training applications.
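As a minimal illustration of one of the biophysical quantities listed above, the Poisson hit statistics depend only on the particle fluence and the sensitive area. The sketch below is our illustration, not GERM source code; the function and parameter names are ours.

    import math

    def hit_probability(k, fluence, area):
        """Poisson probability of exactly k ion traversals of a sensitive
        region of the given area (um^2) at the given fluence (ions/um^2)."""
        mean_hits = fluence * area
        return mean_hits**k * math.exp(-mean_hits) / math.factorial(k)

    # e.g., the chance that a 100 um^2 nucleus is missed entirely at a
    # fluence of 0.01 ions/um^2 is exp(-1), about 0.37:
    p_miss = hit_probability(0, 0.01, 100.0)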
SPIDERMAN: an open-source code to model phase curves and secondary eclipses
NASA Astrophysics Data System (ADS)
Louden, Tom; Kreidberg, Laura
2018-06-01
We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly for the area of the sections of the planet's disc that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimized to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for a given data set. As a test case, we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two-dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model, we find that the best-fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.
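The geometric core of such a calculation is the overlap area of two discs. SPIDERMAN subdivides the planet disc into sections, but the whole-disc limit reduces to the classic circle-circle intersection formula sketched below (our illustration, not the package's API).

    import numpy as np

    def overlap_area(d, r1, r2):
        """Area of intersection of two discs with radii r1 and r2 whose
        centres are separated by d (the 'occulted' area in an eclipse)."""
        if d >= r1 + r2:                  # no overlap
            return 0.0
        if d <= abs(r1 - r2):             # smaller disc fully covered
            return np.pi * min(r1, r2)**2
        a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
        a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
        corr = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2)
                             * (d - r1 + r2) * (d + r1 + r2))
        return a1 + a2 - corr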
Maximized exoEarth candidate yields for starshades
NASA Astrophysics Data System (ADS)
Stark, Christopher C.; Shaklan, Stuart; Lisman, Doug; Cady, Eric; Savransky, Dmitry; Roberge, Aki; Mandell, Avi M.
2016-10-01
The design and scale of a future mission to directly image and characterize potentially Earth-like planets will be impacted, to some degree, by the expected yield of such planets. Recent efforts to increase the estimated yields, by creating observation plans optimized for the detection and characterization of Earth-twins, have focused solely on coronagraphic instruments; starshade-based missions could benefit from a similar analysis. Here we explore how to prioritize observations for a starshade given the limiting resources of both fuel and time, present analytic expressions to estimate fuel use, and provide efficient numerical techniques for maximizing the yield of starshades. We implemented these techniques to create an approximate design reference mission code for starshades and used this code to investigate how exoEarth candidate yield responds to changes in mission, instrument, and astrophysical parameters for missions with a single starshade. We find that a starshade mission operates most efficiently somewhere between the fuel- and exposure-time-limited regimes and, as a result, is less sensitive to photometric noise sources as well as parameters controlling the photon collection rate in comparison to a coronagraph. We produced optimistic yield curves for starshades, assuming our optimized observation plans are schedulable and future starshades are not thrust-limited. Given these yield curves, detecting and characterizing several dozen exoEarth candidates requires either multiple starshades or η ≳ 0.3.
A PC-based inverse design method for radial and mixed flow turbomachinery
NASA Technical Reports Server (NTRS)
Skoe, Ivar Helge
1991-01-01
An inverse design method suitable for radial and mixed-flow turbomachinery is presented. The codes are based on the streamline curvature concept and therefore run on current personal computers of the 286/287 class. In addition to the imposed aerodynamic constraints, mechanical constraints are imposed during the design process to ensure that the resulting geometry satisfies production considerations and that structural considerations are taken into account. Through the use of Bezier curves in the geometric modeling, the same subroutine is used to prepare input for both the aero and structural files, since it is important that the geometric data be identical for structural analysis and production. To illustrate the method, a mixed-flow turbine design is shown.
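Bezier curves of any degree are evaluated by repeated linear interpolation of the control points (de Casteljau's algorithm). A minimal sketch, with an illustrative camber-line example, is given below; it is our illustration, not the report's subroutine.

    import numpy as np

    def bezier(ctrl, t):
        """Evaluate a Bezier curve at parameter t in [0, 1] by de
        Casteljau's algorithm; ctrl is an (n+1, dim) array of control
        points."""
        pts = np.asarray(ctrl, dtype=float)
        while len(pts) > 1:
            pts = (1.0 - t) * pts[:-1] + t * pts[1:]  # repeated lerp
        return pts[0]

    # cubic example: a blade camber-line sketch in the (x, y) plane
    camber = [(0.0, 0.0), (0.2, 0.15), (0.7, 0.25), (1.0, 0.2)]
    midpoint = bezier(camber, 0.5)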
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karalidi, Theodora; Apai, Dániel; Schneider, Glenn
Deducing the cloud cover and its temporal evolution from the observed planetary spectra and phase curves can give us major insight into the atmospheric dynamics. In this paper, we present Aeolus, a Markov chain Monte Carlo code that maps the structure of brown dwarf and other ultracool atmospheres. We validated Aeolus on a set of unique Jupiter Hubble Space Telescope (HST) light curves. Aeolus accurately retrieves the properties of the major features of the Jovian atmosphere, such as the Great Red Spot and a major 5 μm hot spot. Aeolus is the first mapping code validated on actual observations of a giant planet over a full rotational period. For this study, we applied Aeolus to J- and H-band HST light curves of 2MASS J21392676+0220226 and 2MASS J0136565+093347. Aeolus retrieves three spots at the top of the atmosphere (per observational wavelength) of these two brown dwarfs, with a surface coverage of 21% ± 3% and 20.3% ± 1.5%, respectively. The Jupiter HST light curves will be publicly available via ADS/VIZIR.
PyTranSpot: A tool for multiband light curve modeling of planetary transits and stellar spots
NASA Astrophysics Data System (ADS)
Juvan, Ines G.; Lendl, M.; Cubillos, P. E.; Fossati, L.; Tregloan-Reed, J.; Lammer, H.; Guenther, E. W.; Hanslmeier, A.
2018-02-01
Several studies have shown that stellar activity features, such as occulted and non-occulted starspots, can affect the measurement of transit parameters, biasing studies of transit timing variations and transmission spectra. We present PyTranSpot, which we designed to model multiband transit light curves showing starspot anomalies, inferring both transit and spot parameters. The code follows a pixellation approach to model the star with its corresponding limb darkening, spots, and transiting planet on a two-dimensional Cartesian coordinate grid. We combine PyTranSpot with a Markov chain Monte Carlo framework to study and derive exoplanet transmission spectra, which provides statistically robust values for the physical properties and uncertainties of a transiting star-planet system. We validate PyTranSpot's performance by analyzing eleven synthetic light curves of four different star-planet systems and 20 transit light curves of the well-studied WASP-41b system. We also investigate the impact of starspots on transit parameters and derive wavelength-dependent transit depth values for WASP-41b covering a range of 6200-9200 Å, indicating a flat transmission spectrum.
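A pixellation transit model of the kind described is straightforward to prototype: discretize the stellar disc, apply a limb-darkening law, and sum the unocculted intensity. The sketch below is a toy version (no spots, single band, names ours); a spot could be added as a local intensity scaling of the affected pixels.

    import numpy as np

    def pixellated_flux(rp, xp, yp, u1, u2, n=301):
        """Relative flux of a quadratically limb-darkened star (unit
        radius) occulted by a planet of radius rp centred at (xp, yp),
        computed on an n x n Cartesian pixel grid."""
        x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
        r2 = x**2 + y**2
        on_star = r2 <= 1.0
        mu = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))
        inten = np.where(on_star, 1.0 - u1*(1 - mu) - u2*(1 - mu)**2, 0.0)
        covered = (x - xp)**2 + (y - yp)**2 <= rp**2
        return inten[~covered].sum() / inten.sum()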
Modelling the phase curve and occultation of WASP-43b with SPIDERMAN
NASA Astrophysics Data System (ADS)
Louden, Tom
2017-06-01
We present SPIDERMAN, a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary two-dimensional surface brightness distributions. SPIDERMAN uses an exact geometric algorithm to calculate the area of sub-regions of the planet that are occulted by the star, with no loss in numerical precision. The speed of this calculation makes it possible to run MCMCs that marginalise effectively over the underlying parameters controlling the brightness distribution of exoplanets. The code is fully open source and available on GitHub. We apply the code to the phase curve of WASP-43b using an analytical surface brightness distribution and find an excellent fit to the data. We are able to place direct constraints on the physics of heat transport in the atmosphere, such as the ratio between advective and radiative timescales at different altitudes.
NASA Astrophysics Data System (ADS)
Chrismianto, Deddy; Zakki, Ahmad Fauzan; Arswendo, Berlian; Kim, Dong Joon
2015-12-01
Optimization analysis and computational fluid dynamics (CFD) have been applied simultaneously, in which a parametric model plays an important role in finding the optimal solution. However, it is difficult to create a parametric model for a complex shape with irregular curves, such as a submarine hull form. In this study, the cubic Bezier curve and curve-plane intersection method are used to generate a solid model of a parametric submarine hull form taking three input parameters into account: nose radius, tail radius, and length-height hull ratio (L/H). Application program interface (API) scripting is also used to write code in the ANSYS design modeler. The results show that the submarine shape can be generated with some variation of the input parameters. An example is given that shows how the proposed method can be applied successfully to a hull resistance optimization case. The parametric design of the middle submarine type was chosen to be modified. First, the original submarine model was analyzed, in advance, using CFD. Then, using the response surface graph, some candidate optimal designs with a minimum hull resistance coefficient were obtained. Further, the optimization method in goal-driven optimization (GDO) was implemented to find the submarine hull form with the minimum hull resistance coefficient (Ct). The minimum Ct was obtained. The calculated difference in Ct values between the initial submarine and the optimum submarine is around 0.26%, with the Ct of the initial submarine and the optimum submarine being 0.00150826 and 0.00150429, respectively. The results show that the optimum submarine hull form has a larger nose radius (rn) and larger L/H than those of the initial submarine shape, while the radius of the tail (rt) is smaller than that of the initial shape.
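The curve-plane intersection step can be prototyped as a one-dimensional root find on the signed distance of the Bezier point from the plane. The sketch below assumes a single crossing on the parameter interval; it is our illustration, not the paper's ANSYS API script.

    import numpy as np
    from scipy.optimize import brentq

    def bezier_point(ctrl, t):
        """De Casteljau evaluation of a Bezier curve at parameter t."""
        pts = np.asarray(ctrl, dtype=float)
        while len(pts) > 1:
            pts = (1.0 - t) * pts[:-1] + t * pts[1:]
        return pts[0]

    def plane_crossing(ctrl, n, d):
        """Parameter t where the curve crosses the plane n . x = d,
        assuming the signed distance changes sign once on [0, 1]."""
        f = lambda t: np.dot(n, bezier_point(ctrl, t)) - d
        return brentq(f, 0.0, 1.0)

    # example: where a 3-D cubic crosses the horizontal plane z = 0.1
    ctrl = [(0, 0, 0), (0.3, 0.1, 0.2), (0.7, 0.2, 0.35), (1, 0.2, 0.4)]
    t_star = plane_crossing(ctrl, n=np.array([0.0, 0.0, 1.0]), d=0.1)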
Bolometric Light Curves of Peculiar Type II-P Supernovae
NASA Astrophysics Data System (ADS)
Lusk, Jeremy A.; Baron, E.
2017-04-01
We examine the bolometric light curves of five Type II-P supernovae (SNe 1998A, 2000cb, 2006V, 2006au, and 2009E), which are thought to originate from blue supergiant progenitors like that of SN 1987A, using a new python package named SuperBoL. With this code, we calculate SNe light curves using three common techniques from the literature: the quasi-bolometric method, which integrates the observed photometry; the direct integration method, which additionally corrects for unobserved flux in the UV and IR; and the bolometric correction method, which uses correlations between observed colors and V-band bolometric corrections. We present here the light curves calculated by SuperBoL, along with previously published light curves, as well as peak luminosities and 56Ni yields. We find that the direct integration and bolometric correction light curves largely agree with previously published light curves, but with what we believe to be more robust error calculations, with 0.2 ≲ δL_bol/L_bol ≲ 0.5. Peak luminosities and 56Ni masses are similarly comparable to previous work. SN 2000cb remains an unusual member of this sub-group, owing to its faster rise and flatter plateau than the other supernovae in the sample. Initial comparisons with the NLTE atmosphere code PHOENIX show that the direct integration technique reproduces the luminosity of a model supernova spectrum to ~5% when given synthetic photometry of the spectrum as input. Our code is publicly available. The ability to produce bolometric light curves from observed sets of broadband light curves should be helpful in the interpretation of other types of supernovae, particularly those that are not well characterized, such as extremely luminous supernovae and faint fast objects.
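The quasi-bolometric method reduces to converting broadband magnitudes to monochromatic fluxes and integrating across the observed bands. A compact sketch follows; it is not SuperBoL itself, and the band wavelengths and zero points are illustrative values only.

    import numpy as np

    # effective wavelengths (Angstrom) and zero points (erg/s/cm^2/A)
    # for B, V, R, I -- illustrative numbers, not SuperBoL's calibration
    wav = np.array([4380.0, 5450.0, 6410.0, 7980.0])
    zp = np.array([6.3e-9, 3.6e-9, 2.2e-9, 1.1e-9])

    def quasi_bolometric(mags, distance_cm):
        """Trapezoidal integral of the monochromatic fluxes implied by a
        set of broadband magnitudes, scaled to luminosity (erg/s)."""
        flux = zp * 10.0**(-0.4 * np.asarray(mags))
        return 4.0 * np.pi * distance_cm**2 * np.trapz(flux, wav)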
Fayaz, Shima; Fard-Esfahani, Pezhman; Fard-Esfahani, Armaghan; Mostafavi, Ehsan; Meshkani, Reza; Mirmiranpour, Hossein; Khaghani, Shahnaz
2012-01-01
Homologous recombination (HR) is the major pathway for repairing double strand breaks (DSBs) in eukaryotes and XRCC2 is an essential component of the HR repair machinery. To evaluate the potential role of mutations in gene repair by HR in individuals susceptible to differentiated thyroid carcinoma (DTC) we used high resolution melting (HRM) analysis, a recently introduced method for detecting mutations, to examine the entire XRCC2 coding region in an Iranian population. HRM analysis was used to screen for mutations in three XRCC2 coding regions in 50 patients and 50 controls. There was no variation in the HRM curves obtained from the analysis of exons 1 and 2 in the case and control groups. In exon 3, an Arg188His polymorphism (rs3218536) was detected as a new melting curve group (OR: 1.46; 95%CI: 0.432–4.969; p = 0.38) compared with the normal melting curve. We also found a new Ser150Arg polymorphism in exon 3 of the control group. These findings suggest that genetic variations in the XRCC2 coding region have no potential effects on susceptibility to DTC. However, further studies with larger populations are required to confirm this conclusion. PMID:22481871
SDM - A geodetic inversion code incorporating with layered crust structure and curved fault geometry
NASA Astrophysics Data System (ADS)
Wang, Rongjiang; Diao, Faqi; Hoechner, Andreas
2013-04-01
Currently, inversion of geodetic data for earthquake fault ruptures is mostly based on a uniform half-space earth model because of its closed-form Green's functions. However, the layered structure of the crust can significantly affect the inversion results. The other effect, which is often neglected, is related to the curved fault geometry. In particular, the fault planes of most megathrust earthquakes vary their dip angle with depth from a few to several tens of degrees, and the strike directions of many large earthquakes are variable as well. For simplicity, such curved fault geometry is usually approximated by several connected rectangular segments, leading to an artificial loss of slip resolution and data fit. In this presentation, we introduce a free FORTRAN code incorporating the layered crust structure and curved fault geometry in a user-friendly way. The name SDM stands for Steepest Descent Method, an iterative algorithm used for the constrained least-squares optimization. The new code can be used for joint inversion of different datasets, which may include systematic offsets, as most geodetic data are obtained from relative measurements. These offsets are treated as unknowns to be determined simultaneously with the slip unknowns. In addition, a-priori and physical constraints are considered. The a-priori constraint includes the upper limit of the slip amplitude and the variation range of the slip direction (rake angle) defined by the user. The physical constraint is needed to obtain a smooth slip model, which is realized through a smoothing term to be minimized together with the misfit to data. In contrast to most previous inversion codes, the smoothing can be optionally applied to slip or stress-drop. The code works with an input file, a well-documented example of which is provided with the source code. Application examples are demonstrated.
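The constrained optimization described (bounded slip amplitude, smoothing term minimized together with the data misfit) can be sketched as projected steepest descent on a Tikhonov-style objective. The toy version below is our illustration, not the released FORTRAN code.

    import numpy as np

    def sdm_invert(G, d, L, lam=1.0, smax=10.0, iters=5000):
        """Minimize |G m - d|^2 + lam^2 |L m|^2 subject to 0 <= m <= smax
        by projected steepest descent: G maps slip m to data d, L is a
        smoothing (e.g., Laplacian) operator, smax caps slip amplitude."""
        A = G.T @ G + lam**2 * (L.T @ L)
        b = G.T @ d
        step = 1.0 / np.linalg.norm(A, 2)      # safe step from spectral norm
        m = np.zeros(G.shape[1])
        for _ in range(iters):
            m = np.clip(m - step * (A @ m - b), 0.0, smax)  # descend, project
        return m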
Spotted star mapping by light curve inversion: Tests and application to HD 12545
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.
2013-06-01
A code for mapping the surfaces of spotted stars is developed. The concept of the code is to analyze rotationally modulated light curves. We simulate the process of reconstructing the stellar surface and present the results of the simulation. The reconstruction artifacts caused by the ill-posed nature of the problem are identified. The surface of the spotted component of the system HD 12545 is mapped using this procedure.
A Bell-Curve Based Algorithm for Mixed Continuous and Discrete Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.; Weber, Michael; Sobieszczanski-Sobieski, Jaroslaw
2001-01-01
An evolutionary based strategy utilizing two normal distributions to generate children is developed to solve mixed integer nonlinear programming problems. This Bell-Curve Based (BCB) evolutionary algorithm is similar in spirit to (mu + mu) evolutionary strategies and evolutionary programs but with fewer parameters to adjust and no mechanism for self-adaptation. First, a new version of BCB to solve purely discrete optimization problems is described and its performance tested against a tabu search code for an actuator placement problem. Next, the performance of a combined version of discrete and continuous BCB is tested on 2-dimensional shape problems and on a minimum weight hub design problem. In the latter case the discrete portion is the choice of the underlying beam shape (I, triangular, circular, rectangular, or U).
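A stylized sketch of the two-normal child-generation idea is given below: one Gaussian draw places the child along the line joining two parents, and a second, orthogonal draw perturbs it off that line. This is our reading of the operator, not the paper's implementation; all names and scale parameters are ours.

    import numpy as np

    rng = np.random.default_rng(1)

    def bcb_child(p1, p2, s_line=0.1, s_perp=0.05):
        """Generate one child from two parent design vectors using two
        normal distributions: a bell curve centred at the parents'
        midpoint along their connecting line, plus an orthogonal
        Gaussian perturbation scaled by the parent separation."""
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        direction = p2 - p1
        dist = np.linalg.norm(direction)
        u = direction / dist
        alpha = rng.normal(0.5, s_line)        # position along parent line
        child = p1 + alpha * direction
        v = rng.normal(0.0, s_perp, size=len(p1))
        v -= (v @ u) * u                       # keep only the orthogonal part
        return child + dist * v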
Adaptive zero-tree structure for curved wavelet image coding
NASA Astrophysics Data System (ADS)
Zhang, Liang; Wang, Demin; Vincent, André
2006-02-01
We investigate the issue of efficient data organization and representation of the curved wavelet coefficients [curved wavelet transform (WT)]. We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. Whereas in the embedded zero-tree wavelet (EZW) coder and in set partitioning in hierarchical trees (SPIHT) the parent-child relationship is defined in such a way that a parent has four children restricted to a square of 2×2 pixels, the parent-child relationship in the adaptive zero-tree structure varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results using synthetic and natural images showed the effectiveness of the proposed adaptive zero-tree structure for encoding of the curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in terms of peak SNR (PSNR) compared to the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.
Modeling of short fiber reinforced injection moulded composite
NASA Astrophysics Data System (ADS)
Kulkarni, A.; Aswini, N.; Dandekar, C. R.; Makhe, S.
2012-09-01
A micromechanics-based finite element model (FEM) is developed to facilitate the design of a new production-quality fiber-reinforced plastic injection molded part. The composite part under study is composed of a polyetheretherketone (PEEK) matrix reinforced with 30% by volume fraction of short carbon fibers. The constitutive material models are obtained by using micromechanics-based homogenization theories. The analysis is carried out by successfully coupling two commercial codes, Moldflow and ANSYS. Moldflow is used to predict the fiber orientation by considering the flow kinetics and molding parameters. Material models are input into ANSYS according to the predicted fiber orientation, and the structural analysis is carried out. Thus, in the present approach, a coupling between the two commercial codes Moldflow and ANSYS has been established to enable the analysis of short-fiber-reinforced injection moulded composite parts. The load-deflection curve is obtained for three constitutive material models, namely isotropy, transverse isotropy, and orthotropy. Average values of the predicted quantities are compared to experimental results, obtaining a good correlation. In this manner, the coupled Moldflow-ANSYS model successfully predicts the load-deflection curve of a composite injection molded part.
Open ISEmeter: An open hardware high-impedance interface for potentiometric detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salvador, C.; Carbajo, J.; Mozo, J. D., E-mail: jdaniel.mozo@diq.uhu.es
In this work, a new open hardware interface based on Arduino to read electromotive force (emf) from potentiometric detectors is presented. The interface has been fully designed with the open code philosophy and all documentation will be accessible on the web. The paper describes a comprehensive project including the electronic design, the firmware loaded on the Arduino, and the Java-coded graphical user interface to load data in a computer (PC or Mac) for processing. The prototype was tested by measuring the calibration curve of a detector. As detection element, an active poly(vinyl chloride)-based membrane was used, doped with cetyltrimethylammonium dodecylsulphate (CTA⁺-DS⁻). The experimental measures of emf indicate Nernstian behaviour with the CTA⁺ content of the test solutions, as described in the literature, proving the validity of the developed prototype. A comparative analysis of performance was made by using the same chemical detector but changing the measurement instrumentation.
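A Nernstian calibration of such a detector is a straight-line fit of emf against the logarithm of activity. The sketch below uses made-up readings purely for illustration; it is not part of the published firmware or interface.

    import numpy as np

    # made-up emf readings (mV) at four test-solution activities
    activity = np.array([1e-5, 1e-4, 1e-3, 1e-2])
    emf_mv = np.array([112.0, 171.0, 230.0, 289.0])

    # Nernstian behaviour: emf = E0 + S * log10(a); the slope S should
    # be near 59.2 mV/decade at 25 C for a monovalent ion such as CTA+
    S, E0 = np.polyfit(np.log10(activity), emf_mv, 1)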
Convergence Acceleration and Documentation of CFD Codes for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Marquart, Jed E.
2005-01-01
The development and analysis of turbomachinery components for industrial and aerospace applications has been greatly enhanced in recent years through the advent of computational fluid dynamics (CFD) codes and techniques. Although the use of this technology has greatly reduced the time required to perform analysis and design, there still remains much room for improvement in the process. In particular, there is a steep learning curve associated with most turbomachinery CFD codes, and the computation times need to be reduced in order to facilitate their integration into standard work processes. Two turbomachinery codes have recently been developed by Dr. Daniel Dorney (MSFC) and Dr. Douglas Sondak (Boston University). These codes are entitled Aardvark (for 2-D and quasi 3-D simulations) and Phantom (for 3-D simulations). The codes utilize the General Equation Set (GES), structured grid methodology, and overset O- and H-grids. The codes have been used with success by Drs. Dorney and Sondak, as well as others within the turbomachinery community, to analyze engine components and other geometries. One of the primary objectives of this study was to establish a set of parametric input values which will enhance convergence rates for steady state simulations, as well as reduce the runtime required for unsteady cases. The goal is to reduce the turnaround time for CFD simulations, thus permitting more design parametrics to be run within a given time period. In addition, other code enhancements to reduce runtimes were investigated and implemented. The other primary goal of the study was to develop enhanced users manuals for Aardvark and Phantom. These manuals are intended to answer most questions for new users, as well as provide valuable detailed information for the experienced user. The existence of detailed users manuals will enable new users to become proficient with the codes, as well as reduce the dependency of new users on the code authors. In order to achieve the objectives listed, the following tasks were accomplished: 1) parametric study of preconditioning parameters and other code inputs; 2) code modifications to reduce runtimes; 3) investigation of compiler options to reduce code runtime; and 4) development/enhancement of users manuals for Aardvark and Phantom.
Depletion optimization of lumped burnable poisons in pressurized water reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kodah, Z.H.
1982-01-01
Techniques were developed to construct a set of basic poison depletion curves which deplete in a monotonic manner. These curves were combined to match a required optimized depletion profile by utilizing either linear or non-linear programming methods. Three computer codes, LEOPARD, XSDRN, and EXTERMINATOR-2, were used in the analyses. A depletion routine was developed and incorporated into the XSDRN code to allow the depletion of fuel, fission products, and burnable poisons. The Three Mile Island Unit-1 reactor core was used in this work as a typical PWR core. Two fundamental burnable poison rod designs were studied: a solid cylindrical poison rod, and an annular cylindrical poison rod with water filling the central region. These two designs have either a uniform mixture of burnable poisons or lumped spheroids of burnable poisons in the poison region. Boron and gadolinium are the two burnable poisons which were investigated in this project. Thermal self-shielding factor calculations for solid and annular poison rods were conducted. Also, expressions for overall thermal self-shielding factors for one or more size groups of poison spheroids inside solid and annular poison rods were derived and studied. Poison spheroids deplete at a slower rate than the poison mixture because each spheroid exhibits some self-shielding of its own: the larger the spheroid, the higher the self-shielding effects due to the increase in poison concentration.
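Matching a required depletion profile with a nonnegative combination of basic curves is, in its linear form, a constrained least-squares problem. The sketch below uses illustrative curve shapes, not the dissertation's data, and solves for the mixing fractions with nonnegative least squares.

    import numpy as np
    from scipy.optimize import nnls

    burnup = np.linspace(0.0, 1.0, 50)
    # three monotonically depleting basis curves (illustrative shapes)
    basis = np.column_stack([np.exp(-burnup / tau) for tau in (0.15, 0.4, 1.0)])
    target = 0.6 * np.exp(-burnup / 0.3)   # required optimized profile

    # nonnegative weights of the basic curves (the linear-programming
    # style matching mentioned in the abstract)
    weights, residual = nnls(basis, target)
    fitted_profile = basis @ weights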
GCS component development cycle
NASA Astrophysics Data System (ADS)
Rodríguez, Jose A.; Macias, Rosa; Molgo, Jordi; Guerra, Dailos; Pi, Marti
2012-09-01
The GTC is an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain). First light was on 13/07/2007, and since then it has been in the operation phase. The GTC control system (GCS) is a distributed object- and component-oriented system based on RT-CORBA, and it is responsible for the management and operation of the telescope, including its instrumentation. GCS has used the Rational Unified Process (RUP) in its development. RUP is an iterative software development process framework. After analysing (use cases) and designing (UML) any of the GCS subsystems, an initial component description of its interface is obtained, and from that information a component specification is written. In order to improve code productivity, GCS has adopted code generation to transform this component specification into the skeleton of component classes based on a software framework called the Device Component Framework. Using the GCS development tools, based on javadoc and gcc, in only one step the component is generated, compiled, and deployed, to be tested for the first time through our GUI inspector. The main advantages of this approach are the following: it reduces the learning curve of new developers and the development error rate, allows a systematic use of design patterns in the development and software reuse, speeds up the deliverables of the software product, massively improves design consistency and design quality, and eliminates the future refactoring process that would otherwise be required for the code.
Composite beam analysis linear analysis of naturally curved and twisted anisotropic beams
NASA Astrophysics Data System (ADS)
Borri, Marco; Ghiringhelli, Gian L.; Merlini, Teodoro
1992-05-01
The aim of this report is to present a consistent theory for the deformation of a naturally curved and twisted anisotropic beam. The proposed formulation naturally extends the classical Saint-Venant approach to the case of curved and twisted anisotropic beams. The mathematical model, developed under the assumption of span-wise uniform cross-section, curvature, and twist, can take into account any kind of elastic coupling due to the material properties and the curved geometry. The consistency of the presented mathematical model and its generality with respect to cross-sectional shape make it a useful tool even in a preliminary design optimization context, such as the aeroelastic tailoring of helicopter rotor blades. The advantage of the present procedure is that it only requires a two-dimensional discretization; thus, very detailed analyses can be performed and interlaminar stresses between laminae can be evaluated. Such analyses would be extremely time consuming if performed with standard finite element codes, which prevents their recursive use, for example when optimizing a beam design. Moreover, as a byproduct of the proposed formulation, one obtains the constitutive law of the cross-section in terms of stress resultant and moment and their conjugate strain measures. This constitutive law takes into account any kind of elastic coupling, e.g., torsion-tension, tension-shear, and bending-shear, and constitutes a fundamental input in aeroelastic analyses of helicopter blades. Four simple examples are given in order to show the principal features of the method.
Greene, E.A.; Shapiro, A.M.
1998-01-01
The Fortran code AIRSLUG can be used to generate the type curves needed to analyze the recovery data from prematurely terminated air-pressurized slug tests. These type curves, when used with a graphical software package, enable the engineer or scientist to analyze field tests to estimate transmissivity and storativity. Prematurely terminating the slug test can significantly reduce the overall time needed to conduct the test, especially at low-permeability sites, thus saving time and money.
NASA Astrophysics Data System (ADS)
Cunha, Diego M.; Tomal, Alessandra; Poletti, Martin E.
2013-04-01
In this work, the Monte Carlo (MC) code PENELOPE was employed for the simulation of x-ray spectra in mammography and contrast-enhanced digital mammography (CEDM). Spectra for Mo, Rh, and W anodes were obtained for tube potentials of 24-36 kV for mammography and 45-49 kV for CEDM. The spectra obtained from the simulations were analytically filtered to correspond to the anode/filter combinations usually employed in each technique (Mo/Mo, Rh/Rh, and W/Rh for mammography; Mo/Cu, Rh/Cu, and W/Cu for CEDM). For the Mo/Mo combination, the simulated spectra were compared with those obtained experimentally, and the spectra for the W anode were compared with experimental data from the literature, through comparison of distribution shape, average energies, half-value layers (HVL), and transmission curves. For all combinations evaluated, the simulated spectra were also compared with those provided by different models from the literature. Results showed that the code PENELOPE provides mammographic x-ray spectra in good agreement with those experimentally measured and those from the literature. The differences in the values of HVL ranged between 2-7% for the anode/filter combinations and tube potentials employed in mammography, and were less than 5% for those employed in CEDM. The transmission curves for the spectra obtained also showed good agreement with those computed from reference spectra, with average relative differences of less than 12% for mammography and CEDM. These results show that the code PENELOPE can be a useful tool to generate x-ray spectra for studies in mammography and CEDM, and also for the evaluation of new x-ray tube designs and new anode materials.
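The HVL comparison above amounts to solving for the absorber thickness at which the transmitted intensity of a spectrum halves. The simplified, fluence-weighted sketch below is our illustration (proper HVLs weight by air kerma, and it assumes the HVL lies below the 10 cm search bracket).

    import numpy as np
    from scipy.optimize import brentq

    def hvl_cm(fluence, mu_cm1):
        """First half-value layer of a spectrum: the Al thickness (cm) at
        which the fluence-weighted transmission falls to 0.5, given the
        per-energy-bin fluence and Al attenuation coefficients (1/cm)."""
        transmission = lambda t: (np.sum(fluence * np.exp(-mu_cm1 * t))
                                  / np.sum(fluence) - 0.5)
        return brentq(transmission, 0.0, 10.0)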
The Role of Margin in Link Design and Optimization
NASA Technical Reports Server (NTRS)
Cheung, K.
2015-01-01
Link analysis is a system engineering process in the design, development, and operation of communication systems and networks. Link models are mathematical abstractions representing the useful signal power and the undesirable noise and attenuation effects (including weather effects if the signal path traverses the atmosphere); they are integrated into the link budget calculation, which provides the estimates of signal power and noise power at the receiver. A link margin is then applied, which attempts to counteract the fluctuations of the signal and noise power to ensure reliable data delivery from transmitter to receiver. (The link margin is dictated by the link margin policy or requirements.) A simple link budgeting approach assumes link parameters to be deterministic values and typically adopts a rule-of-thumb policy of 3 dB link margin. This policy works for most S- and X-band links due to their insensitivity to weather effects. But for higher frequency links like Ka-band, Ku-band, and optical communication links, it is unclear if a 3 dB link margin would guarantee link closure. Statistical link analysis, which adopts a 2-sigma or 3-sigma link margin, incorporates link uncertainties in the sigma calculation. (The Deep Space Network (DSN) link margin policies are 2-sigma for downlink and 3-sigma for uplink.) The link reliability can therefore be quantified statistically even for higher frequency links. However, in the current statistical link analysis approach, link reliability is only expressed as the likelihood of exceeding the signal-to-noise ratio (SNR) threshold that corresponds to a given bit-error-rate (BER) or frame-error-rate (FER) requirement. The method does not provide the true BER or FER estimate of the link with margin, or the required SNR that would meet the BER or FER requirement in the statistical sense. In this paper, we perform an in-depth analysis of the relationship between the BER/FER requirement, the operating SNR, and the coding performance curve, in the case when the channel coherence time of the link fluctuation is comparable to or larger than the time duration of a codeword. We compute the "true" SNR design point that would meet the BER/FER requirement by taking into account the fluctuation of signal power and noise power at the receiver, and the shape of the coding performance curve. This analysis yields a number of valuable insights on the design choices of coding scheme and link margin for the reliable data delivery of a communication system - space and ground. We illustrate the aforementioned analysis using a number of standard NASA error-correcting codes.
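The deterministic bookkeeping behind a link budget is a sum in decibels: received Eb/No is EIRP minus losses plus receiver G/T, referenced to Boltzmann's constant and the data rate, and the margin is whatever exceeds the Eb/No the code requires. The sketch below is a generic textbook budget, not the paper's statistical method.

    import math

    def link_margin_db(eirp_dbw, path_loss_db, atm_loss_db,
                       g_over_t_dbk, data_rate_bps, required_ebno_db):
        """Deterministic link budget: received Eb/No (dB) minus the
        Eb/No the coding scheme needs at the target BER/FER."""
        k_db = -228.6                           # Boltzmann constant, dBW/K/Hz
        rate_db = 10.0 * math.log10(data_rate_bps)
        ebno = (eirp_dbw - path_loss_db - atm_loss_db
                + g_over_t_dbk - k_db - rate_db)
        return ebno - required_ebno_db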
Pediconi, Federica; Catalano, Carlo; Venditti, Fiammetta; Ercolani, Mauro; Carotenuto, Luigi; Padula, Simona; Moriconi, Enrica; Roselli, Antonella; Giacomelli, Laura; Kirchin, Miles A; Passariello, Roberto
2005-07-01
The objective of this study was to evaluate the value of a color-coded automated signal intensity curve software package for contrast-enhanced magnetic resonance mammography (CE-MRM) in patients with suspected breast cancer. Thirty-six women with suspected breast cancer based on mammographic and sonographic examinations were preoperatively evaluated with CE-MRM. CE-MRM was performed on a 1.5-T magnet using a 2D FLASH dynamic T1-weighted sequence. A dosage of 0.1 mmol/kg of Gd-BOPTA was administered at a flow rate of 2 mL/s followed by 10 mL of saline. Images were analyzed with the new software package and separately with a standard display method. Statistical comparison was performed of the confidence for lesion detection and characterization with the 2 methods and of the diagnostic accuracy for characterization compared with histopathologic findings. At pathology, 54 malignant lesions and 14 benign lesions were evaluated. All 68 (100%) lesions were detected with both methods and good correlation with histopathologic specimens was obtained. Confidence for both detection and characterization was significantly (P ≤ 0.025) better with the color-coded method, although no difference (P > 0.05) between the methods was noted in terms of the sensitivity, specificity, and overall accuracy for lesion characterization. Excellent agreement between the 2 methods was noted for both the determination of lesion size (κ = 0.77) and the determination of SI/T curves (κ = 0.85). The novel color-coded signal intensity curve software allows lesions to be visualized as false-color maps that correspond to conventional signal intensity-time curves. Detection and characterization of breast lesions with this method is quick and easily interpretable.
Preliminary Design Code for an Axial Stage Compressor
2001-09-01
[Garbled fragment of the appendix source listing: Visual Basic declarations and curve-fit assignments for the incidence- and deviation-angle thickness-correction coefficient arrays (i0ref, dkt, dkit, d0ref); the full listing is not recoverable here.]
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.
1991-01-01
The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30000 K over a pressure range of 10^-4 to 10^-2 atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure. Interpolation is employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. The individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.
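The interpolation step can be sketched as follows: evaluate the temperature curve fits at the two tabulated pressures bracketing the requested pressure and blend them, here linearly in log10(p). This is our illustration of the idea; the report defines the actual fit forms and tabulation.

    import numpy as np

    def property_at(T, p, pressures, curve_fits):
        """Evaluate a thermodynamic or transport property at (T, p) from
        temperature curve fits tabulated at discrete pressures;
        curve_fits[i] is a callable fit in T valid at pressures[i]."""
        logp = np.log10(np.asarray(pressures))
        i = int(np.clip(np.searchsorted(logp, np.log10(p)) - 1,
                        0, len(logp) - 2))
        w = (np.log10(p) - logp[i]) / (logp[i + 1] - logp[i])
        return (1.0 - w) * curve_fits[i](T) + w * curve_fits[i + 1](T)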
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.
2014-04-01
We developed a code for mapping the surfaces of spotted stars with a set of circular spots having a uniform temperature distribution. The flux from the spotted surface is computed by partitioning the spots into elementary areas. The code takes into account the passage of spots behind the visible stellar limb, limb darkening, and the overlapping of spots. The modeling of light curves uses recent results from the theory of stellar atmospheres to take into account the temperature dependence of the flux intensity and limb darkening coefficients. The search for spot parameters is based on the analysis of several light curves obtained in different photometric bands. We test our technique by applying it to HII 1883.
Covariance Matrix Evaluations for Independent Mass Fission Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.
2015-01-15
Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility of generating more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squares method through the CONRAD code. Preliminary results on the mass yields variance-covariance matrix will be presented and discussed on physical grounds for the ²³⁵U(n_th, f) and ²³⁹Pu(n_th, f) reactions.
Structural response of existing spatial truss roof construction based on Cosserat rod theory
NASA Astrophysics Data System (ADS)
Miśkiewicz, Mikołaj
2018-04-01
The paper presents the application of the Cosserat rod theory and a newly developed associated finite element code as tools that support expert engineering design practice. Mechanical principles of 3D spatially curved rods, the laws of dynamics (statics), and the principle of virtual work are discussed. The corresponding FEM approach, with interpolation and accumulation techniques for the state variables, is shown to enable the formulation of C0 Lagrangian rod elements with 6 degrees of freedom per node. Two test examples are shown, proving the correctness and suitability of the proposed formulation. Next, the developed FEM code is applied to assess the structural response of the spatial truss roof of the "Olivia" Sports Arena in Gdansk, Poland. The numerical results are compared with load test results. It is shown that the proposed FEM approach yields correct results.
Hu, Yu; Zylberberg, Joel; Shea-Brown, Eric
2014-01-01
Over repeat presentations of the same stimulus, sensory neurons show variable responses. This “noise” is typically correlated between pairs of cells, and a question with rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem – neural tuning curves, etc. – held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) — if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all. PMID:24586128
NASA Astrophysics Data System (ADS)
Jia, Shouqing; La, Dongsheng; Ma, Xuelian
2018-04-01
The finite difference time domain (FDTD) algorithm and a Green function algorithm are implemented in the numerical simulation of electromagnetic waves in Schwarzschild space-time. The FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. The Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and the Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.
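The equivalent-medium idea can be conveyed with a toy one-dimensional FDTD sketch in which the curvature effects enter only through a spatially varying permittivity profile. This is our normalized-units illustration, not the paper's three-dimensional code.

    import numpy as np

    def fdtd_1d(eps_r, steps, src_pos=10):
        """1-D FDTD with a spatially varying relative permittivity (the
        'equivalent medium'); normalized units with dx = dt = 1."""
        n = len(eps_r)
        ez, hy = np.zeros(n), np.zeros(n)
        for t in range(steps):
            hy[:-1] += ez[1:] - ez[:-1]                   # H from curl E
            ez[1:] += (hy[1:] - hy[:-1]) / eps_r[1:]      # E from curl H
            ez[src_pos] += np.exp(-((t - 30.0) / 10.0)**2)  # Gaussian source
        return ez

    # 'medium' region standing in for the curved-space-time correction:
    eps = np.ones(200)
    eps[100:] = 4.0
    field = fdtd_1d(eps, steps=150)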
GRay: A Massively Parallel GPU-based Code for Ray Tracing in Relativistic Spacetimes
NASA Astrophysics Data System (ADS)
Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal
2013-11-01
We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOP (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae of their dependencies on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as M87, with the Event Horizon Telescope.
PMAnalyzer: a new web interface for bacterial growth curve analysis.
Cuevas, Daniel A; Edwards, Robert A
2017-06-15
Bacterial growth curves are essential representations for characterizing bacterial metabolism within a variety of media compositions. Using high-throughput spectrophotometers capable of processing tens of 96-well plates, quantitative phenotypic information can be easily integrated into the current data structures that describe a bacterial organism. The PMAnalyzer pipeline performs a growth curve analysis to parameterize the unique features occurring within microtiter wells containing specific growth media sources. We have expanded the pipeline capabilities and provide a user-friendly, online implementation of this automated pipeline. PMAnalyzer version 2.0 provides fast automatic growth curve parameter analysis, growth identification, high-resolution figures of sample-replicate growth curves, and several statistical analyses. PMAnalyzer v2.0 can be found at https://edwards.sdsu.edu/pmanalyzer/ . Source code for the pipeline can be found on GitHub at https://github.com/dacuevas/PMAnalyzer . Source code for the online implementation can be found on GitHub at https://github.com/dacuevas/PMAnalyzerWeb . dcuevas08@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
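Growth curve parameterization of this kind is commonly done by fitting a sigmoid. The sketch below fits a Zwietering-style logistic (asymptote A, maximum growth rate mu, lag time lam) to synthetic optical-density data; it is illustrative only, not the PMAnalyzer source.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, A, mu, lam):
        """Zwietering logistic growth curve: asymptote A, maximum
        specific growth rate mu (1/h), lag time lam (h)."""
        return A / (1.0 + np.exp(4.0 * mu / A * (lam - t) + 2.0))

    t = np.linspace(0.0, 24.0, 49)                    # hours
    rng = np.random.default_rng(0)
    od = logistic(t, 0.9, 0.3, 3.0) + rng.normal(0.0, 0.01, t.size)
    (A, mu, lam), cov = curve_fit(logistic, t, od, p0=(1.0, 0.2, 2.0))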
Guide to Using Onionskin Analysis Code (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fugate, Michael Lynn; Morzinski, Jerome Arthur
2016-09-15
This document is a guide to using R code written for the purpose of analyzing onionskin experiments. We expect the user to be very familiar with statistical methods and the R programming language. For more details about onionskin experiments and the statistical methods mentioned in this document, see Storlie, Fugate, et al. (2013). Engineers at LANL experiment with detonators and high explosives to assess performance. The experimental unit, called an onionskin, is a hemisphere consisting of a detonator and a booster pellet surrounded by explosive material. When the detonator explodes, a streak camera mounted above the pole of the hemisphere records when the shock wave arrives at the surface. The output from the camera is a two-dimensional image that is transformed into a curve that shows the arrival time as a function of polar angle. The statistical challenge is to characterize a baseline population of arrival time curves and to compare the baseline curves to curves from a new, so-called test series. The hope is that the new test series of curves is statistically similar to the baseline population.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillion, D.
This code enables one to display, take line-outs on, and perform various transformations on an image created by an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16-bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory. These formats are all explained by the application code. The image may be zoomed infinitely and the gray scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same change-of-lambda width; it is not necessary to do this, however. Line-outs may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic with nice values and proper scientific notation. Typically, spectral lines are curved.
Telemetry advances in data compression and channel coding
NASA Technical Reports Server (NTRS)
Miller, Warner H.; Morakis, James C.; Yeh, Pen-Shu
1990-01-01
This paper addresses the dependence of telecommunication channel coding, forward error-correcting coding, and source data compression coding on integrated circuit technology. Emphasis is placed on real-time high-speed Reed-Solomon (RS) decoding using full custom VLSI technology. Performance curves of NASA's standard channel coder and a proposed standard lossless data compression coder are presented.
Positional glow curve simulation for thermoluminescent detector (TLD) system design
NASA Astrophysics Data System (ADS)
Branch, C. J.; Kearfott, K. J.
1999-02-01
Multi-element and thin-element dosimeters, variable heating rate schemes, and glow-curve analysis have been employed to improve environmental and personnel dosimetry using thermoluminescent detectors (TLDs). Detailed analysis of the effects of errors and optimization of techniques would be highly desirable. However, an understanding of the relationship between TL light production, light attenuation, and precise heating schemes is made difficult by the experimental challenges involved in measuring positional TL light production and temperature variations as a function of time. This work reports the development of a general-purpose computer code, the thermoluminescent detector simulator TLD-SIM, to simulate the heating of any TLD type using a variety of conventional and experimental heating methods, including pulsed focused or unfocused lasers with Gaussian or uniform cross sections, planchet, hot gas, hot finger, optical, infrared, or electrical heating. TLD-SIM has been used to study the impact on TL light production of varying the input parameters, which include: detector composition, heat capacity, heat conductivity, physical size, and density; trapped electron density, the frequency factor of oscillation of electrons in the traps, and the trap-conduction band potential energy difference; heating scheme source terms and heat transfer boundary conditions; and TL light scatter and attenuation coefficients. Temperature profiles and glow curves as a function of position and time, as well as the corresponding temporally and/or spatially integrated glow values, may be plotted while varying any of the input parameters. Examples illustrating TLD system functions, including glow curve variability, will be presented. The flexible capabilities of TLD-SIM promise to enable improved TLD system design.
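For first-order kinetics, a simulated glow curve follows the Randall-Wilkins expression, in which the TL intensity depends on the trap depth, frequency factor, and heating rate. The numerical sketch below, with a crude cumulative integral, is our illustration rather than TLD-SIM code.

    import numpy as np

    def randall_wilkins(T, E, s, n0, beta):
        """First-order Randall-Wilkins glow curve I(T) for trap depth E
        (eV), frequency factor s (1/s), initial trapped charge n0, and
        linear heating rate beta (K/s); k is Boltzmann's constant in
        eV/K."""
        k = 8.617e-5
        boltz = np.exp(-E / (k * T))
        integral = np.cumsum(boltz * np.gradient(T))  # int exp(-E/kT') dT'
        return n0 * s * boltz * np.exp(-(s / beta) * integral)

    T = np.linspace(300.0, 600.0, 2000)   # heating from 300 K to 600 K
    glow = randall_wilkins(T, E=1.0, s=1e12, n0=1.0, beta=5.0)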
Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating
NASA Astrophysics Data System (ADS)
Chen, Liangji; Guo, Guangsong; Li, Huiying
2017-07-01
NURBS (Non-Uniform Rational B-Spline) curves are widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in our developing CNC (Computer Numerical Control) system. First, we use two NURBS curves to represent the tool-tip and tool-axis paths, respectively. Based on the feedrate and a Taylor series expansion, servo-control signals for the 5 axes are obtained for each interpolation cycle. Then, the generation procedure for NC (Numerical Control) code with the presented method is introduced, and the method for integrating the interpolator into our developing CNC system is given. The servo-controlling structure of the CNC system is also introduced. The example presented indicates that the proposed method can enhance the machining accuracy and that the spline interpolator is feasible for a 5-axis CNC system.
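First-order Taylor feedrate interpolation advances the curve parameter each servo cycle by u_{k+1} = u_k + V*Ts/|C'(u_k)|, so the tool tip covers roughly V*Ts of arc length per cycle. The sketch below uses a numerical derivative and a toy path; it is illustrative, not the paper's interpolator.

    import numpy as np

    def next_parameter(curve, u, feedrate, ts, du=1e-6):
        """First-order Taylor feedrate interpolation: advance the curve
        parameter so the tool tip travels about feedrate*ts of arc
        length per servo cycle; curve maps u to a point in R^3."""
        dC = (curve(u + du) - curve(u - du)) / (2.0 * du)  # numerical C'(u)
        return u + feedrate * ts / np.linalg.norm(dC)

    # usage with a toy path (circular arc standing in for a NURBS curve):
    arc = lambda u: 50.0 * np.array([np.cos(u), np.sin(u), 0.0])  # mm
    u1 = next_parameter(arc, 0.0, feedrate=100.0, ts=0.001)  # 100 mm/s, 1 ms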
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozova, Viktoriya; Renzo, Mathieu; Ott, Christian D.
We present the SuperNova Explosion Code (SNEC), an open-source Lagrangian code for the hydrodynamics and equilibrium-diffusion radiation transport in the expanding envelopes of supernovae. Given a model of a progenitor star, an explosion energy, and an amount and distribution of radioactive nickel, SNEC generates the bolometric light curve, as well as the light curves in different broad bands assuming blackbody emission. As a first application of SNEC, we consider the explosions of a grid of 15 M⊙ (at zero-age main sequence, ZAMS) stars whose hydrogen envelopes are stripped to different extents and at different points in their evolution. The resulting light curves exhibit plateaus with durations of ∼20–100 days if ≳1.5–2 M⊙ of hydrogen-rich material is left and no plateau if less hydrogen-rich material is left. If these shorter plateau lengths are not seen for SNe IIP in nature, it suggests that, at least for ZAMS masses ≲20 M⊙, hydrogen mass loss occurs as an all or nothing process. This perhaps points to the important role binary interactions play in generating the observed mass-stripped supernovae (i.e., Type Ib/c events). These light curves are also unlike what is typically seen for SNe IIL, arguing that simply varying the amount of mass loss cannot explain these events. The most stripped models begin to show double-peaked light curves similar to what is often seen for SNe IIb, confirming previous work that these supernovae can come from progenitors that have a small amount of hydrogen and a radius of ∼500 R⊙.
Efficient computation of photonic crystal waveguide modes with dispersive material.
Schmidt, Kersten; Kappeler, Roman
2010-03-29
The optimization of PhC waveguides is a key issue for successfully designing PhC devices. Since this design task is computationally expensive, efficient methods are demanded. The available codes for computing photonic bands are also applied to PhC waveguides. They are reliable but not very efficient, which is even more pronounced for dispersive material. We present a method based on higher order finite elements with curved cells, which allows to solve for the band structure taking directly into account the dispersiveness of the materials. This is accomplished by reformulating the wave equations as a linear eigenproblem in the complex wave-vectors k. For this method, we demonstrate the high efficiency for the computation of guided PhC waveguide modes by a convergence analysis.
ASTRORAY: General relativistic polarized radiative transfer code
NASA Astrophysics Data System (ADS)
Shcherbakov, Roman V.
2014-07-01
ASTRORAY employs a method of ray tracing and performs polarized radiative transfer of (cyclo-)synchrotron radiation. The radiative transfer is conducted in curved space-time near rotating black holes described by the Kerr-Schild metric. Three-dimensional general relativistic magnetohydrodynamic (3D GRMHD) simulations, in particular performed with variations of the HARM code, serve as an input to ASTRORAY. The code has been applied to reproduce the sub-mm synchrotron bump in the spectrum of Sgr A*, and to test the detectability of quasi-periodic oscillations in its light curve. ASTRORAY can be readily applied to model radio/sub-mm polarized spectra of jets and cores of other low-luminosity active galactic nuclei. For example, ASTRORAY is uniquely suitable to self-consistently model Faraday rotation measure and circular polarization fraction in jets.
Efficiency turns the table on neural encoding, decoding and noise.
Deneve, Sophie; Chalk, Matthew
2016-04-01
Sensory neurons are usually described with an encoding model, for example, a function that predicts their response from the sensory stimulus using a receptive field (RF) or a tuning curve. However, central to theories of sensory processing is the notion of 'efficient coding'. We argue here that efficient coding implies a completely different neural coding strategy. Instead of a fixed encoding model, neural populations would be described by a fixed decoding model (i.e. a model reconstructing the stimulus from the neural responses). Because the population solves a global optimization problem, individual neurons are variable, but not noisy, and have no truly invariant tuning curve or receptive field. We review recent experimental evidence and implications for neural noise correlations, robustness and adaptation. Copyright © 2016. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dias, Mafalda; Seery, David; Frazer, Jonathan
We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.
NASA-VOF2D: a computer program for incompressible flows with free surfaces
NASA Astrophysics Data System (ADS)
Torrey, M. D.; Cloutman, L. D.; Mjolsness, R. C.; Hirt, C. W.
1985-12-01
We present the NASA-VOF2D two-dimensional, transient, free-surface hydrodynamics program. It has a variety of options that provide capabilities for a wide range of applications, and it is designed to be relatively easy to use. It is based on the fractional volume-of-fluid method, and allows multiple free surfaces with surface tension and wall adhesion. It also has a partial cell treatment that allows curved boundaries and internal obstacles. This report includes a discussion of the numerical method, a code listing, and a selection of sample problems.
The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava
2016-08-01
This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.
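As an illustration of patch-based balancing, the sketch below orders patches along a space-filling curve and cuts the ordered list into pieces of roughly equal load. A Z-order (Morton) key is used purely for brevity; the curve type, patch indexing, and load model actually used in PSC are not specified by this snippet and the names are assumptions.

```python
def morton2d(ix, iy, bits=16):
    """Interleave the bits of (ix, iy) into a Z-order (Morton) key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
    return key

def balance(patches, loads, nranks):
    """Walk the patches in space-filling-curve order and advance to the
    next rank whenever its fair share of the total load is reached."""
    order = sorted(range(len(patches)), key=lambda i: morton2d(*patches[i]))
    total, acc, rank, assignment = sum(loads), 0.0, 0, {}
    for i in order:
        if acc >= (rank + 1) * total / nranks and rank < nranks - 1:
            rank += 1
        assignment[i] = rank
        acc += loads[i]
    return assignment

patches = [(x, y) for x in range(4) for y in range(4)]
loads = [1.0 + (x == y) for x, y in patches]     # heavier diagonal patches
print(balance(patches, loads, nranks=4))
```

Because the curve preserves spatial locality, patches assigned to the same rank tend to be neighbors, which keeps ghost-exchange traffic low.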
Unthank, Michael D.; Newson, Jeremy K.; Williamson, Tanja N.; Nelson, Hugh L.
2012-01-01
Flow- and load-duration curves were constructed from the model outputs of the U.S. Geological Survey's Water Availability Tool for Environmental Resources (WATER) application for streams in Kentucky. The WATER application was designed to access multiple geospatial datasets to generate more than 60 years of statistically based streamflow data for Kentucky. The WATER application enables a user to graphically select a site on a stream and generate an estimated hydrograph and flow-duration curve for the watershed upstream of that point. The flow-duration curves are constructed by calculating the exceedance probability of the modeled daily streamflows. User-defined water-quality criteria and (or) sampling results can be loaded into the WATER application to construct load-duration curves that are based on the modeled streamflow results. Estimates of flow and streamflow statistics were derived from TOPographically Based Hydrological MODEL (TOPMODEL) simulations in the WATER application. A modified TOPMODEL code, SDP-TOPMODEL (Sinkhole Drainage Process-TOPMODEL), was used to simulate daily mean discharges over the period of record for 5 karst and 5 non-karst watersheds in Kentucky in order to verify the calibrated model. A statistical evaluation of the model's verification simulations shows that calibration criteria, established by previous WATER application reports, were met, thus ensuring the model's ability to provide acceptably accurate estimates of discharge at gaged and ungaged sites throughout Kentucky. The flow-duration intervals are expressed as a percentage, with zero corresponding to the highest stream discharge in the streamflow record. Load-duration curves are constructed by applying the loading equation (Load = Flow × Water-quality criterion) at each flow interval.
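A minimal sketch of the construction described above: sort the modeled daily flows, attach exceedance probabilities (a Weibull plotting position is assumed here; the report does not state which convention it uses), and multiply by a water-quality criterion to obtain the load-duration curve.

```python
import numpy as np

def flow_duration_curve(daily_flows):
    """Sorted flows vs. exceedance probability (%); 0% corresponds to the
    highest discharge in the record."""
    q = np.sort(np.asarray(daily_flows, float))[::-1]            # descending
    exceed = 100.0 * np.arange(1, q.size + 1) / (q.size + 1)     # Weibull position
    return exceed, q

def load_duration_curve(daily_flows, criterion, unit_factor=1.0):
    """Load = Flow * water-quality criterion at each flow interval;
    unit_factor converts the product into the desired load units."""
    exceed, q = flow_duration_curve(daily_flows)
    return exceed, q * criterion * unit_factor

flows = np.random.default_rng(4).lognormal(mean=2.0, sigma=0.8, size=365)
exceed, load = load_duration_curve(flows, criterion=0.5)
```

Plotting sampled loads on the same axes then shows at a glance which flow regimes produce exceedances of the criterion.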
3D MHD Simulations of Laser Plasma Guiding in Curved Magnetic Field
NASA Astrophysics Data System (ADS)
Roupassov, S.; Rankin, R.; Tsui, Y.; Capjack, C.; Fedosejevs, R.
1999-11-01
The guiding and confinement of laser produced plasma in a curved magnetic field has been investigated numerically. These studies were motivated by experiments on pulsed laser deposition of diamond-like films [1] in which a 1 kG magnetic field in a curved solenoid geometry was utilized to steer a carbon plasma around a curved trajectory and thus separate it from unwanted macroparticles produced by the laser ablation. The purpose of the modeling was to characterize the plasma dynamics during propagation through the magnetic guide field and to investigate the effect of different magnetic field configurations. A 3D curvilinear ADI code developed on the basis of an existing Cartesian code [2] was employed to simulate the underlying resistive one-fluid MHD model. Issues such as large regions of low background density and nonreflective boundary conditions were addressed. Results of the simulations in a curved guide field will be presented and compared to experimental results. [1] Y.Y. Tsui, D. Vick and R. Fedosejevs, Appl. Phys. Lett. 70 (15), pp. 1953-57, 1997. [2] R. Rankin, and I. Voronkov, in "High Performance Computing Systems and Applications", pp. 59-69, Kluwer AP, 1998.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaeger, Ryan T.; Van Rossum, Daniel R., E-mail: wollaeger@wisc.edu, E-mail: daan@flash.uchicago.edu
Implicit Monte Carlo (IMC) and Discrete Diffusion Monte Carlo (DDMC) are methods used to stochastically solve the radiative transport and diffusion equations, respectively. These methods combine into a hybrid transport-diffusion method we refer to as IMC-DDMC. We explore a multigroup IMC-DDMC scheme that, in DDMC, combines frequency groups with sufficient optical thickness. We term this procedure "opacity regrouping." Opacity regrouping has previously been applied to IMC-DDMC calculations for problems in which the dependence of the opacity on frequency is monotonic. We generalize opacity regrouping to non-contiguous groups and implement this in SuperNu, a code designed to do radiation transport in high-velocity outflows with non-monotonic opacities. We find that regrouping of non-contiguous opacity groups generally improves the speed of IMC-DDMC radiation transport. We present an asymptotic analysis that informs the nature of the Doppler shift in DDMC groups and summarize the derivation of the Gentile-Fleck factor for modified IMC-DDMC. We test SuperNu using numerical experiments including a quasi-manufactured analytic solution, a simple 10 group problem, and the W7 problem for Type Ia supernovae. We find that opacity regrouping is necessary to make our IMC-DDMC implementation feasible for the W7 problem and possibly Type Ia supernova simulations in general. We compare the bolometric light curves and spectra produced by the SuperNu and PHOENIX radiation transport codes for the W7 problem. The overall shapes of the bolometric light curves are in good agreement, as are the spectra and their evolution with time. However, for the numerical specifications we considered, we find that the peak luminosity of the light curve calculated using SuperNu is ∼10% less than that calculated using PHOENIX.
Radiation Transport for Explosive Outflows: Opacity Regrouping
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.
2014-10-01
Implicit Monte Carlo (IMC) and Discrete Diffusion Monte Carlo (DDMC) are methods used to stochastically solve the radiative transport and diffusion equations, respectively. These methods combine into a hybrid transport-diffusion method we refer to as IMC-DDMC. We explore a multigroup IMC-DDMC scheme that, in DDMC, combines frequency groups with sufficient optical thickness. We term this procedure "opacity regrouping." Opacity regrouping has previously been applied to IMC-DDMC calculations for problems in which the dependence of the opacity on frequency is monotonic. We generalize opacity regrouping to non-contiguous groups and implement this in SuperNu, a code designed to do radiation transport in high-velocity outflows with non-monotonic opacities. We find that regrouping of non-contiguous opacity groups generally improves the speed of IMC-DDMC radiation transport. We present an asymptotic analysis that informs the nature of the Doppler shift in DDMC groups and summarize the derivation of the Gentile-Fleck factor for modified IMC-DDMC. We test SuperNu using numerical experiments including a quasi-manufactured analytic solution, a simple 10 group problem, and the W7 problem for Type Ia supernovae. We find that opacity regrouping is necessary to make our IMC-DDMC implementation feasible for the W7 problem and possibly Type Ia supernova simulations in general. We compare the bolometric light curves and spectra produced by the SuperNu and PHOENIX radiation transport codes for the W7 problem. The overall shapes of the bolometric light curves are in good agreement, as are the spectra and their evolution with time. However, for the numerical specifications we considered, we find that the peak luminosity of the light curve calculated using SuperNu is ~10% less than that calculated using PHOENIX.
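The essence of non-contiguous regrouping can be sketched as a masking-and-binning step: groups that are optically thick enough for DDMC are pooled by opacity magnitude rather than by adjacency in frequency. This is a loose illustration under assumed names and thresholds, not SuperNu's actual algorithm.

```python
import numpy as np

def regroup_noncontiguous(kappa, rho, dx, tau_min=10.0, nbins=4):
    """Label frequency groups for DDMC super-grouping. Groups with optical
    depth tau = kappa*rho*dx <= tau_min keep label -1 (handled by IMC);
    thick groups are binned by log-opacity, so similar but non-adjacent
    groups can share a super-group."""
    kappa = np.asarray(kappa, float)
    tau = kappa * rho * dx
    labels = np.full(kappa.size, -1)
    thick = tau > tau_min
    if thick.any():
        logk = np.log10(kappa[thick])
        edges = np.linspace(logk.min(), logk.max() + 1e-12, nbins + 1)
        labels[thick] = np.digitize(logk, edges) - 1
    return labels

kappa = np.array([0.1, 50.0, 0.2, 80.0, 60.0, 0.05])   # non-monotonic opacities
print(regroup_noncontiguous(kappa, rho=1.0, dx=1.0))    # thick groups pooled
```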
Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard
2013-01-01
Purpose: With the emergence of clinical outcomes databases as tools utilized routinely within institutions comes the need for software tools to support automated statistical analysis of these large data sets and intrainstitutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. Methods: A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code were evaluated using benchmark data sets. Results: The approach provides the data needed to evaluate combinations of statistical measurements for their ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operator characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. Conclusions: The work demonstrates the viability of the design approach and the software tool for analysis of large data sets. PMID:24320426
Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard
2013-11-01
With the emergence of clinical outcomes databases as tools utilized routinely within institutions comes the need for software tools to support automated statistical analysis of these large data sets and intrainstitutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code were evaluated using benchmark data sets. The approach provides the data needed to evaluate combinations of statistical measurements for their ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operator characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. The work demonstrates the viability of the design approach and the software tool for analysis of large data sets.
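A compact sketch of the threshold-and-filter pipeline: sweep candidate thresholds along an empirical ROC curve, keep the one maximizing Youden's J (the authors' exact selection rule is not stated, so this criterion is an assumption), then test the induced 2x2 contingency table with a Fisher exact test.

```python
import numpy as np
from scipy import stats

def roc_threshold(doses, responded):
    """Return the dose threshold maximizing J = sensitivity + specificity - 1."""
    doses, responded = np.asarray(doses, float), np.asarray(responded, bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(doses):
        pred = doses >= t
        tpr = (pred & responded).sum() / max(responded.sum(), 1)
        fpr = (pred & ~responded).sum() / max((~responded).sum(), 1)
        if tpr - fpr > best_j:
            best_t, best_j = t, tpr - fpr
    return best_t, best_j

def dose_response_pvalue(doses, responded, threshold):
    """Fisher exact test on the 2x2 table defined by the threshold."""
    hi = np.asarray(doses, float) >= threshold
    responded = np.asarray(responded, bool)
    table = [[int((hi & responded).sum()), int((hi & ~responded).sum())],
             [int((~hi & responded).sum()), int((~hi & ~responded).sum())]]
    return stats.fisher_exact(table)[1]

doses = np.array([5, 8, 12, 15, 20, 22, 25, 30], float)
resp = np.array([0, 0, 0, 1, 1, 0, 1, 1], bool)
t, j = roc_threshold(doses, resp)
print(t, j, dose_response_pvalue(doses, resp, t))
```

Welch t-tests (scipy.stats.ttest_ind with equal_var=False) and Kolmogorov-Smirnov tests (scipy.stats.ks_2samp) slot into the same filter in the obvious way.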
Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.
Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic
2017-03-01
Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained level coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as that of human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
Content Coding of Psychotherapy Transcripts Using Labeled Topic Models
Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic
2016-01-01
Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, non-standardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the Labeled Latent Dirichlet Allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic (ROC) curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of .79 and .70, respectively. For fine-grained level coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as that of human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes. PMID:26625437
Elbow stress indices using finite element analysis
NASA Astrophysics Data System (ADS)
Yu, Lixin
Section III of the ASME Boiler and Pressure Vessel Code (the Code) specifies rules for the design of nuclear power plant components. NB-3600 of the Code presents a simplified design method using stress indices: scalar coefficients used to modify straight-pipe stress equations so that they can be applied to elbows, tees, and other piping components. The stress indices of piping components may be determined either analytically or experimentally. This study concentrates on the determination of B2 stress indices for elbow components using finite element analysis (FEA). First, previous theoretical, numerical, and experimental investigations of elbow behavior were comprehensively reviewed, as was the philosophy behind the use of stress indices, and areas for further research were defined. Then, a comprehensive investigation was carried out to determine how the finite element method should be used to correctly simulate an elbow's structural behavior. This investigation included the choice of element type, convergence of mesh density, use of boundary restraints, and a reconciliation study between FEA and laboratory experiments or other theoretical formulations in both the elastic and elasto-plastic domains. Results from different computer programs were also compared, and reasonably good reconciliation was obtained. Appendix II of the Code describes the experimental method for determining B2 stress indices based on load-deflection curves. This procedure was used to compute the B2 stress indices for various loading modes on one particular elbow configuration. The B2 stress indices thus determined were found to be about half the value calculated from the Code equation. The effects of factors such as internal pressure and flange attachments on the B2 stress indices were then studied. Finally, the investigation was extended to other configurations of elbow components. A parametric study was conducted on different elbow sizes and schedules, and regression analysis was used to obtain a modified coefficient and exponent for the Code equation used to calculate the B2 index for elbows.
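The final regression step amounts to refitting the power-law form of the Code equation (NB-3600 gives B2 = 1.30/h^(2/3) for elbows, with h the flexibility characteristic) against FEA-derived indices. A sketch in log-log space; the FEA values below are fabricated for illustration.

```python
import numpy as np

def fit_b2_power_law(h, b2):
    """Least-squares fit of B2 = C / h**n in log-log space; returns (C, n)."""
    slope, intercept = np.polyfit(np.log(h), np.log(b2), 1)
    return np.exp(intercept), -slope

h = np.array([0.10, 0.18, 0.30, 0.50, 0.80])        # elbow characteristics
b2 = 0.65 / h**0.60 * (1 + 0.02 * np.random.default_rng(1).standard_normal(5))
C, n = fit_b2_power_law(h, b2)
print(f"B2 = {C:.2f} / h^{n:.2f}")                   # modified coefficient/exponent
```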
ARBAN-A new method for analysis of ergonomic effort.
Holzmann, P
1982-06-01
ARBAN is a method for the ergonomic analysis of work, including work situations that involve widely differing body postures and loads. The idea of the method is that all phases of the analysis process that call for specific ergonomics knowledge are taken over by filming equipment and a computer routine. All tasks that must be carried out by the investigator in the process of analysis are designed so that they can be performed using systematic common sense. The ARBAN analysis method contains four steps: 1. Recording the workplace situation on video or film. 2. Coding the posture and load situation at a number of closely spaced 'frozen' situations. 3. Computerisation. 4. Evaluation of the results. The computer calculates figures for the total ergonomic stress on the whole body as well as on different parts of the body separately. They are presented as 'ergonomic stress/time curves', in which heavy load situations appear as peaks of the curve. The work cycle may also be divided into different tasks, whose stress and duration patterns can be compared. The integral of each curve is calculated for single-figure comparison of different tasks as well as different work situations.
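The curve-based summary in step 4 is simple to reproduce: integrate the stress/time curve for a single-figure task comparison, and isolate the heavy-load peaks above a chosen level. A minimal sketch with made-up numbers:

```python
import numpy as np

def ergonomic_stress_summary(t, stress, peak_level):
    """Integral of an ergonomic stress/time curve (single-figure comparison)
    plus the contribution from heavy-load peaks above peak_level."""
    t, stress = np.asarray(t, float), np.asarray(stress, float)
    total = np.trapz(stress, t)
    heavy = np.trapz(np.where(stress > peak_level, stress, 0.0), t)
    return total, heavy

t = np.linspace(0, 60, 7)                   # one-minute work cycle, 10 s samples
stress = np.array([2, 3, 8, 9, 4, 2, 2], float)
print(ergonomic_stress_summary(t, stress, peak_level=6))
```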
NASA Technical Reports Server (NTRS)
Dorris, William J.; Hairr, John W.; Huang, Jui-Tien; Ingram, J. Edward; Shah, Bharat M.
1992-01-01
Non-linear analysis methods were adapted and incorporated in a finite element based DIAL code. These methods are necessary to evaluate the global response of a stiffened structure under combined in-plane and out-of-plane loading, and include the arc-length method and a target point analysis procedure. A new interface material model was implemented that can model elastic-plastic behavior of the bond adhesive; its direct application is in skin/stiffener interface failure assessment. Addition of the AML (angle minus longitudinal or load) failure procedure and Hashin's failure criteria provides added capability in the failure predictions. Interactive Stiffened Panel Analysis modules were developed as interactive pre- and post-processors. Each module provides the means of performing self-initiated finite element based analysis of primary structures such as a flat or curved stiffened panel, a corrugated flat sandwich panel, and a curved geodesic fuselage panel. This module brings finite element analysis into the design of composite structures without requiring the user to know much about the techniques and procedures needed to perform a finite element analysis from scratch. An interactive finite element code was developed to predict bolted joint strength considering material and geometrical non-linearity. The developed method conducts an ultimate strength failure analysis using a set of material degradation models.
GRay: A MASSIVELY PARALLEL GPU-BASED CODE FOR RAY TRACING IN RELATIVISTIC SPACETIMES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal
We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOP (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae of their dependencies on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as M87, with the Event Horizon Telescope.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillion, D.
This code enables one to display, take line-outs on, and perform various transformations on an image created by an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16-bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory; these formats are all explained by the application code. The image may be zoomed infinitely and the gray scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same wavelength increment (delta lambda); it is not necessary to do this, however. Line-outs may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic, with nice values and proper scientific notation. Typically, spectral lines are curved; by identifying points on these lines, their shapes can be fitted by polynomials.
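The wavelength-calibration and remapping steps described above follow a standard pattern: fit a polynomial through identified (pixel, wavelength) pairs, then resample the lineout onto a uniform wavelength grid. A sketch with hypothetical line identifications (the fit is assumed monotonic over the detector):

```python
import numpy as np

def calibrate_and_remap(pixels, line_pixels, line_lambdas, counts, deg=3):
    """Polynomial pixel-to-wavelength fit, then resampling onto a grid of
    constant delta-lambda."""
    coeffs = np.polyfit(line_pixels, line_lambdas, deg)
    lam = np.polyval(coeffs, pixels)                 # wavelength of every pixel
    lam_uniform = np.linspace(lam.min(), lam.max(), pixels.size)
    return lam_uniform, np.interp(lam_uniform, lam, counts)

pixels = np.arange(1000, dtype=float)
line_pixels = np.array([80.0, 310.0, 555.0, 790.0, 950.0])   # identified lines
line_lambdas = np.array([400.0, 450.0, 505.0, 560.0, 600.0]) # wavelengths, nm
counts = np.random.default_rng(2).poisson(100, pixels.size).astype(float)
lam_u, counts_u = calibrate_and_remap(pixels, line_pixels, line_lambdas, counts)
```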
Guo, Jin-Cheng; Wu, Yang; Chen, Yang; Pan, Feng; Wu, Zhi-Yong; Zhang, Jia-Sheng; Wu, Jian-Yi; Xu, Xiu-E; Zhao, Jian-Mei; Li, En-Min; Zhao, Yi; Xu, Li-Yan
2018-04-09
Esophageal squamous cell carcinoma (ESCC) is the predominant subtype of esophageal carcinoma in China. The aim of this study was to develop a staging model to predict outcomes of patients with ESCC. Using Cox regression analysis, principal component analysis (PCA), partitioning clustering, Kaplan-Meier analysis, receiver operating characteristic (ROC) curve analysis, and classification and regression tree (CART) analysis, we mined the Gene Expression Omnibus database to determine the expression profiles of genes in 179 patients with ESCC from the GSE63624 and GSE63622 datasets. Univariate Cox regression analysis of the GSE63624 dataset revealed that 2404 protein-coding genes (PCGs) and 635 long non-coding RNAs (lncRNAs) were associated with the survival of patients with ESCC. PCA categorized these PCGs and lncRNAs into three principal components (PCs), which were used to cluster the patients into three groups. ROC analysis demonstrated that the predictive ability of the PCG-lncRNA PCs when applied to new patients was better than that of tumor-node-metastasis staging (area under the ROC curve [AUC]: 0.69 vs. 0.65, P < 0.05). Accordingly, we constructed a molecular disaggregated model comprising one lncRNA and two PCGs, which we designated the LSB staging model, using CART analysis of the GSE63624 dataset. This LSB staging model classified the GSE63622 dataset of patients into three different groups, and its effectiveness was validated by analysis of another cohort of 105 patients. The LSB staging model has clinical significance for prognosis prediction in patients with ESCC and may serve as a three-gene staging microarray.
Fast evolving pair-instability supernovae
Kozyreva, Alexandra; Gilmer, Matthew; Hirschi, Raphael; ...
2016-10-06
With an increasing number of superluminous supernovae (SLSNe) discovered, the question of their origin remains open and causes heated debates in the supernova community. Currently, there are three proposed mechanisms for SLSNe: (1) pair-instability supernovae (PISNe), (2) magnetar-driven supernovae, and (3) models in which the supernova ejecta interacts with circumstellar material ejected before the explosion. Based on current observations of SLSNe, the PISN origin has been disfavoured for a number of reasons. Many PISN models provide overly broad light curves and too-reddened spectra, because of massive ejecta and a high amount of nickel. In the current study we re-examine PISN properties using progenitor models computed with the GENEC code. We calculate supernova explosions with FLASH and light curve evolution with the radiation hydrodynamics code STELLA. We find that high-mass models (200 M⊙ and 250 M⊙) at relatively high metallicity (Z = 0.001) do not retain hydrogen in the outer layers and produce relatively fast evolving PISNe of Type I, and might be suitable to explain some SLSNe. We also investigate uncertainties in light curve modelling due to codes, opacities, the nickel-bubble effect, and progenitor structure and composition.
Pyrolysis Model Development for a Multilayer Floor Covering
McKinnon, Mark B.; Stoliarov, Stanislav I.
2015-01-01
Comprehensive pyrolysis models that are integral to computational fire codes have improved significantly over the past decade as the demand for improved predictive capabilities has increased. High fidelity pyrolysis models may improve the design of engineered materials for better fire response, the design of the built environment, and may be used in forensic investigations of fire events. A major limitation to widespread use of comprehensive pyrolysis models is the large number of parameters required to fully define a material and the lack of effective methodologies for measurement of these parameters, especially for complex materials. The work presented here details a methodology used to characterize the pyrolysis of a low-pile carpet tile, an engineered composite material that is common in commercial and institutional occupancies. The studied material includes three distinct layers of varying composition and physical structure. The methodology utilized a comprehensive pyrolysis model (ThermaKin) to conduct inverse analyses on data collected through several experimental techniques. Each layer of the composite was individually parameterized to identify its contribution to the overall response of the composite. The set of properties measured to define the carpet composite were validated against mass loss rate curves collected at conditions outside the range of calibration conditions to demonstrate the predictive capabilities of the model. The mean error between the predicted curve and the mean experimental mass loss rate curve was calculated as approximately 20% on average for heat fluxes ranging from 30 to 70 kW·m⁻², which is within the mean experimental uncertainty. PMID:28793556
Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks.
Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime
2016-01-01
It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected, and they flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a "signature" of its associated neuronal network that depends on the characteristics of the neurons and the network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics to simulate the code spectrum from multielectrodes on a 2D mesh neural network. We show that it is possible to estimate characteristics of the neurons, such as the distribution of the number of neurons around each electrode and their refractory periods. Although this is an inverse problem whose solutions are not theoretically guaranteed, the estimated parameters are consistent with those of the neurons; that is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for communication within a neural network, which may in turn create a basis for natural intelligence.
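A toy version of the model neuron described above, with an instantaneous fluctuation on the input drive and an absolute refractory period; all parameter values are illustrative, not those of the study.

```python
import numpy as np

def lif_spike_train(T=1.0, dt=1e-4, tau=0.02, v_th=1.0, i_mean=60.0,
                    i_fluct=25.0, refractory=0.003, seed=0):
    """Leaky integrate-and-fire neuron with fluctuating drive; returns spike times."""
    rng = np.random.default_rng(seed)
    v, t_last, spikes = 0.0, -np.inf, []
    for i in range(int(T / dt)):
        t = i * dt
        if t - t_last < refractory:          # absolute refractory period
            v = 0.0
            continue
        drive = i_mean + i_fluct * rng.standard_normal()   # instantaneous fluctuation
        v += dt * (-v / tau + drive)                       # leaky integration
        if v >= v_th:
            spikes.append(t)
            v, t_last = 0.0, t
    return np.array(spikes)

print(len(lif_spike_train()), "spikes in one second")
```

Spike trains from many such units, recorded at simulated electrode sites, are the raw material from which a code spectrum like the one in the paper would be computed.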
Atmospheric Correction of Satellite Imagery Using Modtran 3.5 Code
NASA Technical Reports Server (NTRS)
Gonzales, Fabian O.; Velez-Reyes, Miguel
1997-01-01
When performing satellite remote sensing of the earth in the solar spectrum, atmospheric scattering and absorption effects provide the sensors with corrupted information about the target's radiance characteristics. We are faced with the problem of reconstructing the signal that was reflected from the target from the data sensed by the remote sensing instrument. This article presents a method for simulating radiance characteristic curves of satellite images using the MODTRAN 3.5 band model (BM) code to solve the radiative transfer equation (RTE), and proposes a method for the implementation of an adaptive system for automated atmospheric corrections. The simulation procedure is carried out as follows: (1) for each satellite digital image a radiance characteristic curve is obtained by performing a digital number (DN) to radiance conversion; (2) using MODTRAN 3.5, a simulation of the image's characteristic curves is generated; (3) the output of the code is processed to generate radiance characteristic curves for the simulated cases. The simulation algorithm was used to simulate Landsat Thematic Mapper (TM) images for two types of locations: the ocean surface and a forest surface. The simulation procedure was validated by computing the error between the empirical and simulated radiance curves. While results in the visible region of the spectrum were not very accurate, those for the infrared region of the spectrum were encouraging; this information can be used for correction of the atmospheric effects. For the simulation over ocean, the lowest error produced in this region was of the order of 10⁻⁵, up to 14 times smaller than errors in the visible region. For the forest case in the same spectral region, the lowest error produced was of the order of 10⁻⁴, up to 41 times smaller than errors in the visible region.
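Step (1), the DN-to-radiance conversion, is a per-band linear rescaling, and the validation step is a curve-to-curve error. A minimal sketch; the calibration constants below are made up, not actual Landsat TM values.

```python
import numpy as np

def dn_to_radiance(dn, lmin, lmax, dn_max=255.0):
    """Linear DN -> radiance for one band: L = Lmin + (Lmax - Lmin)*DN/DNmax."""
    return lmin + (lmax - lmin) * np.asarray(dn, float) / dn_max

def curve_error(empirical, simulated):
    """Mean squared error between empirical and simulated radiance curves."""
    e, s = np.asarray(empirical, float), np.asarray(simulated, float)
    return float(np.mean((e - s) ** 2))

dn = np.array([12, 40, 90, 160, 220])
empirical = dn_to_radiance(dn, lmin=-1.5, lmax=152.1)          # made-up constants
simulated = empirical * (1 + 0.03 * np.random.default_rng(3).standard_normal(5))
print(curve_error(empirical, simulated))
```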
NASA Astrophysics Data System (ADS)
Hopkin, D. J.; El-Rimawi, J.; Lennon, T.; Silberschmidt, V. V.
2011-07-01
The advent of the structural Eurocodes has allowed civil engineers to be more creative in the design of structures exposed to fire. Rather than rely upon regulatory guidance and prescriptive methods, engineers are now able to use such codes to design buildings on the basis of credible design fires rather than the accepted but unrealistic standard-fire time-temperature curves. Through this process, safer and more efficient structural designs are achievable. The key development enabling performance-based fire design is the emergence of validated numerical models capable of predicting the mechanical response of a whole building or sub-assemblies at elevated temperature. In this way, efficiency savings have been achieved in the design of steel, concrete, and composite structures. However, at present, due to a combination of limited fundamental research and restrictions in the UK National Annex to the timber Eurocode, the design of fire-exposed timber structures using numerical modelling techniques is not generally undertaken. The 'fire design' of timber structures is covered in Eurocode 5 part 1.2 (EN 1995-1-2). This code contains an advanced calculation annex (Annex B) intended to facilitate the implementation of numerical models in the design of fire-exposed timber structures. The properties contained in the code can, at present, only be applied to standard-fire exposure conditions, because the available thermal properties are valid only for standard fire exposure. In an attempt to overcome this barrier, the authors have proposed a 'modified conductivity model' (MCM) for determining the temperature of timber structural elements during the heating phase of non-standard fires, which is briefly outlined in this paper. In addition, in a further study, the MCM has been implemented in a coupled thermo-mechanical analysis of uniaxially loaded timber elements exposed to non-standard fires. The finite element package DIANA was adopted, with plane-strain elements assuming two-dimensional heat flow. The resulting predictions of failure time for given levels of load are discussed and compared with the simplified 'effective cross-section' method presented in EN 1995-1-2.
Robust, Adaptive Functional Regression in Functional Mixed Model Framework.
Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S
2011-09-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
Robust, Adaptive Functional Regression in Functional Mixed Model Framework
Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.
2012-01-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015
NASA Astrophysics Data System (ADS)
Best, Robert W.; Urbanus, Wim H.; Verhoeven, Toon (A.)G. A.; Jerby, Eli; Ganzel, Ronit
1993-07-01
A 1 MW cw 200 GHz tunable, efficient free electron maser is being designed at the FOM Institute for application in magnetic fusion research. In this paper several waveguide types are considered, including open waveguides. Computer simulations of the amplification and guiding of the mm wave in the undulator are reported. The simulation code is G3DH, written by E. Jerby, which solves a matrix dispersion relation. Gain versus frequency curves are shown. Efficiency calculations indicate that some tapering is needed to reach the desired 1 MW of mm-wave power. Simulations of a tapered undulator are presented by Caplan, and an overview of the FOM FEM is given by Urbanus et al. at this conference.
Selected Aspects of Cryogenic Tank Fatigue Calculations for Offshore Application
NASA Astrophysics Data System (ADS)
Skrzypacz, J.; Jaszak, P.
2018-02-01
The paper presents the fatigue life calculation of a cryogenic tank intended for carrier ship applications. An independent tank of type C was considered. The calculation took into account a wide load spectrum resulting from the ship accelerations. The stress at the most critical point of the tank was determined by means of the finite element method. The computational methods and codes used in the design of the LNG tank are presented. The number of fatigue cycles was determined by means of an S-N curve, and the cumulative linear damage theory was used to determine the life factor.
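The last two sentences compress to a few lines of arithmetic: an S-N curve gives the allowable cycles at each stress range, and the Palmgren-Miner sum accumulates damage over the load spectrum. A sketch with placeholder S-N constants and a fabricated ship-motion spectrum:

```python
import numpy as np

def cycles_to_failure(stress_range, A=1.0e12, m=3.0):
    """Basquin-form S-N curve N = A * S**(-m); A and m depend on the weld
    detail class and are placeholders here."""
    return A * np.asarray(stress_range, float) ** (-m)

def miner_damage(load_spectrum):
    """Palmgren-Miner linear cumulative damage D = sum(n_i / N_i)."""
    return sum(n / cycles_to_failure(s) for s, n in load_spectrum)

spectrum = [(80.0, 2.0e5), (120.0, 3.0e4), (160.0, 5.0e3)]  # (MPa, cycles)
D = miner_damage(spectrum)
print(f"damage = {D:.3f}, remaining life factor = {1.0 / D:.1f}")
```

Failure is predicted when D reaches 1, so keeping D well below 1 over the design life is the acceptance condition.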
The temperature dependence of the tensile properties of thermally treated Alloy 690 tubing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrod, D.L.; Gold, R.E.; Larsson, B.
1992-12-31
Tensile tests were run in air on full tube cross-sections of 22.23 mm OD by 1.27 mm wall thickness Alloy 690 steam generator production tubes from ten (10) heats of material at eight (8) temperatures between room temperature and 760°C. The tubing was manufactured to specification requirements consistent with the EPRI guidelines for Alloy 690 tubing. The room temperature stress-strain curves are described quite well by the Voce equation. Ductile fracture by dimpled rupture was observed at all test temperatures. The elevated temperature tensile properties are compared with design data given in the ASME Code.
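The Voce equation mentioned above saturates exponentially from the yield stress toward a saturation stress. A sketch of fitting it to a measured curve; the data points are fabricated stand-ins for real Alloy 690 measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def voce(strain, s_sat, s_yield, eps_c):
    """Voce hardening: stress rises from s_yield toward s_sat with
    characteristic strain eps_c."""
    return s_sat - (s_sat - s_yield) * np.exp(-strain / eps_c)

strain = np.array([0.00, 0.02, 0.05, 0.10, 0.20, 0.35])
stress = np.array([300.0, 390.0, 480.0, 560.0, 640.0, 690.0])   # MPa, made up
popt, _ = curve_fit(voce, strain, stress, p0=[700.0, 300.0, 0.1])
print("sigma_sat, sigma_y, eps_c =", popt)
```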
Geometric effects in the electronic transport of deformed nanotubes
NASA Astrophysics Data System (ADS)
Santos, Fernando; Fumeron, Sébastien; Berche, Bertrand; Moraes, Fernando
2016-04-01
Quasi-two-dimensional systems may exhibit curvature, which adds three-dimensional influence to their internal properties. As shown by da Costa (1981 Phys. Rev. A 23 1982-7), charged particles moving on a curved surface experience a curvature-dependent potential which greatly influences their dynamics. In this paper, we study electronic ballistic transport in deformed nanotubes. The one-electron Schrödinger equation with open boundary conditions is solved numerically with a flexible MAPLE code made available as supplementary data. We find that the curvature of the deformations indeed has strong effects on the electron dynamics, suggesting its use in the design of nanotube-based electronic devices.
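The curvature-dependent potential referred to above is da Costa's geometric potential. In the thin-layer limit it depends only on the mean curvature H and Gaussian curvature K of the surface:

```latex
% da Costa's geometric potential for a particle of mass m confined to a surface:
\begin{equation}
  V_{\mathrm{geo}} = -\frac{\hbar^{2}}{2m}\left(H^{2} - K\right),
  \qquad H = \tfrac{1}{2}\left(k_{1} + k_{2}\right), \quad K = k_{1} k_{2},
\end{equation}
% where k1, k2 are the principal curvatures. For a cylinder (k1 = 1/R, k2 = 0)
% this gives V = -\hbar^{2} / (8 m R^{2}), so a local deformation that changes
% the effective radius modulates the potential seen by ballistic electrons.
```

Since H² - K = (k₁ - k₂)²/4 ≥ 0, the potential is always attractive, which is what lets deformations act as scattering or trapping centers for the transmitted modes.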
Analysis and Design of ITER 1 MV Core Snubber
NASA Astrophysics Data System (ADS)
Wang, Haitian; Li, Ge
2012-11-01
The core snubber, as a passive protection device, can suppress the arc current and absorb the energy stored in stray capacitance during electrical breakdown in the accelerating electrodes of the ITER NBI. In order to design the core snubber of ITER, the control parameters of the arc peak current were first analyzed by the Fink-Baker-Owren (FBO) method, which was used for designing the DIII-D 100 kV snubber. The B-H curve can be derived from the measured voltage and current waveforms, and the hysteresis loss of the core snubber can be derived using the revised parallelogram method. The core snubber can be represented in simplified form as an equivalent parallel resistance and inductance, which the FBO method neglects. A simulation code including the parallel equivalent resistance and inductance has been set up. The simulations and experiments show dramatically larger arc shorting currents due to the parallel inductance effect. The case shows that a core snubber designed with the FBO method gives a more compact design.
Zhou, Zhongqiang; Kecman, Maja; Chen, Tingting; Liu, Tianyu; Jin, Ling; Chen, Shangji; Chen, Qianyun; He, Mingguang; Silver, Josh; Moore, Bruce; Congdon, Nathan
2014-01-01
To identify the specific characteristics making glasses designs, particularly those compatible with adjustable glasses, more or less appealing to Chinese children and their parents. Primary and secondary school children from urban and rural China with ≤ -1.00 diopters of bilateral myopia and their parents ranked four conventional-style frames identified by local optical shops as popular versus four child-specific frames compatible with adjustable spectacles. Scores based on the proportion of the maximum possible ranking were computed for each style. Selected children and their parents also participated in focus groups (FGs) discussing spectacle design preference. Recordings were transcribed and coded by two independent reviewers using NVivo software. Among 136 urban primary school children (age range 9-11 years), 290 rural secondary school children (11-17 years), and 16 parents, all adjustable-style frames (scores on a 0-100 scale, 25.7-62.4) were ranked behind all conventional frames (63.0-87.5). For eight FGs including 12 primary children, 26 secondary children, and 16 parents, average kappa values for NVivo coding were 0.81 (students) and 0.70 (parents). All groups agreed that the key changes to make adjustable designs more attractive were altering the round lenses to rectangular or oval shapes and adding curved earpieces for more stable wear. The thick frames of the adjustable designs were considered stylish, and children indicated they would wear them if the lens shape were modified. Current adjustable lens designs are unattractive to Chinese children and their parents, though this study identified specific modifications which would make them more appealing.
Comparisons between MCNP, EGS4 and experiment for clinical electron beams.
Jeraj, R; Keall, P J; Ostwald, P M
1999-03-01
Understanding the limitations of Monte Carlo codes is essential in order to avoid systematic errors in simulations, and to suggest further improvement of the codes. MCNP and EGS4, Monte Carlo codes commonly used in medical physics, were compared and evaluated against electron depth dose data and experimental backscatter results obtained using clinical radiotherapy beams. Different physical models and algorithms used in the codes give significantly different depth dose curves and electron backscattering factors. The default version of MCNP calculates electron depth dose curves which are too penetrating. The MCNP results agree better with experiment if the ITS-style energy-indexing algorithm is used. EGS4 underpredicts electron backscattering for high-Z materials. The results slightly improve if optimal PRESTA-I parameters are used. MCNP simulates backscattering well even for high-Z materials. To conclude the comparison, a timing study was performed. EGS4 is generally faster than MCNP and use of a large number of scoring voxels dramatically slows down the MCNP calculation. However, use of a large number of geometry voxels in MCNP only slightly affects the speed of the calculation.
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1987-01-01
New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real-gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracy and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The new curve fits are valid for temperatures up to 25 000 K and densities from 10⁻⁷ to 10³ amagats.
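The transition-function idea can be illustrated with a logistic switch that blends two polynomial branches smoothly; this simplified form is only in the spirit of the Grabau-type functions used in the report (the published fits use more elaborate forms and coefficients).

```python
import numpy as np

def blended_fit(x, poly_lo, poly_hi, x0, k):
    """Blend two polynomial branches with a logistic transition centered
    at x0 with sharpness k: w -> 0 selects poly_lo, w -> 1 selects poly_hi."""
    w = 1.0 / (1.0 + np.exp(-k * (np.asarray(x, float) - x0)))
    return (1 - w) * np.polyval(poly_lo, x) + w * np.polyval(poly_hi, x)

x = np.linspace(0.0, 2.0, 9)
print(blended_fit(x, poly_lo=[1.0, 0.0], poly_hi=[0.2, 1.6], x0=1.0, k=12.0))
```

The smooth switch is what preserves continuity across the piecewise boundaries of the fitted thermodynamic surface.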
The Rhythm Aftereffect: Support for Time Sensitive Neurons with Broad Overlapping Tuning Curves
ERIC Educational Resources Information Center
Becker, Mark W.; Rasmussen, Ian P.
2007-01-01
Ivry [Ivry, R. B. (1996). The representation of temporal information in perception and motor control. Current Opinion in Neurobiology, 6, 851-857.] proposed that explicit coding of brief time intervals is accomplished by neurons that are tuned to a preferred temporal interval and have broad overlapping tuning curves. This proposal is analogous to…
MICA: Multiple interval-based curve alignment
NASA Astrophysics Data System (ADS)
Mann, Martin; Kahle, Hans-Peter; Beck, Matthias; Bender, Bela Johannes; Spiecker, Heinrich; Backofen, Rolf
2018-01-01
MICA enables the automatic synchronization of discrete data curves. To this end, characteristic points of the curves' shapes are identified. These landmarks are used within a heuristic curve registration approach to align profile pairs by mapping similar characteristics onto each other. In combination with a progressive alignment scheme, this enables the computation of multiple curve alignments. Multiple curve alignments are needed to derive meaningful representative consensus data of measured time or data series. MICA has already been successfully applied to generate representative profiles of tree growth data based on intra-annual wood density profiles or cell formation data. The MICA package provides a command-line and graphical user interface. The R interface enables the direct embedding of multiple curve alignment computation into larger analysis pipelines. Source code, binaries and documentation are freely available at https://github.com/BackofenLab/MICA
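A minimal sketch of the landmark-mapping idea: piecewise-linearly warp one profile's x-axis so its landmarks land on the reference's landmark positions, then resample. The landmark positions here are fabricated; MICA's actual landmark detection and heuristic registration are more involved.

```python
import numpy as np

def align_to_reference(x, y, landmarks, ref_landmarks):
    """Warp profile (x, y) so that its landmarks map onto ref_landmarks
    (both increasing sequences), then resample on the reference axis."""
    x = np.asarray(x, float)
    x_warped = np.interp(x, landmarks, ref_landmarks)  # piecewise-linear warp
    x_ref = np.linspace(ref_landmarks[0], ref_landmarks[-1], x.size)
    return x_ref, np.interp(x_ref, x_warped, y)

x = np.linspace(0.0, 1.0, 101)
y = np.exp(-((x - 0.62) / 0.08) ** 2)                  # peak (landmark) at 0.62
ref_x, y_aligned = align_to_reference(x, y, landmarks=[0.0, 0.62, 1.0],
                                      ref_landmarks=[0.0, 0.50, 1.0])
```

Averaging many profiles after such warps yields the representative consensus curve the abstract refers to.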
Launch Vehicle Propulsion Design with Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.
2005-01-01
The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors, and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and results are plotted to show the impacts on engine mass and overall vehicle mass.
Probabilistic analysis of structures involving random stress-strain behavior
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at the point of ultimate load, and (5) engineering strain at the point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
Leckey, Cara A C; Wheeler, Kevin R; Hafiychuk, Vasyl N; Hafiychuk, Halyna; Timuçin, Doğan A
2018-03-01
Ultrasonic wave methods constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials, such as carbon fiber reinforced polymer (CFRP) laminates. Computational models of ultrasonic wave excitation, propagation, and scattering in CFRP composites can be extremely valuable in designing practicable NDE and SHM hardware, software, and methodologies that accomplish the desired accuracy, reliability, efficiency, and coverage. The development and application of ultrasonic simulation approaches for composite materials is an active area of research in the field of NDE. This paper presents comparisons of guided wave simulations for CFRP composites implemented using four different simulation codes: the commercial finite element modeling (FEM) packages ABAQUS, ANSYS, and COMSOL, and a custom code executing the Elastodynamic Finite Integration Technique (EFIT). Benchmark comparisons are made between the simulation tools and both experimental laser Doppler vibrometry data and theoretical dispersion curves. A pristine case and a delamination-type case (Teflon insert in the experimental specimen) are studied. A summary is given of the accuracy of simulation results and the respective computational performance of the four different simulation tools. Published by Elsevier B.V.
XPATCH: a high-frequency electromagnetic scattering prediction code using shooting and bouncing rays
NASA Astrophysics Data System (ADS)
Hazlett, Michael; Andersh, Dennis J.; Lee, Shung W.; Ling, Hao; Yu, C. L.
1995-06-01
This paper describes an electromagnetic computer prediction code for generating radar cross section (RCS), time domain signatures, and synthetic aperture radar (SAR) images of realistic 3-D vehicles. The vehicle, typically an airplane or a ground vehicle, is represented by a computer-aided design (CAD) file with triangular facets, curved surfaces, or solid geometries. The computer code, XPATCH, based on the shooting and bouncing ray technique, is used to calculate the polarimetric radar return from the vehicles represented by these different CAD files. XPATCH computes the first-bounce physical optics plus the physical theory of diffraction contributions and the multi-bounce ray contributions for complex vehicles with materials. It has been found that the multi-bounce contributions are crucial for many aspect angles of all classes of vehicles. Without the multi-bounce calculations, the radar return is typically 10 to 15 dB too low. Examples of predicted range profiles, SAR imagery, and radar cross sections (RCS) for several different geometries are compared with measured data to demonstrate the quality of the predictions. The comparisons are from the UHF through the Ka frequency ranges. Recent enhancements to XPATCH for MMW applications and target Doppler predictions are also presented.
Lin, Kun-Jhih; Huang, Chang-Hung; Liu, Yu-Liang; Chen, Wen-Chuan; Chang, Tsung-Wei; Yang, Chan-Tsung; Lai, Yu-Shu; Cheng, Cheng-Kung
2011-10-01
The post-cam design of contemporary posterior stabilized knee prostheses can be categorized into flat-on-flat or curve-on-curve contact surfaces. The curve-on-curve design has demonstrated its advantage of reducing stress concentration when the knee sustains an anteroposterior force with tibial rotation. How the post-cam design affects knee kinematics, particularly how the two design features differ, is still unknown. Analyzing the knee kinematics of posterior stabilized knee prostheses with various post-cam designs should provide guidance for the modification of prosthesis design. A dynamic knee model was utilized to investigate the tibiofemoral motion of various post-cam designs during high knee flexion. Two posterior stabilized knee models were constructed with flat-on-flat and curve-on-curve post-cam contact surfaces. Dynamic data on axial tibial rotation and femoral translation were measured from full extension to 135°. Internal tibial rotation increased with knee flexion in both designs. Before post-cam engagement, the magnitude of internal tibial rotation was similar in the two designs. However, the tibial rotation angle decreased once the femoral cam engaged the tibial post. The rate of reduction of tibial rotation was relatively lower in the curve-on-curve design. From post-cam engagement to extreme flexion, the curve-on-curve design had greater internal tibial rotation. Motion constraint was generated by medial impingement of the femoral cam on the tibial post. This constraint interferes with the axial motion of the femur relative to the tibia, resulting in a decrease of internal tibial rotation. Elimination of rotational constraint should be necessary for achieving better tibial rotation during high knee flexion. Copyright © 2011 Elsevier Ltd. All rights reserved.
Wang, R; Li, X A
2001-02-01
The dose parameters for the beta-particle-emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in the parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify and understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. The data calculated and compared include the depth-dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, reaching 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in the three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4. The two calculations agree within 5% for radial distances <6 mm.
A Fatigue Life Prediction Model of Welded Joints under Combined Cyclic Loading
NASA Astrophysics Data System (ADS)
Goes, Keurrie C.; Camarao, Arnaldo F.; Pereira, Marcos Venicius S.; Ferreira Batalha, Gilmar
2011-01-01
A practical and robust methodology is developed to evaluate the fatigue life of seam-welded joints subjected to combined cyclic loading. The fatigue analysis was conducted in a virtual environment. The FE stress results from each loading were imported into the fatigue code FE-Fatigue and combined to perform the fatigue life prediction using the S-N (stress versus life) method. The measurement or modelling of the residual stresses resulting from the welding process is not part of this work. However, the thermal and metallurgical effects, such as distortions and residual stresses, were considered indirectly through fatigue curve corrections in the samples investigated. A tube-plate specimen was submitted to combined cyclic loading (bending and torsion) with constant amplitude. The virtual durability analysis result was calibrated based on these laboratory tests and design codes such as BS7608 and Eurocode 3. The feasibility and application of the proposed numerical-experimental methodology and its contributions to technical development are discussed. Major challenges associated with this modelling and improvement proposals are finally presented.
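For readers unfamiliar with the S-N step, the short sketch below applies a Basquin-type stress-life curve and Miner's linear damage rule to two loading channels. The curve constants and stress ranges are invented for illustration and are not the BS7608 or Eurocode 3 class values.

```python
# Illustrative S-N (stress-life) sketch with Miner's rule for combining
# two loading channels. Constants A and m are hypothetical.
def cycles_to_failure(stress_range, A=2e12, m=3.0):
    # Basquin / design-code form: N = A * S^(-m)
    return A * stress_range**(-m)

# Combined loading blocks: (stress range in MPa, applied cycles)
blocks = [(120.0, 2e5),   # bending channel
          (80.0, 5e5)]    # torsion channel, as an equivalent stress range

damage = sum(n / cycles_to_failure(s) for s, n in blocks)  # Miner's rule
print(f"accumulated damage D = {damage:.3f}  (failure predicted at D >= 1)")
```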
LDPC Codes--Structural Analysis and Decoding Techniques
ERIC Educational Resources Information Center
Zhang, Xiaojie
2012-01-01
Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…
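As background on message-passing decoding, the following toy sketch runs hard-decision bit-flipping on a small parity-check matrix. Practical LDPC codes use far larger, sparser matrices and soft belief-propagation decoding; the matrix H here is illustrative only.

```python
# Minimal hard-decision bit-flipping decoder on a toy parity-check matrix,
# to make the message-passing idea concrete.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

def bit_flip_decode(r, H, max_iters=20):
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r                      # all parity checks satisfied
        # count, for each bit, how many failing checks it participates in
        fails = H.T @ syndrome
        r[fails == fails.max()] ^= 1      # flip the most-suspect bits
    return r

received = np.array([1, 0, 1, 1, 0, 1])   # hypothetical hard-decision word
print("decoded:", bit_flip_decode(received, H))
```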
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.
1986-01-01
The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency must be calculated to achieve convergence to the optimal solution. Adapting SCAP1D so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done, only the efficiency (maximum-power voltage and current) is calculated, and the solution from previous calculations is used to initiate the next solution.
Physical Insights, Steady Aerodynamic Effects, and a Design Tool for Low-Pressure Turbine Flutter
NASA Astrophysics Data System (ADS)
Waite, Joshua Joseph
Successful, efficient, and safe turbine design requires a thorough understanding of the underlying physical phenomena. This research investigates the physical understanding of, and the parameters highly correlated to, flutter, an aeroelastic instability prevalent among low-pressure turbine (LPT) blades in both aircraft engines and power turbines. The modern way of determining whether a certain cascade of LPT blades is susceptible to flutter is through time-expensive computational fluid dynamics (CFD) codes. These codes converge to a solution satisfying the Eulerian conservation equations subject to the boundary conditions of a nodal domain consisting of fluid and solid wall particles. Most detailed CFD codes are accompanied by cryptic turbulence models, meticulous grid constructions, and elegant boundary condition enforcements, all with one goal in mind: determine the sign (and therefore stability) of the aerodynamic damping. The main question asked by the aeroelastician is, "is it positive or negative?" This type of thought process eventually gives rise to a black-box effect, leaving physical understanding behind. Therefore, the first part of this research aims to understand and reveal the physics behind LPT flutter, in addition to several related topics including acoustic resonance effects. Part of this initial numerical investigation is completed using an influence coefficient approach to study the variation of the work-per-cycle contributions of neighboring cascade blades to a reference airfoil. The second part of this research introduces new discoveries regarding the relationship between steady aerodynamic loading and negative aerodynamic damping. Using validated CFD codes as computational wind tunnels, a multitude of low-pressure turbine flutter parameters, such as reduced frequency, mode shape, and interblade phase angle, are scrutinized across various airfoil geometries and steady operating conditions to reach new design guidelines regarding the influence of steady aerodynamic loading on LPT flutter. Many pressing topics influencing LPT flutter, including shocks, their nonlinearity, and three-dimensionality, are also addressed along the way. The work is concluded by introducing a useful preliminary design tool that can estimate, within seconds, the entire aerodynamic-damping-versus-nodal-diameter curve for a given three-dimensional cascade.
Cho, H-H; Cheon, J-E; Kim, S-K; Choi, Y H; Kim, I-O; Kim, W S; Lee, S-M; You, S K; Shin, S-M
2016-05-01
For postoperative follow-up in pediatric patients with Moyamoya disease, it is essential to evaluate the neovascularization status. Our aim was to quantitatively assess the neovascularization status after bypass surgery in pediatric Moyamoya disease by using color-coded digital subtraction angiography (DSA). Time-attenuation intensity curves were generated at ROIs corresponding to surgical flap sites from color-coded DSA images of the common carotid artery, internal carotid artery, and external carotid artery angiograms obtained pre- and postoperatively in 32 children with Moyamoya disease. Time-to-peak and area under the curve values were obtained. Postoperative changes in adjusted time-to-peak (ΔTTP) and ratios of adjusted area-under-the-curve changes (ΔAUC ratio) of common carotid artery, ICA, and external carotid artery angiograms were compared across clinical and angiographic outcome groups. To analyze diagnostic performance, we categorized clinical outcomes into favorable and unfavorable groups. The ΔTTP at the common carotid artery increased among clinical and angiographic outcomes, in that order, with significant differences (P = .003 and .005, respectively). The ΔAUC ratio at the common carotid artery and external carotid artery also increased, in that order, among clinical and angiographic outcomes with a significant difference (all, P = .000). The ΔAUC ratio of the ICA showed no significant difference among clinical and angiographic outcomes (P = .418 and .424, respectively). A ΔTTP for the common carotid artery of >1.27 seconds and ΔAUC ratios of >33.5% for the common carotid artery and >504% for the external carotid artery were revealed as optimal cutoff values between the favorable and unfavorable groups. Postoperative changes in quantitative values obtained with color-coded DSA software showed a significant correlation with outcome scores and can be used as objective parameters for predicting the outcome in pediatric Moyamoya disease, with an additional cutoff value calculated through the receiver operating characteristic curve. © 2016 by American Journal of Neuroradiology.
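The two quantitative parameters used above are straightforward to compute from a time-intensity curve; the sketch below does so for a synthetic curve. The gamma-variate-like shape is an assumption for illustration, not the study's software.

```python
# Extract time-to-peak (TTP) and area under the curve (AUC) from a
# synthetic time-attenuation intensity curve.
import numpy as np

t = np.linspace(0.0, 10.0, 101)                 # seconds
intensity = t**2 * np.exp(-t)                   # synthetic curve shape

ttp = t[np.argmax(intensity)]                   # time-to-peak
# trapezoidal AUC, written out to avoid NumPy version differences
auc = float(np.sum((intensity[1:] + intensity[:-1]) / 2 * np.diff(t)))

print(f"TTP = {ttp:.2f} s, AUC = {auc:.2f} (arbitrary units)")
```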
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve fitting is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region. Different approximating functions are then constructed for the two kinds of regions respectively. For the major-interference region, some typical interferential curves are selected to predict the other curves. These typical curves are then processed by a curve-fitting method. For the minor-interference region, the data of each interferential curve are independently approximated. Finally, the approximating errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion, especially at high bit rates for lossy compression.
Computational simulation of matrix micro-slip bands in SiC/Ti-15 composite
NASA Technical Reports Server (NTRS)
Mital, S. K.; Lee, H.-J.; Murthy, P. L. N.; Chamis, C. C.
1992-01-01
Computational simulation procedures are used to identify the key deformation mechanisms for (0)_8 and (90)_8 SiC/Ti-15 metal matrix composites. The computational simulation procedures employed consist of a three-dimensional finite-element analysis and a micromechanics-based computer code, METCAN. The interphase properties used in the analysis have been calibrated using the METCAN computer code with the (90)_8 experimental stress-strain curve. Results of the simulation show that although shear stresses are sufficiently high to cause the formation of some slip bands in the matrix, concentrated mostly near the fibers, the nonlinearity in the composite stress-strain curve in the case of the (90)_8 composite is dominated by interfacial damage, such as microcracks and debonding, rather than microplasticity. The stress-strain curve for the (0)_8 composite is largely controlled by the fibers and shows only slight nonlinearity at higher strain levels that could be the result of matrix microplasticity.
A two step method to treat variable winds in fallout smearing codes. Master's thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopkins, A.T.
1982-03-01
A method was developed to treat non-constant winds in fallout smearing codes. The method consists of two steps: (1) location of the curved hotline and (2) determination of the off-hotline activity. To locate the curved hotline, the method begins with an initial cloud of 20 discretely sized pancake clouds, located at altitudes determined by weapon yield. Next, the particles are tracked through a 300-layer atmosphere, translating with different winds in each layer. The connection of the 20 particles' impact points is the fallout hotline. The hotline location was found to be independent of the assumed particle size distribution in the stabilized cloud. The off-hotline activity distribution is represented as a two-dimensional Gaussian function, centered on the curved hotline. Hotline locator model results were compared to numerical calculations of a hypothetical 100 kt burst and to the actual hotline produced by the Castle-Bravo 15 Mt nuclear test.
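A minimal sketch of the hotline-location step, under invented winds and fall speeds, might look like the following: each particle falls through stacked layers and drifts with each layer's wind, and the impact points trace the curved hotline.

```python
# Sketch of step (1): particles of different sizes fall through stacked
# wind layers, drifting with each layer's wind until they reach ground.
# Layer winds, fall speeds, and start altitude are invented.
layers = [(0, 3000, (5.0, 0.0)),        # (bottom m, top m, wind vector m/s)
          (3000, 7000, (10.0, 5.0)),
          (7000, 12000, (20.0, -3.0))]

def impact_point(start_alt, fall_speed):
    """Integrate horizontal drift layer by layer during the fall."""
    x = y = 0.0
    alt = start_alt
    for bottom, top, (u, v) in reversed(layers):
        if alt <= bottom:
            continue
        dz = min(alt, top) - bottom
        dt = dz / fall_speed            # time spent falling through this layer
        x += u * dt
        y += v * dt
        alt = bottom
    return x, y

# 20 discretely sized particles -> 20 fall speeds -> 20 impact points
hotline = [impact_point(10000.0, w) for w in [2 + 0.5 * k for k in range(20)]]
for pt in hotline[:3]:
    print(f"impact at ({pt[0] / 1000:.1f} km, {pt[1] / 1000:.1f} km)")
```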
Tan, Michael; Wilson, Ian; Braganza, Vanessa; Ignatiadis, Sophia; Boston, Ray; Sundararajan, Vijaya; Cook, Mark J; D'Souza, Wendyl J
2015-10-01
We report the diagnostic validity of a selection algorithm for identifying epilepsy cases. This was a retrospective validation study of International Classification of Diseases, 10th Revision, Australian Modification (ICD-10AM)-coded hospital records and pharmaceutical data sampled from 300 consecutive potential epilepsy-coded cases and 300 randomly chosen cases without epilepsy from 3/7/2012 to 10/7/2013. Two epilepsy specialists independently validated the diagnosis of epilepsy. A multivariable logistic regression model was fitted to identify the optimum coding algorithm for epilepsy and was internally validated. One hundred fifty-eight of 300 (52.6%) epilepsy-coded records and 0 of 300 nonepilepsy records were confirmed to have epilepsy. The kappa for interrater agreement was 0.89 (95% CI = 0.81-0.97). The model utilizing epilepsy (G40), status epilepticus (G41), and ≥1 antiepileptic drug (AED) conferred the highest positive predictive value of 81.4% (95% CI = 73.1-87.9) and a specificity of 99.9% (95% CI = 99.9-100.0). The area under the receiver operating characteristic curve was 0.90 (95% CI = 0.88-0.93). When combined with pharmaceutical data, the precision of case identification for an epilepsy data linkage design was considerably improved, offering considerable potential for efficient and reasonably accurate case ascertainment in epidemiological studies. Copyright © 2015 Elsevier Inc. All rights reserved.
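A hedged sketch of the modeling step follows: a logistic regression over binary indicators for G40, G41, and AED dispensing, evaluated by ROC AUC. The data here are simulated with invented coefficients, not the study's linked records.

```python
# Simulated case-identification sketch: logistic model over binary code
# indicators. Coefficients and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 600
X = rng.integers(0, 2, size=(n, 3))            # columns: G40, G41, >=1 AED
logit = -2.5 + 2.0 * X[:, 0] + 1.0 * X[:, 1] + 1.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated validated diagnosis

model = LogisticRegression().fit(X, y)
print("AUC:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
```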
NASA Technical Reports Server (NTRS)
Meyer, Harold D.
1999-01-01
This report provides a study of rotor and stator scattering using the SOURCE3D Rotor Wake/Stator Interaction Code. SOURCE3D is a quasi-three-dimensional computer program that uses three-dimensional acoustics and two-dimensional cascade load response theory to calculate rotor and stator modal reflection and transmission (scattering) coefficients. SOURCE3D is at the core of the TFaNS (Theoretical Fan Noise Design/Prediction System), developed for NASA, which provides complete fully coupled (inlet, rotor, stator, exit) noise solutions for turbofan engines. The reason for studying scattering is that we must first understand the behavior of the individual scattering coefficients provided by SOURCE3D, before eventually understanding the more complicated predictions from TFaNS. To study scattering, we have derived a large number of scattering curves for vane and blade rows. The curves are plots of output wave power divided by input wave power (in dB units) versus vane/blade ratio. Some of these plots are shown in this report. All of the plots are provided in a separate volume. To assist in understanding the plots, formulas have been derived for special vane/blade ratios for which wavefronts are either parallel or normal to rotor or stator chords. From the plots, we have found that, for the most part, there was strong transmission and weak reflection over most of the vane/blade ratio range for the stator. For the rotor, there was little transmission loss.
Software Development for Asteroid and Variable Star Research
NASA Astrophysics Data System (ADS)
Sweckard, Teaghen; Clason, Timothy; Kenney, Jessica; Wuerker, Wolfgang; Palser, Sage; Giles, Tucker; Linder, Tyler; Sanchez, Richard
2018-01-01
The process of collecting and analyzing light curves from variable stars and asteroids is almost identical. In 2016 a collaboration was created to develop a simple, fundamental way to study both asteroids and variable stars using methods that can be repeated by middle school and high school students. Using robotic telescopes at Cerro Tololo (Chile), Yerkes Observatory (US), and Stone Edge Observatory (US), data were collected on RV Del and three asteroids. It was discovered that the only available software program which could be easily installed on lab computers was MPO Canopus. However, after six months it was determined that MPO Canopus was not an acceptable option because of its steep learning curve and its lack of documentation and technical support. Therefore, the project decided that the best option was to design our own Python-based software. Using Python and its libraries, we developed code that can be used for photometry and easily adapted to the user's needs. We accomplished this by meeting with our mentor astronomer, Tyler Linder, and initially wrote two different programs, one for asteroids and one for variable stars. In the end, though, we chose to combine the codes so that the program would be capable of performing photometry for both moving and static objects. The software performs differential photometry by comparing the magnitudes of known reference stars to that of the object being studied. For asteroids, the image timestamps are used to obtain an ephemeris of the asteroid from JPL Horizons automatically.
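The differential-photometry step the students describe reduces to a zero-point shift derived from reference stars; here is a minimal sketch with made-up counts and catalog magnitudes.

```python
# Differential photometry sketch: shift instrumental magnitudes onto the
# catalog scale using the mean offset of known reference stars.
import math

def instrumental_mag(flux_counts):
    return -2.5 * math.log10(flux_counts)

# (measured counts, catalog magnitude) for the comparison stars -- made up
refs = [(52000.0, 12.10), (31000.0, 12.66), (80500.0, 11.62)]
zero_point = sum(cat - instrumental_mag(counts) for counts, cat in refs) / len(refs)

target_counts = 41000.0                       # asteroid or variable star
print(f"target magnitude = {instrumental_mag(target_counts) + zero_point:.3f}")
```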
Size Effects in Impact Damage of Composite Sandwich Panels
NASA Technical Reports Server (NTRS)
Dobyns, Alan; Jackson, Wade
2003-01-01
Panel size has a large effect on the impact response and resultant damage level of honeycomb sandwich panels. It has been observed during impact testing that panels of the same design but different sizes will show large differences in damage when impacted with the same impact energy. To study this effect, a test program was conducted with instrumented impact testing of three different sizes of sandwich panels to obtain data on panel response and residual damage. In concert with the test program, a closed-form analysis method was developed that incorporates the effects of damage on the impact response. This analysis method predicts both the impact response and the residual damage of a simply supported sandwich panel impacted at any position on the panel. The damage is incorporated by the use of an experimental load-indentation curve obtained for the facesheet/honeycomb and indentor combination under study. This curve inherently includes the damage response and can be obtained quasi-statically from a rigidly backed specimen or a specimen with any support conditions. Good correlation has been obtained between the test data and the analysis results for the maximum force and residual indentation. The predictions can be improved by using a dynamic indentation curve. Analyses have also been performed using the MSC/DYTRAN finite element code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Hongxiang; Sun, Ning; Wigmosta, Mark
Precipitation-based intensity-duration-frequency (PREC-IDF) curves are a standard tool used to derive design floods for hydraulic infrastructure worldwide. In snow-dominated regions, where a large percentage of flood events are caused by snowmelt and rain-on-snow events, the PREC-IDF design approach can lead to substantial underestimation or overestimation of design floods and the associated infrastructure. In this study, next-generation IDF (NG-IDF) curves, which characterize the actual water reaching the land surface, are introduced into the design process to improve hydrologic design. The authors compared peak design flood estimates from the Natural Resources Conservation Service TR-55 hydrologic model driven by NG-IDF and PREC-IDF curves at 399 Snowpack Telemetry (SNOTEL) stations across the western United States, all of which had at least 30 years of high-quality records. They found that about 72% of the stations in the western United States showed the potential for underdesign, for which the PREC-IDF curves underestimated peak design floods by as much as 324%. These results demonstrated the need to update the use of PREC-IDF curves to the use of NG-IDF curves for hydrologic design in snow-dominated regions.
Intra-binary Shock Heating of Black Widow Companions
NASA Astrophysics Data System (ADS)
Romani, Roger W.; Sanchez, Nicolas
2016-09-01
The low-mass companions of evaporating binary pulsars (black widows and similar) are strongly heated on the side facing the pulsar. However, in high-quality photometric and spectroscopic data, the heating pattern does not match that expected for direct pulsar illumination. Here we explore a model where the pulsar power is intercepted by an intra-binary shock (IBS) before heating the low-mass companion. We develop a simple analytic model and implement it in the popular "ICARUS" light curve code. The model is parameterized by the wind momentum ratio β and the companion wind speed f_v v_orb, and assumes that the reprocessed pulsar wind emits prompt particles or radiation to heat the companion surface. We illustrate an interesting range of light curve asymmetries controlled by these parameters. The code also computes the IBS synchrotron emission pattern, and thus can model black widow X-ray light curves. As a test, we apply the results to the high-quality asymmetric optical light curves of PSR J2215+5135; the resulting fit gives a substantial improvement upon direct heating models and produces an X-ray light curve consistent with that seen. The IBS model parameters imply that at the present loss rate, the companion evaporation has a characteristic timescale of τ_evap ≈ 150 Myr. Still, the model is not fully satisfactory, indicating that there are additional unmodeled physical effects.
NASA Technical Reports Server (NTRS)
Rivers, Melissa; Hunter, Craig; Vatsa, Veer
2017-01-01
Two Navier-Stokes codes were used to compute flow over the High-Lift Common Research Model (HL-CRM) in preparation for a wind tunnel test to be performed at the NASA Langley Research Center 14-by-22-Foot Subsonic Tunnel in fiscal year 2018. Both flight and wind tunnel conditions were simulated by the two codes at set Mach numbers and Reynolds numbers over a full angle-of-attack range for three configurations: cruise, landing, and takeoff. Force curves, drag polars, and surface pressure contour comparisons are shown for the two codes. The lift and drag curves compare well for the cruise configuration up to 10 deg angle of attack but not as well for the other two configurations. The drag polars compare reasonably well for all three configurations. The surface pressure contours compare well for some of the conditions modeled but not as well for others.
An analytical study of reduced-gravity flow dynamics
NASA Technical Reports Server (NTRS)
Bradshaw, R. D.; Kramer, J. L.; Zich, J. L.
1976-01-01
The addition of surface tension forces to a marker-and-cell code and the performance of four incompressible fluid simulations in reduced gravity were studied. The marker-and-cell code has a variable-grid capability with arbitrary curved boundaries and time-dependent acceleration fields. The surface tension logic includes a spline fit of surface marker particles as well as contact-angle logic for straight and curved wall boundaries. Three types of flow motion were simulated with the improved code: impulsive settling in a model Centaur LH2 tank, continuous settling in model and full-scale Centaur LO2 tanks, and mixing in a Centaur LH2 tank. The impulsive settling case confirmed a drop tower analysis which indicated more orderly fluid collection flow patterns with this method, providing a potential savings in settling propellants. In the LO2 tank, fluid collection and flow simulation into the thrust barrel were achieved. The mixing simulation produced good results, indicating both the development of the flow field and the fluid interface behavior.
Transportable Applications Environment Plus, Version 5.1
NASA Technical Reports Server (NTRS)
1994-01-01
Transportable Applications Environment Plus (TAE+) is a computer program providing an integrated, portable programming environment for developing and running application programs based on interactive windows, text, and graphical objects. It enables both programmers and nonprogrammers to construct their own custom application interfaces easily and to move interfaces and application programs to different computers. It has been used to define a corporate user interface, with noticeable improvements in the application developer's and end user's learning curves. The main components are: WorkBench, a What You See Is What You Get (WYSIWYG) software tool for design and layout of the user interface; and the WPT (Window Programming Tools) package, a set of callable subroutines controlling the user interface of an application program. WorkBench and the WPTs are written in C++, and the remaining code is written in C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Hongxiang; Sun, Ning; Wigmosta, Mark
There is a renewed focus on the design of infrastructure resilient to extreme hydrometeorological events. While precipitation-based intensity-duration-frequency (IDF) curves are commonly used as part of infrastructure design, a large percentage of peak runoff events in snow-dominated regions are caused by snowmelt, particularly during rain-on-snow (ROS) events. In these regions, precipitation-based IDF curves may lead to substantial over- or under-estimation of design basis events and subsequent over- or under-design of infrastructure. To overcome this deficiency, we proposed next-generation IDF (NG-IDF) curves, which characterize the actual water reaching the land surface. We compared NG-IDF curves to standard precipitation-based IDF curves for estimates of extreme events at 376 Snowpack Telemetry (SNOTEL) stations across the western United States that each had at least 30 years of high-quality records. We found standard precipitation-based IDF curves at 45% of the stations were subject to under-design, many with significant under-estimation of 100-year extreme events, for which the precipitation-based IDF curves can underestimate water potentially available for runoff by as much as 125% due to snowmelt and ROS events. The regions with the greatest potential for under-design were in the Pacific Northwest, the Sierra Nevada Mountains, and the Middle and Southern Rockies. We also found the potential for over-design at 20% of the stations, primarily in the Middle Rockies and Arizona mountains. These results demonstrate the need to consider snow processes in the development of IDF curves, and they suggest use of the more robust NG-IDF curves for hydrologic design in snow-dominated environments.
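As an illustration of how a single IDF point is estimated, the sketch below fits a Gumbel distribution (by the method of moments) to a synthetic annual-maxima series of daily water input and reads off design quantiles. The series is synthetic; the NG-IDF work would use station records of rain plus snowmelt and possibly different frequency estimators.

```python
# Frequency-curve sketch: Gumbel fit to synthetic annual maxima of
# "water available for runoff" (rain + snowmelt), mm/day.
import math, random

random.seed(1)
annual_max = [random.gauss(60.0, 15.0) for _ in range(40)]   # synthetic series

n = len(annual_max)
mean = sum(annual_max) / n
std = (sum((x - mean) ** 2 for x in annual_max) / (n - 1)) ** 0.5

# Gumbel parameters by the method of moments
beta = std * math.sqrt(6) / math.pi
mu = mean - 0.5772 * beta

def design_event(return_period_years):
    p = 1 - 1 / return_period_years
    return mu - beta * math.log(-math.log(p))   # Gumbel quantile

for T in (10, 50, 100):
    print(f"{T}-yr daily water input: {design_event(T):.1f} mm")
```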
Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image
NASA Astrophysics Data System (ADS)
Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti
2016-06-01
An object in an image, when analyzed further, shows characteristics that distinguish one object from another in the image. Characteristics used in object recognition in an image can be color, shape, pattern, texture, and spatial information that represent objects in the digital image. A method has recently been developed for image feature extraction on objects that combines curve analysis (simple curves) and a search of the object's chain code. This study develops an algorithm for the analysis and recognition of curve type as the basis for object recognition in images, proposing the addition of complex-curve characteristics with a maximum of four branches to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.
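For concreteness, the sketch below implements the standard Freeman 8-direction chain code for a simple open curve; the paper's complex-curve branches would show up as pixels with more than two chain neighbors, which this sketch does not handle.

```python
# Freeman 8-direction chain code: each step between consecutive boundary
# pixels is coded 0-7.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes

curve = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 3), (0, 3)]
print(chain_code(curve))   # -> [0, 1, 2, 3, 4]
```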
Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III
1996-01-01
Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.
Optimization of Acoustic Pressure Measurements for Impedance Eduction
NASA Technical Reports Server (NTRS)
Jones, M. G.; Watson, W. R.; Nark, D. M.
2007-01-01
As noise constraints become increasingly stringent, there is continued emphasis on the development of improved acoustic liner concepts to reduce the amount of fan noise radiated to communities surrounding airports. As a result, multiple analytical prediction tools and experimental rigs have been developed by industry and academia to support liner evaluation. NASA Langley has also placed considerable effort in this area over the last three decades. More recently, a finite element code (Q3D) based on a quasi-3D implementation of the convected Helmholtz equation has been combined with measured data acquired in the Langley Grazing Incidence Tube (GIT) to educe liner impedance in the presence of grazing flow. A new Curved Duct Test Rig (CDTR) has also been developed to allow evaluation of liners in the presence of grazing flow and controlled, higher-order modes, with straight and curved waveguides. Upgraded versions of each of these two test rigs are expected to begin operation by early 2008. The Grazing Flow Impedance Tube (GFIT) will replace the GIT, and additional capabilities will be incorporated into the CDTR. The current investigation uses the Q3D finite element code to evaluate some of the key capabilities of these two test rigs. First, the Q3D code is used to evaluate the microphone distribution designed for the GFIT. Liners ranging in length from 51 to 610 mm are investigated to determine whether acceptable impedance eduction can be achieved with microphones placed on the wall opposite the liner. This analysis indicates the best results are achieved for liner lengths of at least 203 mm. Next, the effects of moving this GFIT microphone array to the wall adjacent to the liner are evaluated, and acceptable results are achieved if the microphones are placed off the centerline. Finally, the code is used to investigate potential microphone placements in the CDTR rigid wall adjacent to the wall containing an acoustic liner, to determine whether sufficient fidelity can be achieved with the 32 microphones available for this purpose. Initial results indicate 32 microphones can provide acceptable measurements to support impedance eduction with this test rig.
Zhou, Zhongqiang; Kecman, Maja; Chen, Tingting; Liu, Tianyu; Jin, Ling; Chen, Shangji; Chen, Qianyun; He, Mingguang; Silver, Josh; Moore, Bruce; Congdon, Nathan
2014-01-01
Purpose: To identify the specific characteristics making glasses designs, particularly those compatible with adjustable glasses, more or less appealing to Chinese children and their parents. Patients and Methods: Primary and secondary school children from urban and rural China with ≤ −1.00 diopters of bilateral myopia, and their parents, ranked four conventional-style frames identified by local optical shops as popular versus four child-specific frames compatible with adjustable spectacles. Scores based on the proportion of the maximum possible ranking were computed for each style. Selected children and their parents also participated in focus groups (FGs) discussing spectacle design preference. Recordings were transcribed and coded by two independent reviewers using NVivo software. Results: Among 136 urban primary school children (age range 9-11 years), 290 rural secondary school children (11-17 years), and 16 parents, all adjustable-style frames (scores on a 0-100 scale, 25.7-62.4) were ranked behind all conventional frames (63.0-87.5). For eight FGs including 12 primary children, 26 secondary children, and 16 parents, average kappa values for NVivo coding were 0.81 (students) and 0.70 (parents). All groups agreed that the key changes to make adjustable designs more attractive were altering the round lenses to rectangular or oval shapes and adding curved earpieces for more stable wear. The thick frames of the adjustable designs were considered stylish, and children indicated they would wear them if the lens shape were modified. Conclusions: Current adjustable lens designs are unattractive to Chinese children and their parents, though this study identified specific modifications which would make them more appealing. PMID:24594799
NASA Astrophysics Data System (ADS)
Yan, J. P.; Seidel, U.; Koutnik, J.
2012-11-01
The hydrodynamics of a reduced-scale model of a radial pump-turbine is investigated under off-design operating conditions, involving runaway and the "S-shape" turbine brake curve at low positive discharge. It is a low-specific-speed pump-turbine of Francis type with 9 impeller blades, 20 stay vanes, and 20 guide vanes. The computational domain includes the entire water passage from the spiral casing inlet to the draft tube outlet. Completely structured hexahedral meshes generated by the commercial software ANSYS-ICEM are employed. The unsteady incompressible simulations are performed using the commercial code ANSYS-CFX13. For turbulence modeling, the standard k-ε model is applied. The numerical results at different operating points are compared to the experimental results. The predicted pressure amplitude is in good agreement with the experimental data, and the amplitude of the normal force on the impeller is in a reasonable range. The detailed analysis reveals the onset of flow instabilities when the machine is brought from a regular operating condition to runaway and turbine brake mode. Furthermore, the rotating stall phenomena are well captured at the runaway condition as well as at the low-discharge operating condition, with one stall cell rotating inside and around the impeller at about 70% of its rotational frequency. Moreover, the rotating stall is found to be the effect of rotating flow separations developed in several consecutive impeller channels, which lead to their blockage. The reliable simulation of S-curve characteristics in pump-turbines is a basic requirement for design and optimization at off-design operating conditions.
Design and analysis of grid stiffened fuselage panel with curved stiffeners
NASA Astrophysics Data System (ADS)
Hemanth, Bharath; Babu, N. C. Mahendra; Shivakumar, H. G.; Srikari, S.
2018-04-01
Designing and analyzing a grid stiffened panel to understand the effect of stiffeners on panel stiffness is crucial to designing grid stiffened cylinders for fuselage applications. Traditionally, only straight stiffeners were used due to limited manufacturing capabilities; in recent years, grid stiffened structures (GSS) with curved stiffeners have become a reality. The present work is on a flat grid stiffened panel, and the focus is to realize the change in stiffness obtained by converting the straight stiffeners of an isogrid panel to curved stiffeners. An isogrid stiffened panel for which experimental results were available was identified from the literature and considered for replacing straight stiffeners with curved stiffeners. Defining and designing the curve for curved stiffeners that can replace straight stiffeners in an isogrid pattern is crucial. An FE model of the grid stiffened fuselage panel with the isogrid pattern identified from the literature was developed and evaluated for stiffness. For the same panel, a curved grid pattern to enhance the stiffness of the panel was designed following an existing design procedure. An FE model of the grid stiffened fuselage panel with the designed curved stiffeners was developed and evaluated for stiffness. It is established that the stiffness of the panel can be increased by a minimum of 2.82% to a maximum of 11.93% by using curved stiffeners of a particular curvature as a replacement for straight stiffeners in an isogrid pattern, with a slight mass penalty.
Parametric geometric model and shape optimization of an underwater glider with blended-wing-body
NASA Astrophysics Data System (ADS)
Sun, Chunya; Song, Baowei; Wang, Peng
2015-11-01
The underwater glider, as a new kind of autonomous underwater vehicle, has many merits such as long range, extended duration, and low cost. The shape of an underwater glider is an important factor in determining its hydrodynamic efficiency. In this paper, a high lift-to-drag-ratio configuration, the Blended-Wing-Body (BWB), is used to design a small civilian underwater glider. In the parametric geometric model of the BWB underwater glider, the planform is defined with a Bezier curve and a straight line, and the section is defined with the symmetrical airfoil NACA 0012. Computational investigations are carried out to study the hydrodynamic performance of the glider using the commercial Computational Fluid Dynamics (CFD) code Fluent. The Kriging-based genetic algorithm, called Efficient Global Optimization (EGO), is applied to hydrodynamic design optimization. The result demonstrates that the BWB underwater glider has excellent hydrodynamic performance, and the lift-to-drag ratio of the initial design is increased by 7% in the EGO process.
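The planform parameterization can be pictured with a small sketch: a cubic Bezier curve for the blended leading edge joined to a straight trailing-edge segment. The control points below are invented, not the glider's actual geometry.

```python
# Cubic Bezier sketch of a blended-wing-body planform outline.
def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    mt = 1 - t
    return tuple(mt**3 * a + 3 * mt**2 * t * b + 3 * mt * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Hypothetical leading-edge control points (x: chordwise, y: spanwise), meters
P0, P1, P2, P3 = (0.0, 0.0), (0.3, 0.1), (0.5, 0.6), (0.6, 1.2)

leading_edge = [bezier(P0, P1, P2, P3, i / 20) for i in range(21)]
trailing_edge = [(1.0, 0.0), (0.9, 1.2)]    # straight segment closing the planform

print("LE tip:", tuple(round(v, 3) for v in leading_edge[-1]))
```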
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines cooperation gain and channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
NASA Astrophysics Data System (ADS)
Chen, Shaobo; Chen, Pingxiuqi; Shao, Qiliang; Basha Shaik, Nazeem; Xie, Jiafeng
2017-05-01
Elliptic curve cryptography (ECC) provides much stronger security per bit than traditional cryptosystems, and hence it is well suited to secure communication in the smart grid. On the other hand, the secure implementation of finite field multiplication over GF(2^m) is considered the bottleneck of ECC. In this paper, we present a novel obfuscation strategy for the secure implementation of a systolic field multiplier for ECC in the smart grid. First, for the first time, we propose a novel obfuscation technique to derive an obfuscated systolic finite field multiplier for ECC implementation. Then, we employ a DNA cryptography coding strategy to further obfuscate the field multiplier. Finally, we obtain the area-time-power complexity of the proposed field multiplier to confirm the efficiency of the proposed design. The proposed design is highly obfuscated with low overhead, making it suitable for secure cryptosystems in the smart grid.
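For background, the operation being protected can be sketched as carry-less polynomial multiplication with modular reduction. The bit-serial version below, using the AES field polynomial over GF(2^8) as an example, is the textbook algorithm, not the paper's obfuscated systolic design.

```python
# Multiplication in GF(2^m): carry-less (XOR) polynomial multiplication
# reduced modulo an irreducible polynomial.
M = 8
IRRED = 0b100011011   # x^8 + x^4 + x^3 + x + 1 (the AES field polynomial)

def gf_mult(a, b, m=M, poly=IRRED):
    result = 0
    for _ in range(m):
        if b & 1:
            result ^= a          # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a >> m:               # degree reached m: reduce modulo poly
            a ^= poly
    return result

print(hex(gf_mult(0x57, 0x83)))  # 0xc1, the standard AES-field example
```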
Broadband Photometric Reverberation Mapping Analysis on SDSS-RM and Stripe 82 Quasars
NASA Astrophysics Data System (ADS)
Zhang, Haowen; Yang, Qian; Wu, Xue-Bing
2018-02-01
We modified the broadband photometric reverberation mapping (PRM) code JAVELIN and tested its ability to recover broad-line region time delays consistent with those of the spectroscopic reverberation mapping (SRM) project SDSS-RM. The broadband light curves of SDSS-RM quasars, produced by convolution with the system transmission curves, were used in the test. We found that under similar sampling conditions (evenly and frequently sampled), the key factor determining whether the broadband PRM code can yield lags consistent with the SRM project is the flux ratio of the broad emission line to the reference continuum, which is in line with previous findings. We further found a critical line-to-continuum flux ratio, about 6%, above which the mean of the ratios between the lags from PRM and SRM becomes closer to unity and the scatter is markedly reduced. We also tested our code on a subset of SDSS Stripe 82 quasars, and found that our program tends to give biased lag estimations due to the observation gaps when the R-L relation prior in the Markov Chain Monte Carlo is discarded. The performance of the damped random walk (DRW) model and the power-law (PL) structure function model for broadband PRM were compared. We found that given either SDSS-RM-like or Stripe 82-like light curves, the DRW model performs better in carrying out broadband PRM than the PL model.
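The DRW process JAVELIN assumes can be simulated exactly on an irregular time grid with AR(1) updates; here is a short sketch with illustrative parameters tau and sigma.

```python
# Damped-random-walk (Ornstein-Uhlenbeck) light curve on irregular times,
# sampled exactly with AR(1) updates. Parameters are illustrative.
import math, random

def simulate_drw(times, tau=100.0, sigma=0.1, mean=0.0, seed=42):
    """Exact DRW sampling at arbitrary (sorted) times."""
    random.seed(seed)
    flux = [random.gauss(mean, sigma)]
    for dt in (t1 - t0 for t0, t1 in zip(times, times[1:])):
        rho = math.exp(-dt / tau)                  # correlation over the gap
        mu = mean + rho * (flux[-1] - mean)
        sd = sigma * math.sqrt(1 - rho * rho)
        flux.append(random.gauss(mu, sd))
    return flux

times = sorted(random.uniform(0, 500) for _ in range(100))   # irregular epochs
lc = simulate_drw(times)
print(f"simulated {len(lc)} epochs, flux range [{min(lc):.3f}, {max(lc):.3f}]")
```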
Flow range enhancement by secondary flow effect in low solidity circular cascade diffusers
NASA Astrophysics Data System (ADS)
Sakaguchi, Daisaku; Tun, Min Thaw; Mizokoshi, Kanata; Kishikawa, Daiki
2014-08-01
A high pressure ratio and a wide operating range are strongly required for compressors and blowers. The technical issue in design is suppressing flow separation at small flow rates without deteriorating efficiency at the design flow rate. Numerical simulation is very effective in the design procedure; however, its cost is generally high during the practical design process, and it is difficult to confirm an optimal design that combines many parameters. A multi-objective optimization technique has been proposed for solving this problem in the practical design process. In this study, a Low Solidity circular cascade Diffuser (LSD) in a centrifugal blower is successfully designed by means of a multi-objective optimization technique. An optimization code with a meta-model-assisted evolutionary algorithm is used with the commercial CFD code ANSYS-CFX. The optimization aims at improving the static pressure coefficient at the design point and at the low flow rate condition while constraining the slope of the lift coefficient curve. Moreover, a small tip clearance of the LSD blade was applied in order to activate and stabilize the secondary flow effect at small flow rates. The optimized LSD blade has an operating range extended by 114% towards smaller flow rates as compared to the baseline design, without deteriorating the diffuser pressure recovery at the design point. The diffuser pressure rise and operating flow range of the optimized LSD blade are experimentally verified by an overall performance test. The detailed flow in the diffuser is also examined by means of a Particle Image Velocimeter. The secondary flow is clearly captured by PIV and spreads across the whole LSD blade pitch. It is found that the optimized LSD blade shows good improvement of the blade loading over the whole operating range, while at small flow rates the flow separation on the LSD blade has been successfully suppressed by the secondary flow effect.
The Triangle: a Multiprocessor Architecture for Fast Curve and Surface Generation.
1987-08-01
Keywords: B-splines, computer-aided geometric design, curves and surfaces, graphics hardware.
Apsidal rotation in the eclipsing binary AG Persei
NASA Technical Reports Server (NTRS)
Koch, Robert H.; Woodward, Edith J.
1987-01-01
New three-filter light curves of AG Per are given. These yield times of minimum light in accord with the known rate of apsidal rotation but do not improve that rate. These light curves and all other published historical ones have been treated with the code EBOP and are shown to give largely consistent geometric and photometric parameters no matter which orientation of the orbit is displayed to the observer.
Advanced Vibration Analysis Tool Developed for Robust Engine Rotor Designs
NASA Technical Reports Server (NTRS)
Min, James B.
2005-01-01
The primary objective of this research program is to develop vibration analysis tools, design tools, and design strategies to significantly improve the safety and robustness of turbine engine rotors. Bladed disks in turbine engines always feature small, random blade-to-blade differences, or mistuning. Mistuning can lead to a dramatic increase in blade forced-response amplitudes and stresses. Ultimately, this results in high-cycle fatigue, which is a major safety and cost concern. In this research program, the necessary steps will be taken to transform a state-of-the-art vibration analysis tool, the Turbo-Reduce forced-response prediction code, into an effective design tool by enhancing and extending the underlying modeling and analysis methods. Furthermore, novel techniques will be developed to assess the safety of a given design. In particular, a procedure will be established for using natural-frequency curve veerings to identify ranges of operating conditions (rotational speeds and engine orders) in which there is a great risk that the rotor blades will suffer high stresses. This work also will aid statistical studies of the forced response by reducing the necessary number of simulations. Finally, new strategies for improving the design of rotors will be pursued.
A finite area scheme for shallow granular flows on three-dimensional surfaces
NASA Astrophysics Data System (ADS)
Rauter, Matthias
2017-04-01
Shallow granular flow models have become a popular tool for the estimation of natural hazards, such as landslides, debris flows and avalanches. The shallowness of the flow makes it possible to reduce the three-dimensional governing equations to a quasi two-dimensional system. Three-dimensional flow fields are replaced by their depth-integrated two-dimensional counterparts, which yields a robust and fast method [1]. A solution for a simple shallow granular flow model, based on the so-called finite area method [3], is presented. The finite area method is an adaption of the finite volume method [2] to two-dimensional curved surfaces in three-dimensional space. This method handles the three-dimensional basal topography in a simple way, making the model suitable for arbitrary (but mildly curved) topography, such as natural terrain. Furthermore, the implementation into the open source software OpenFOAM [4] is shown. OpenFOAM is a popular computational fluid dynamics application, designed so that the top-level code mimics the mathematical governing equations. This makes the code easy to read and extendable to more sophisticated models. Finally, some hints on how to get started with the code and how to extend the basic model will be given. I gratefully acknowledge the financial support by the OEAW project "beyond dense flow avalanches". [1] Savage, S. B. & Hutter, K. 1989 The motion of a finite mass of granular material down a rough incline. Journal of Fluid Mechanics 199, 177-215. [2] Ferziger, J. & Peric, M. 2002 Computational methods for fluid dynamics, 3rd edn. Springer. [3] Tukovic, Z. & Jasak, H. 2012 A moving mesh finite volume interface tracking method for surface tension dominated interfacial fluid flow. Computers & Fluids 55, 70-84. [4] Weller, H. G., Tabor, G., Jasak, H. & Fureby, C. 1998 A tensorial approach to computational continuum mechanics using object-oriented techniques. Computers in Physics 12(6), 620-631.
Light Curves of the Type II-P Supernova SN 2017eaw: The First 200 Days
NASA Astrophysics Data System (ADS)
Tsvetkov, D. Yu.; Shugarov, S. Yu.; Volkov, I. M.; Pavlyuk, N. N.; Vozyakova, O. V.; Shatsky, N. I.; Nikiforova, A. A.; Troitsky, I. S.; Troitskaya, Yu. V.; Baklanov, P. V.
2018-05-01
We present the results of our UBVRI photometry for the type II-P supernova SN 2017eaw in NGC 6946 obtained from May 14 to December 7, 2017, at several telescopes, including the 2.5-m telescope at the Caucasus High-Altitude Observatory of the SAI MSU. The dates and magnitudes at maximum light and the light-curve parameters have been determined. The color evolution, extinction, and peak luminosity of SN 2017eaw are discussed. The results of our preliminary radiation-gasdynamic simulations of its light curves with the STELLA code describe satisfactorily the UBVRI observational data.
NASA Astrophysics Data System (ADS)
Yan, H.; Sun, N.; Wigmosta, M. S.; Hou, Z.
2017-12-01
There is a renewed focus on the design of infrastructure resilient to extreme hydrometeorological events. While precipitation-based intensity-duration-frequency (IDF) curves are commonly used as part of infrastructure design, a large percentage of peak runoff events in snow-dominated regions are caused by snowmelt, particularly during rain-on-snow (ROS) events. In this study, we examined next-generation IDF (NG-IDF) curves, which include snowmelt and ROS events, to improve infrastructure design in snow-dominated regions. We compared NG-IDF curves to standard precipitation-based IDF curves for estimates of extreme events at 377 Snowpack Telemetry (SNOTEL) stations across the western United States with at least 30 years of high-quality records. We found 38% of the stations were subject to under-design, many with significant underestimation of 100-year extreme events, for which the precipitation-based IDF curves can underestimate water potentially available for runoff by as much as 121% due to snowmelt and ROS events. The regions with the greatest potential for under-design were in the Pacific Northwest, the Sierra Nevada, and the Middle and Southern Rockies. We also found the potential for over-design at 27% of the stations, primarily in the Middle Rockies and Arizona mountains. These results demonstrate the need to consider snow processes in the development of IDF curves for engineering design procedures in snow-dominated regions.
13 CFR 121.1103 - What are the procedures for appealing a NAICS code designation?
Code of Federal Regulations, 2010 CFR
2010-01-01
13 CFR § 121.1103: An appeal of a NAICS code designation and applicable size standard must be served and filed within 10 calendar days after...
Simplified curve fits for the thermodynamic properties of equilibrium air
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.
1986-01-01
New improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,ρ), a = a(e,ρ), T = T(e,ρ), s = s(e,ρ), T = T(p,ρ), h = h(p,ρ), ρ = ρ(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 100 amagats (ρ/ρ_0).
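A minimal Python sketch of the Grabau-type blending idea: two polynomial branches joined by a logistic transition so the piecewise surface stays smooth. The coefficients, transition location, and logistic form below are illustrative assumptions, not the fits tabulated in the report.

```python
import numpy as np

def grabau_blend(x, poly_lo, poly_hi, x0, k):
    """Blend two polynomial branches with a logistic transition at x0.

    poly_lo, poly_hi : coefficient arrays (np.polyval ordering)
    x0, k            : transition center and steepness
    """
    w = 1.0 / (1.0 + np.exp(-k * (x - x0)))   # 0 -> low branch, 1 -> high branch
    return (1.0 - w) * np.polyval(poly_lo, x) + w * np.polyval(poly_hi, x)

# Example: blend two fits of log10(p) vs log10(e) across a hypothetical
# dissociation boundary at x0 = 0.5 (all numbers made up for illustration).
x = np.linspace(-1.0, 2.0, 7)
print(grabau_blend(x, poly_lo=[1.0, 0.2], poly_hi=[1.4, -0.1], x0=0.5, k=8.0))
```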
Enhancement of surface definition and gridding in the EAGLE code
NASA Technical Reports Server (NTRS)
Thompson, Joe F.
1991-01-01
Algorithms for smoothing of curves and surfaces for the EAGLE grid generation program are presented. The method uses an existing automated technique which detects undesirable geometric characteristics by using a local fairness criterion. The geometry entity is then smoothed by repeated removal and insertion of spline knots in the vicinity of the geometric irregularity. The smoothing algorithm is formulated for use with curves in Beta spline form and tensor product B-spline surfaces.
Shaping electromagnetic waves using software-automatically-designed metasurfaces.
Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie
2017-06-15
We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase differences for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units in a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation with specific directions. Two complicated functional metasurfaces with circularly and elliptically shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of automatic design by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.
Salah, Wa'el; Sanchez del Rio, Manuel
2011-05-01
The layout and the optical performance of the SGM branch of the D09 bending-magnet beamline, under construction at SESAME, are presented. The beamline is based on the Dragon-type design and delivers photons over the spectral range 15-250 eV. One fixed entrance slit and a movable exit slit are used. The performance of the beamline has been characterized by calculating the mirror reflectivities and the grating efficiencies. The flux and resolution were calculated by ray-tracing using SHADOW. The grating diffraction efficiencies were calculated using the GRADIF code. The results and the overall shapes of the predicted curves are in reasonable agreement with those obtained using an analytical formula.
Geometry modeling and multi-block grid generation for turbomachinery configurations
NASA Technical Reports Server (NTRS)
Shih, Ming H.; Soni, Bharat K.
1992-01-01
An interactive 3D grid generation code, Turbomachinery Interactive Grid genERation (TIGER), was developed for general turbomachinery configurations. TIGER features the automatic generation of multi-block structured grids around multiple blade rows for internal, external, or internal-external turbomachinery flow fields. Utilization of Bézier curves achieves a smooth grid and better orthogonality. TIGER generates the algebraic grid automatically based on geometric information provided by its built-in pseudo-AI algorithm. However, due to the large variation of turbomachinery configurations, this initial grid may not always be as good as desired. TIGER therefore provides graphical user interaction during the process, allowing the user to design, modify, and manipulate the grid, including the capability of elliptic surface grid generation.
Small passenger car transmission test-Chevrolet 200 transmission
NASA Technical Reports Server (NTRS)
Bujold, M. P.
1980-01-01
The small passenger car transmission was tested to supply electric vehicle manufacturers with technical information regarding the performance of commercially available transmissions which would enable them to design a more energy efficient vehicle. With this information the manufacturers could estimate vehicle driving range as well as speed and torque requirements for specific road load performance characteristics. A 1979 Chevrolet Model 200 automatic transmission was tested per a passenger car automatic transmission test code (SAE J651b) which required drive performance, coast performance, and no-load test conditions. The transmission attained maximum efficiencies in the mid-eighty percent range for both the drive performance tests and the coast performance tests. Torque, speed and efficiency curves map the complete performance characteristics of the Chevrolet Model 200 transmission.
Plasma property and performance prediction for mercury ion thrusters
NASA Technical Reports Server (NTRS)
Longhurst, G. R.; Wilbur, P. J.
1979-01-01
The discharge chambers of mercury ion thrusters are modelled so that the principal effects and processes which govern discharge plasma properties and thruster performance are described. The conservation relations for mass, charge and energy, when applied to the Maxwellian electron population in the ion production region, yield equations which may be made one-dimensional by the proper choice of coordinates. Solutions to these equations with the appropriate boundary conditions give electron density and temperature profiles which agree reasonably well with measurements. It is then possible to estimate plasma properties from thruster design data and those operating parameters which are directly controllable. By varying the operating parameter inputs to the computer code written to solve these equations, performance curves are obtained which agree quite well with measurements.
System Simulation of Nuclear Power Plant by Coupling RELAP5 and Matlab/Simulink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng Lin; Dong Hou; Zhihong Xu
2006-07-01
Since the RELAP5 code has general and advanced features in thermal-hydraulic computation, it has been widely used in transient and accident safety analysis, experiment planning analysis, and system simulation. We wish to design, analyze, and verify a new Instrumentation and Control (I&C) system of a Nuclear Power Plant (NPP) based on this best-estimate code, and eventually develop our own engineering simulator. Because of the limited capability of RELAP5 for simulating control and protection systems, it is necessary to expand this function for efficient, accurate, and flexible design and simulation of I&C systems. Matlab/Simulink, a scientific computation package and a powerful tool for the research and simulation of plant process control, can compensate for this limitation. The software is therefore selected as the I&C part to be coupled with the RELAP5 code to realize system simulation of NPPs. There are two key techniques to be solved. One is dynamic data exchange, by which Matlab/Simulink receives plant parameters and returns control results. A database is used to communicate between the two codes; accordingly, a Dynamic Link Library (DLL) is applied to link the database in RELAP5, while a DLL and an S-Function are applied in Matlab/Simulink. The other problem is synchronization between the two codes to ensure consistency in global simulation time. Because Matlab/Simulink always computes faster than RELAP5, the simulation time is sent by RELAP5 and received by Matlab/Simulink, and a time-control subroutine added to the simulation procedure of Matlab/Simulink controls its advancement. Through these means, Matlab/Simulink is dynamically coupled with RELAP5. Thus, in Matlab/Simulink, we can freely design the control and protection logic of NPPs and test it with best-estimate plant model feedback. A test is shown to illustrate that the results of the coupled calculation are nearly the same as those of RELAP5 alone with built-in control logic. In practice, a real Pressurized Water Reactor (PWR) is modeled with the RELAP5 code, and its main control and protection system is duplicated in Matlab/Simulink. Some steady states and transients are calculated under the control of these I&C systems, and the results are compared with plant test curves. The application shows that exact system simulation of NPPs is possible by coupling RELAP5 and Matlab/Simulink. This paper focuses on the coupling method, the plant thermal-hydraulic model, the main control logics, and test and application results. (authors)
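A minimal Python sketch of the lockstep synchronization idea: the controller never advances past the plant's published simulation time. An in-process queue stands in for the database/DLL exchange layer, and toy first-order/proportional models stand in for RELAP5 and the Simulink logic; none of this is the actual coupling code.

```python
import queue, threading

DT, T_END = 0.1, 1.0
plant_to_ctrl = queue.Queue(maxsize=1)
ctrl_to_plant = queue.Queue(maxsize=1)

def plant():
    power, t = 1.0, 0.0
    while t < T_END:
        demand = ctrl_to_plant.get()            # control result from last step
        power += DT * (demand - power)          # toy first-order plant response
        t += DT
        plant_to_ctrl.put((t, power))           # publish time + plant parameter

def controller():
    setpoint = 0.5
    ctrl_to_plant.put(setpoint)                 # prime the exchange
    while True:
        t, power = plant_to_ctrl.get()          # wait: never run ahead of plant
        if t >= T_END:
            break
        demand = setpoint + 2.0 * (setpoint - power)  # toy proportional logic
        ctrl_to_plant.put(demand)
        print(f"t={t:.1f}s power={power:.3f} demand={demand:.3f}")

threading.Thread(target=plant).start()
controller()
```

The blocking `get` calls play the role of the time-control subroutine: the faster side simply waits for the slower side's data before taking its next step.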
An Approach for Assessing Delamination Propagation Capabilities in Commercial Finite Element Codes
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2007-01-01
An approach for assessing the delamination propagation capabilities in commercial finite element codes is presented and demonstrated for one code. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. Good agreement between the load-displacement relationship obtained from the propagation analysis results and the benchmark results could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as may be expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.
Qualitatively different coding of symbolic and nonsymbolic numbers in the human brain.
Lyons, Ian M; Ansari, Daniel; Beilock, Sian L
2015-02-01
Are symbolic and nonsymbolic numbers coded differently in the brain? Neuronal data indicate that overlap in numerical tuning curves is a hallmark of the approximate, analogue nature of nonsymbolic number representation. Consequently, patterns of fMRI activity should be more correlated when the representational overlap between two numbers is relatively high. In bilateral intraparietal sulci (IPS), for nonsymbolic numbers, the pattern of voxelwise correlations between pairs of numbers mirrored the amount of overlap in their tuning curves under the assumption of approximate, analogue coding. In contrast, symbolic numbers showed a flat field of modest correlations more consistent with discrete, categorical representation (no systematic overlap between numbers). Directly correlating activity patterns for a given number across formats (e.g., the numeral "6" with six dots) showed no evidence of shared symbolic and nonsymbolic number-specific representations. Overall (univariate) activity in bilateral IPS was well fit by the log of the number being processed for both nonsymbolic and symbolic numbers. IPS activity is thus sensitive to numerosity regardless of format; however, the nature in which symbolic and nonsymbolic numbers are encoded is fundamentally different. © 2014 Wiley Periodicals, Inc.
Updates to building-code maps for the 2015 NEHRP recommended seismic provisions
Luco, Nicolas; Bachman, Robert; Crouse, C.B; Harris, James R.; Hooper, John D.; Kircher, Charles A.; Caldwell, Phillp; Rukstales, Kenneth S.
2015-01-01
With the 2014 update of the U.S. Geological Survey (USGS) National Seismic Hazard Model (NSHM) as a basis, the Building Seismic Safety Council (BSSC) has updated the earthquake ground motion maps in the National Earthquake Hazards Reduction Program (NEHRP) Recommended Seismic Provisions for New Buildings and Other Structures, with partial funding from the Federal Emergency Management Agency. Anticipated adoption of the updated maps into the American Society of Civil Engineers Minimum Design Loads for Buildings and Other Structures and the International Building and Residential Codes is underway. Relative to the ground motions in the prior edition of each of these documents, most of the updated values are within a ±20% change. The larger changes are, in most cases, due to the USGS NSHM updates, reasons for which are given in companion publications. In some cases, the larger changes are partly due to a BSSC update of the slope of the fragility curve that is used to calculate the risk-targeted ground motions, and/or the introduction by BSSC of a quantitative definition of “active faults” used to calculate deterministic ground motions.
Force-free electrodynamics in dynamical curved spacetimes
NASA Astrophysics Data System (ADS)
McWilliams, Sean
2015-04-01
We present results from our study of force-free electrodynamics in curved spacetimes. Specifically, we present several improvements to what has become the established set of evolution equations, and we apply these to study the nonlinear stability of analytically known force-free solutions for the first time. We implement our method in a new pseudo-spectral code built on top of the SpEC code for evolving dynamic spacetimes. We also revisit these known solutions and attempt to clarify some interesting properties that render them analytically tractable. Finally, we preview some new work that similarly revisits the established approach to solving another problem in numerical relativity: the post-merger recoil from asymmetric gravitational-wave emission. These new results may have significant implications for the parameter dependence of recoils, and consequently for the statistical expectations for recoil velocities of merged systems.
NASA Technical Reports Server (NTRS)
Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.
1987-01-01
Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment using advanced computational codes to calculate the components required is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.
NASA Astrophysics Data System (ADS)
Kim, Byung Sik; Jeung, Se Jin; Lee, Dong Seop; Han, Woo Suk
2015-04-01
As abnormal rainfall events become more frequent and severe under climate change and climate variability, the question has arisen of whether drainage systems are designed to cope with such abnormal rainfall. Usually, drainage systems have been designed from rainfall I-D-F (Intensity-Duration-Frequency) curves under the assumption that the I-D-F curve is stationary. This design approach cannot account for extreme rainfall conditions under which the I-D-F curve becomes non-stationary through climate change and variability. Therefore, the assumption of a stationary I-D-F curve may no longer be valid for designing drainage systems, because climate change alters the characteristics of extreme rainfall events so that they become non-stationary. In this paper, design rainfall by rainfall duration and non-stationary I-D-F curves are derived from a conditional GEV distribution that accounts for the non-stationarity of rainfall characteristics. Furthermore, the effect of increased rainfall intensity on the design peak flow was analyzed with the distributed rainfall-runoff model S-RAT (Spatial Runoff Assessment Tool). Although there are some differences by rainfall duration, the traditional I-D-F curves underestimate the extreme rainfall events for high-frequency rainfall conditions. As a result, this paper suggests that traditional I-D-F curves may not be suitable for the design of drainage systems under climate change. Keywords: Drainage system, Climate change, Non-stationary, I-D-F curves. This research was supported by a grant 'Development of multi-function debris flow control technique considering extreme rainfall event' [NEMA-Natural-2014-74] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of KOREA
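A minimal sketch of one way to build a non-stationary design value, assuming a Gumbel distribution (the GEV with zero shape) whose location parameter drifts linearly in time; the study's conditional GEV formulation is more general, and the data below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
years = np.arange(40)
ann_max = rng.gumbel(loc=50 + 0.4 * years, scale=12)   # synthetic mm/hr maxima

def neg_log_lik(theta):
    # Gumbel NLL with time-varying location mu(t) = mu0 + mu1 * t
    mu0, mu1, log_sigma = theta
    sigma = np.exp(log_sigma)                 # keep the scale positive
    z = (ann_max - (mu0 + mu1 * years)) / sigma
    return np.sum(np.log(sigma) + z + np.exp(-z))

fit = minimize(neg_log_lik, x0=[ann_max.mean(), 0.0, np.log(ann_max.std())])
mu0, mu1, sigma = fit.x[0], fit.x[1], np.exp(fit.x[2])

# 100-year design rainfall in year t: Gumbel quantile with exceedance 1/100
T = 100
for t in (0, 39):
    x_T = mu0 + mu1 * t - sigma * np.log(-np.log(1 - 1 / T))
    print(f"year {t}: 100-yr design rainfall = {x_T:.1f} mm/hr")
```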
Sensitivity curves for searches for gravitational-wave backgrounds
NASA Astrophysics Data System (ADS)
Thrane, Eric; Romano, Joseph D.
2013-12-01
We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs “power-law integrated curves.” For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.
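The recipe can be sketched in a few lines: for each power-law index, scale the background until the broadband cross-correlation SNR reaches one, then take the envelope. Here Omega_eff is a toy effective noise spectrum, not a real detector model, and the SNR formula is the standard cross-correlation form assumed for illustration.

```python
import numpy as np

f = np.logspace(1, 3, 400)                      # Hz
f_ref, T_obs = 100.0, 1.0e7                     # reference freq, obs time [s]
Omega_eff = 1e-9 * ((f / f_ref)**-4 + (f / f_ref)**2)   # toy noise, Omega units

betas = np.arange(-8, 9)
curves = []
for beta in betas:
    shape = (f / f_ref)**beta
    # SNR = A * sqrt(2 T * integral df (shape/Omega_eff)^2); set SNR = 1:
    snr_per_A = np.sqrt(2 * T_obs * np.trapz((shape / Omega_eff)**2, f))
    curves.append(shape / snr_per_A)            # Omega_gw(f) giving SNR = 1

omega_pi = np.max(curves, axis=0)               # envelope = PI curve
print(f"PI curve minimum: {omega_pi.min():.2e} at {f[np.argmin(omega_pi)]:.0f} Hz")
```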
VizieR Online Data Catalog: WASP-22, WASP-41, WASP-42, WASP-55 (Southworth+, 2016)
NASA Astrophysics Data System (ADS)
Southworth, J.; Tregloan-Reed, J.; Andersen, M. I.; Calchi Novati, S.; Ciceri, S.; Colque, J. P.; D'Ago, G.; Dominik, M.; Evans, D. F.; Gu, S.-H.; Herrera-Cordova, A.; Hinse, T. C.; Jorgensen, U. G.; Juncher, D.; Kuffmeier, M.; Mancini, L.; Peixinho, N.; Popovas, A.; Rabus, M.; Skottfelt, J.; Tronsgaard, R.; Unda-Sanzana, E.; Wang, X.-B.; Wertz, O.; Alsubai, K. A.; Andersen, J. M.; Bozza, V.; Bramich, D. M.; Burgdorf, M.; Damerdji, Y.; Diehl, C.; Elyiv, A.; Figuera Jaimes, R.; Haugbolle, T.; Hundertmark, M.; Kains, N.; Kerins, E.; Korhonen, H.; Liebig, C.; Mathiasen, M.; Penny, M. T.; Rahvar, S.; Scarpetta, G.; Schmidt, R. W.; Snodgrass, C.; Starkey, D.; Surdej, J.; Vilela, C.; von Essen, C.; Wang, Y.
2018-05-01
17 light curves of transits of the extrasolar planetary systems WASP-22, WASP-41, WASP-42 and WASP-55 are presented. 13 of the light curves were obtained using the Danish 1.54m telescope at ESO La Silla, Chile, in the Bessell R or Bessell I passbands. The other 4 light curves were obtained using the 84cm telescope at Observatorio Cerro Armazones, Chile, using either an R filter or no filter. The errorbars for each transit have been scaled so the best-fitting model (obtained using the JKTEBOP code) has a reduced chi-squared value of 1.0. (4 data files).
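The error-bar scaling used for these light curves is easy to reproduce; a sketch, assuming a model has already been fitted (the catalogue used JKTEBOP; `model` below is any fitted model evaluated at the observation times).

```python
import numpy as np

def rescale_errors(flux, err, model, n_free_params):
    """Scale uncertainties so the best fit has reduced chi-squared of 1."""
    resid = flux - model
    ndof = len(flux) - n_free_params
    chi2_red = np.sum((resid / err)**2) / ndof
    return err * np.sqrt(chi2_red)

# toy demonstration with an overconfident error estimate
rng = np.random.default_rng(0)
model = np.ones(500)
flux = model + rng.normal(0, 2e-3, 500)
err = np.full(500, 1e-3)
print(rescale_errors(flux, err, model, n_free_params=5)[0])   # ~2e-3
```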
National Underground Mines Inventory
1983-10-01
system is well designed to minimize water accumulation on the drift levels. In many areas, sufficient water has accumulated to make the use of boots a...
...four characters designate field office.
17-18 State Code Pic 99 FIPS code for state in which mine is located.
19-21 County Code Pic 999 FIPS code for...
...designate a general product class based on SIC code.
28-29 Mine Type Pic 99 Metal/Nonmetal mine type code. Based on subunit operations code and canvass code
Designing an efficient LT-code with unequal error protection for image transmission
NASA Astrophysics Data System (ADS)
S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.
2015-10-01
The use of images from earth observation satellites is spread over different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high resolution image can produce a better Quality of Service, it leads to transmitters with a high bit rate, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that the source encoder is crucial in an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit, and a channel code can be used to reduce the effect of such failures. In 2002, the Luby Transform code (LT-code) was introduced, and it was shown to be very efficient for the binary erasure channel model. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the image compression algorithm recommended by CCSDS. In fact, to design an LT-code with unequal error protection, the bit stream produced by the CCSDS-recommended algorithm must be partitioned into M disjoint sets of bits. Using the weighted approach, the LT-code produces M different failure probabilities for the sets of bits, p1, ..., pM, leading to a total probability of failure p which is an average of p1, ..., pM. In general, the parameters of an LT-code with unequal error protection are chosen using a heuristic procedure. In this work, we analyze the problem of choosing the LT-code parameters to optimize two figures of merit: (a) the probability of achieving a minimum acceptable PSNR, and (b) the mean PSNR, given that the minimum acceptable PSNR has been achieved. Given the rate-distortion curve achieved by the CCSDS-recommended algorithm, this work establishes a closed form for the mean PSNR (given that the minimum acceptable PSNR has been achieved) as a function of p1, ..., pM. The main contribution of this work is the study of a criterion to select the parameters p1, ..., pM that optimize the performance of image transmission.
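A sketch of the size-weighted failure-probability bookkeeping described above. The class sizes, degree-boost factors, and the simple fixed-overhead "protection budget" model are illustrative assumptions, not the paper's optimization.

```python
import numpy as np

n = np.array([2000, 6000, 12000])          # class sizes (header, coarse, fine)
w = np.array([3.0, 1.5, 1.0])              # weighted-UEP boost factors
p_base = 0.05                              # failure prob. with uniform coding

# toy model: boosting a class's weight divides its failure probability,
# with the weights normalized so the average code overhead stays fixed
w_norm = w * n.sum() / np.dot(w, n)
p = p_base / w_norm                        # per-class failure probabilities
p_total = np.dot(n, p) / n.sum()           # size-weighted average, as above

for i, (ni, pi) in enumerate(zip(n, p), 1):
    print(f"class {i}: n={ni:5d}  p_{i}={pi:.4f}")
print(f"size-weighted total failure probability: {p_total:.4f}")
```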
NASA Astrophysics Data System (ADS)
Srinil, Narakorn; Ma, Bowen; Zhang, Licong
2018-05-01
This study is motivated by an industrial need to better understand the vortex-induced vibration (VIV) of a curved structure subject to current flows with varying directions whose data for model calibration and validation are lacking. In this paper, new experimental investigations on the two-degree-of-freedom in-plane/out-of-plane VIV of a rigid curved circular cylinder immersed in steady and uniform free-stream flows are presented. The principal objective is to examine how the approaching flow direction versus the cylinder curvature plane affects cross-flow and in-line VIV and the associated hydrodynamic properties. This is achieved by testing the curved cylinder in 3 different flow orientations comprising the parallel flows aligned with the curvature vertical plane in convex and concave configurations, and the flows perpendicular to the curvature plane. The case of varying flow velocities in a subcritical flow range with a maximum Reynolds number of about 50,000 is considered for the curved cylinder with a low mass ratio and damping ratio. Experimental results are presented and discussed in terms of the cylinder response amplitudes, inclination angles, mean displacements, motion trajectories, oscillation frequencies, hydrodynamic forces, relative phases, fluid excitation and added inertia coefficients. Comparisons with other experimental results of curved and straight cylinder VIV are also presented. The experiments highlight the important effects of cylinder curvature versus flow orientation on the combined cross-flow/in-line VIV. The maximum (minimum) responses occur in the perpendicular (convex) flow case whereas the extended lower-branch responses occur in the concave flow case. For perpendicular flows, some meaningful features are observed, including the appearances of cross-flow mean displacements and asymmetric eight-shaped motion trajectories due to multiple 2:1:1 resonances where two out-of-plane and one in-plane dominant frequencies are simultaneously excited. Overall VIV phenomena caused by the system asymmetry should be recognised in a prediction model and design codes to capture the combined effects of curved configuration and approaching flow direction.
SU-E-QI-06: Design and Initial Validation of a Precise Capillary Phantom to Test Perfusion Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, R; Iacobucci, G; Khobragade, P
2014-06-15
Purpose: To design a precise perfusion phantom mimicking the capillaries of the brain vasculature which could be used to test various perfusion protocols and the algorithms which generate perfusion maps. Methods: A perfusion phantom was designed in Solidworks and built using additive manufacturing. The phantom had an overall cylindrical shape, 20 mm in both diameter and height, and contained capillaries of 200 μm or 300 μm which were parallel and in contact, making up the inside volume where flow was allowed. We created a flow loop using a peristaltic pump, and contrast agent was injected manually. Digital Subtraction Angiographic images and low-contrast cone beam CT images were acquired after the contrast was injected. These images were analyzed by our own code in the LabVIEW software, and the Time-Density Curve (TDC), MTT, and TTP were calculated. Results: The perfused area was visible in the cone beam CT images; however, individual capillaries were not distinguishable. The Time-Density Curve acquired was accurate, sensitive, and repeatable. The parameters MTT and TTP offered by the phantom were very sensitive to slight changes in the TDC shape. Conclusion: We have created a robust calibrating model for the evaluation of existing perfusion data analysis systems. This approach is extremely sensitive to changes in the flow due to the high temporal resolution and could be used as a gold standard to assist developers in calibrating and testing imaging perfusion systems and software algorithms. Supported by NIH Grant 2R01EB002873 and an equipment grant from Toshiba Medical Systems Corporation.
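A sketch of the two timing metrics, using standard textbook definitions assumed here for illustration (clinical software may define MTT via deconvolution with an arterial input function): TTP as the time of the TDC peak, MTT as the first moment of the baseline-subtracted TDC.

```python
import numpy as np

def ttp_and_mtt(t, density, baseline=0.0):
    c = np.clip(density - baseline, 0.0, None)   # contrast enhancement only
    ttp = t[np.argmax(c)]                        # time to peak
    mtt = np.trapz(t * c, t) / np.trapz(c, t)    # first moment of the curve
    return ttp, mtt

# toy gamma-variate-like TDC sampled at high temporal resolution
t = np.linspace(0, 30, 600)                      # seconds
tdc = (t**2) * np.exp(-t / 3.0)
print("TTP = %.2f s, MTT = %.2f s" % ttp_and_mtt(t, tdc))
```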
The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test
Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain; ...
2016-12-20
Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.
Hyun, Seung Won; Wong, Weng Kee
2016-01-01
We construct an optimal design to simultaneously estimate three common interesting features in a dose-finding trial with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for other objectives. There are optimal designs for dual objectives in the literature but we were unable to find optimal designs for 3 or more objectives to date with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not work well for a 3 or more multiple-objective design problem. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology but our approach is applicable to find multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification in the nominal parameter values and to a variation in the optimality criterion. We also provide computer code for generating tailor made multiple-objective optimal designs. PMID:26565557
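A sketch of the three design objectives on the 4-parameter logistic model named above; the parameter values and the clinically relevant response margin delta are illustrative assumptions.

```python
import numpy as np

def logistic4(dose, lower, upper, ed50, hill):
    """Mean response at `dose` under the 4-parameter logistic model."""
    return lower + (upper - lower) / (1.0 + (ed50 / dose)**hill)

lower, upper, ed50, hill = 5.0, 45.0, 10.0, 1.8
doses = np.logspace(-1, 2, 200)
resp = logistic4(doses, lower, upper, ed50, hill)

# (1) shape of the dose-response curve: the fitted curve `resp` itself
# (2) median effective dose: the 4PL location parameter ED50
# (3) minimum effective dose: smallest dose whose mean response exceeds
#     placebo (the lower asymptote) by a relevant margin delta
delta = 2.0
med = doses[np.argmax(resp >= lower + delta)]
print(f"ED50 = {ed50}, MED (delta={delta}) ~= {med:.2f}")
```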
Close-in Blast Waves from Spherical Charges
NASA Astrophysics Data System (ADS)
Howard, William; Kuhl, Allen
2011-06-01
We study the close-in blast waves created by the detonation of spherical high explosives (HE) charges, via numerical simulations with our Arbitrary-Lagrange-Eulerian (ALE3D) code. We used a finely-resolved, fixed Eulerian 2-D mesh (200 μm per cell) to capture the detonation of the charge, the blast wave propagation in air, and the reflection of the blast wave from an ideal surface. The thermodynamic properties of the detonation products and air were specified by the Cheetah code. A programmed-burn model was used to detonate the charge at a rate based on measured detonation velocities. The results were analyzed to evaluate the: (i) free air pressure-range curves Δp_s(R), (ii) free air impulse curves, (iii) reflected pressure-range curves, and (iv) reflected impulse-range curves. A variety of explosives were studied. Conclusions are: (i) close-in (R < 10 cm/g^(1/3)), each explosive had its own (unique) blast wave (e.g., Δp_s(R, HE) ~ a/R^n, where n is different for each explosive); (ii) these close-in blast waves do not scale with the ``Heat of Detonation'' of the explosive (because close-in, there is not enough time to fully couple the chemical energy to the air via piston work); (iii) instead they are related to the detonation conditions inside the charge. Scaling laws will be proposed for such close-in blast waves.
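Extracting the close-in exponent n is a linear fit in log-log space; a sketch with synthetic numbers standing in for the ALE3D pressure-range output.

```python
import numpy as np

R = np.array([1.0, 2.0, 3.0, 5.0, 8.0])          # scaled range, cm/g^(1/3)
dp = np.array([820.0, 160.0, 62.0, 18.0, 5.9])   # peak overpressure (synthetic)

# fit log(dp) = log(a) - n*log(R); the slope of the log-log line is -n
slope, log_a = np.polyfit(np.log(R), np.log(dp), 1)
a, n = np.exp(log_a), -slope
print(f"delta_p_s ~ {a:.1f} / R^{n:.2f}")
```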
Broadband Photometric Reverberation Mapping Analysis on SDSS-RM and Stripe 82 Quasars
NASA Astrophysics Data System (ADS)
Zhang, Haowen; Yang, Qian; Wu, Xuebing; Shen, Yue
2018-01-01
We extended the broadband photometric reverberation mapping (PRM) code JAVELIN and tested its ability to recover broad line region (BLR) time delays consistent with those from spectroscopic reverberation mapping (SRM) projects. Broadband light curves of SDSS-RM quasars, produced by convolution with the system transmission curves, were used in the test. We find that under similar sampling conditions (evenly and frequently sampled), the key factor determining whether the broadband PRM code can yield lags consistent with spectroscopic projects is the flux ratio of the line to the reference continuum, which is in line with the findings in Zu et al. (2016). We further find a critical line-to-continuum flux ratio, above which the mean of the ratios between the lags from PRM and SRM becomes closer to unity and the scatter is pronouncedly reduced. Based on this flux-ratio criterion, we selected some of the quasars from Hernitschek et al. (2015) and carried out broadband PRM on this subset. The performance of the damped random walk (DRW) model and the power-law (PL) structure function model in broadband PRM are compared using mock light curves with high, even cadences and with low, uneven ones, respectively. We find that the DRW model performs better than the PL model in broadband PRM for both high and low cadence light curves with other data qualities similar to the SDSS-RM quasars.
Comparative analysis of design codes for timber bridges in Canada, the United States, and Europe
James Wacker; James (Scott) Groenier
2010-01-01
The United States recently completed its transition from the allowable stress design code to the load and resistance factor design (LRFD) reliability-based code for the design of most highway bridges. For an international perspective on the LRFD-based bridge codes, a comparative analysis is presented: a study addressed national codes of the United States, Canada, and...
Maximally Informative Stimuli and Tuning Curves for Sigmoidal Rate-Coding Neurons and Populations
NASA Astrophysics Data System (ADS)
McDonnell, Mark D.; Stocks, Nigel G.
2008-08-01
A general method for deriving maximally informative sigmoidal tuning curves for neural systems with small normalized variability is presented. The optimal tuning curve is a nonlinear function of the cumulative distribution function of the stimulus and depends on the mean-variance relationship of the neural system. The derivation is based on a known relationship between Shannon’s mutual information and Fisher information, and the optimality of the Jeffreys prior. It relies on the existence of closed-form solutions to the converse problem of optimizing the stimulus distribution for a given tuning curve. It is shown that maximum mutual information corresponds to constant Fisher information only if the stimulus is uniformly distributed. As an example, the case of sub-Poisson binomial firing statistics is analyzed in detail.
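For the special case of Poisson spiking (an assumption made here for illustration; the paper treats general mean-variance relations), the Fisher information is J(s) = f'(s)²/f(s), and maximizing mutual information makes √J proportional to the stimulus density, giving f(s) = f_max·F(s)² with F the stimulus CDF. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.normal(0.0, 1.0, 100_000)        # samples from stimulus density
f_max = 60.0                                   # peak firing rate, spikes/s

s_grid = np.linspace(-3, 3, 61)
F = np.searchsorted(np.sort(stimuli), s_grid) / stimuli.size   # empirical CDF
tuning = f_max * F**2                          # maximally informative curve

for s, r in zip(s_grid[::15], tuning[::15]):
    print(f"s = {s:+.1f}  ->  rate = {r:5.1f} spikes/s")
```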
UTM, a universal simulator for lightcurves of transiting systems
NASA Astrophysics Data System (ADS)
Deeg, Hans
2009-02-01
The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. Applications of UTM to date have been mainly in the generation of light-curves for the testing of detection algorithms. For the preparation of such tests for the Corot Mission, a special version has been used to generate multicolour light-curves in Corot's passbands. A separate fitting program, UFIT (Universal Fitter), is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.
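UTM itself is IDL; as a language-neutral illustration, here is a Python sketch of the simplest transit model such simulators generalize: a boxcar dip of depth (Rp/Rs)², ignoring limb darkening and ingress/egress shape.

```python
import numpy as np

def box_transit(t, t0, period, duration, depth):
    """Relative flux at times t for a square-well transit."""
    phase = (t - t0 + 0.5 * period) % period - 0.5 * period
    flux = np.ones_like(t)
    flux[np.abs(phase) < 0.5 * duration] -= depth
    return flux

t = np.linspace(0.0, 10.0, 2000)                         # days
f = box_transit(t, t0=1.2, period=3.3, duration=0.12, depth=0.1**2)
print(f"{(f < 1).sum()} in-transit samples, depth = {1 - f.min():.4f}")
```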
Maximum-likelihood soft-decision decoding of block codes using the A* algorithm
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.
1994-01-01
The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
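A sketch of the tree-search idea for a toy (7,4) Hamming code, written in Python for illustration (the article's implementation handles codes up to length 64). For brevity the heuristic is set to zero, an admissible special case of A* that reduces to uniform-cost search, so the first complete codeword popped is still maximum-likelihood; positions are visited most-reliable first, as described above.

```python
import heapq
import numpy as np
from itertools import product

G = np.array([[1,0,0,0,1,1,0],[0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],[0,0,0,1,1,1,1]])       # (7,4) Hamming generator
codewords = [tuple(np.dot(m, G) % 2) for m in product([0,1], repeat=4)]

def ml_decode(r):
    order = np.argsort(-np.abs(r))             # most reliable positions first
    hard = (np.array(r) < 0).astype(int)       # BPSK mapping: 0 -> +1, 1 -> -1
    # node: (cost g, depth, codewords consistent with the bits fixed so far);
    # fixing bit i against the hard decision costs |r_i|, agreeing costs 0
    heap = [(0.0, 0, codewords)]
    while heap:
        g, depth, cands = heapq.heappop(heap)
        if depth == len(r):
            return cands[0], g                 # first completion is ML
        pos = order[depth]
        for bit in (0, 1):
            sub = [c for c in cands if c[pos] == bit]
            if sub:
                step = abs(r[pos]) if bit != hard[pos] else 0.0
                heapq.heappush(heap, (g + step, depth + 1, sub))

r = [0.9, -1.1, 0.2, 0.8, -0.7, 1.0, -0.05]    # noisy received samples
print(ml_decode(r))
```

With a nontrivial admissible heuristic, far fewer nodes are expanded; that pruning is what makes length-64 codes feasible in the implementation described above.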
Approaches to Flaw-Tolerant Design and Certification of Airframe Components: Report on NACA Data – Task 6
Actis, Ricardo; Szabó, Barna
Engineering Software Research and Development, Inc., 111 West Port Plaza, Suite 825, St. Louis, MO 63146
2017-10-16 (report dated September 26, 2017; revised October 16, 2017)
...parameter A3 of the design curve is negative for 24S-T3. The design curve shown in Figure 41 for Model S2 is comparable with the design curve shown...
Diversity of coding profiles of mechanoreceptors in glabrous skin of kittens.
Gibson, J M; Beitel, R E; Welker, W
1975-03-21
We examined stimulus-response (S-R) profiles of 35 single mechanoreceptive afferent units having small receptive fields in glabrous forepaw skin of 24 anesthetized domestic kittens. Single unit activity was recorded with tungsten microelectrodes from cervical dorsal root ganglia. The study was designed to be as quantitatively descriptive as possible. We indented each unit's receptive field with a broad battery of simple, carefully controlled stimuli whose major parameters, including amplitude, velocity, acceleration, duration, and interstimulus interval, were systematically varied. Stimuli were delivered by a small probe driven by a feedback-controlled axial displacement generator. Single unit discharge data were analyzed by a variety of direct and derived measures including dot patterns, peristimulus histograms, instantaneous and mean instantaneous firing rates, tuning curves, thresholds for amplitude and velocity, adaptation rates, dynamic and static sensitivities, and others. We found that, with respect to any of the S-R transactions examined, the properties of our sample of units were continuously and broadly distributed. Any one unit might exhibit either a slow or rapid rate of adaptation, or might superficially appear to preferentially code a single stimulus parameter such as amplitude or velocity. But when the entire range of responsiveness of units to the entire stimulus battery was surveyed by a variety of analytic techniques, we were unable to find any justifiable basis for designating discrete categories of S-R profiles. Intermediate response types were always found, and in general, all units were both broadly tuned and capable of responding to integrals of several stimulus parameters. Our data argue against the usefulness of evaluating a unit's S-R coding capabilities by means of a limited set of stimulation and response analysis procedures.
NASA Astrophysics Data System (ADS)
Yan, Hongxiang; Sun, Ning; Wigmosta, Mark; Skaggs, Richard; Hou, Zhangshuan; Leung, Ruby
2018-02-01
There is a renewed focus on the design of infrastructure resilient to extreme hydrometeorological events. While precipitation-based intensity-duration-frequency (IDF) curves are commonly used as part of infrastructure design, a large percentage of peak runoff events in snow-dominated regions are caused by snowmelt, particularly during rain-on-snow (ROS) events. In these regions, precipitation-based IDF curves may lead to substantial overestimation/underestimation of design basis events and subsequent overdesign/underdesign of infrastructure. To overcome this deficiency, we proposed next-generation IDF (NG-IDF) curves, which characterize the actual water reaching the land surface. We compared NG-IDF curves to standard precipitation-based IDF curves for estimates of extreme events at 376 Snowpack Telemetry (SNOTEL) stations across the western United States that each had at least 30 years of high-quality records. We found standard precipitation-based IDF curves at 45% of the stations were subject to underdesign, many with significant underestimation of 100 year extreme events, for which the precipitation-based IDF curves can underestimate water potentially available for runoff by as much as 125% due to snowmelt and ROS events. The regions with the greatest potential for underdesign were in the Pacific Northwest, the Sierra Nevada Mountains, and the Middle and Southern Rockies. We also found the potential for overdesign at 20% of the stations, primarily in the Middle Rockies and Arizona mountains. These results demonstrate the need to consider snow processes in the development of IDF curves, and they suggest use of the more robust NG-IDF curves for hydrologic design in snow-dominated environments.
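The frequency-analysis step behind an (NG-)IDF point can be sketched with SciPy's GEV tools; the annual maxima below are synthetic, standing in for one duration's series of water available for runoff at a station.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
annual_max = stats.genextreme.rvs(c=-0.1, loc=40, scale=10,
                                  size=35, random_state=rng)  # mm/day

# fit a GEV to the annual maxima and read off the 100-year return level
shape, loc, scale = stats.genextreme.fit(annual_max)
x100 = stats.genextreme.ppf(1 - 1/100, shape, loc=loc, scale=scale)
print(f"100-year daily design value ~= {x100:.1f} mm/day")
```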
Incorporating Manual and Autonomous Code Generation
NASA Technical Reports Server (NTRS)
McComas, David
1998-01-01
Code can be generated manually or using code-generation software tools, but how do you integrate the two? This article looks at a design methodology that combines object-oriented design with automatic code generation for attitude control flight software. Recent improvements in space flight computers are allowing software engineers to spend more time engineering the applications software. The application developed was the attitude control flight software for an astronomical satellite called the Microwave Anisotropy Probe (MAP). The MAP flight system is being designed, developed, and integrated at NASA's Goddard Space Flight Center. The MAP controls engineers are using Integrated Systems Inc.'s MATRIXx for their controls analysis. In addition to providing a graphical analysis environment, MATRIXx includes an automatic code generation facility called AutoCode. This article examines the forces that shaped the final design and describes three highlights of the design process: (1) defining the manual-to-automatic code interface; (2) applying object-oriented design to the manual flight code; (3) implementing the object-oriented design in C.
Design, Fabrication and Test of Composite Curved Frames for Helicopter Fuselage Structure
NASA Technical Reports Server (NTRS)
Lowry, D. W.; Krebs, N. E.; Dobyns, A. L.
1984-01-01
Aspects of curved beam effects and their importance in designing composite frame structures are discussed. The curved beam effect induces radial flange loadings, which in turn cause flange curling. This curling increases the axial flange stresses and induces transverse bending. These effects are more important in composite structures due to their general inability to redistribute stresses by general yielding, as metal structures do. A detailed finite element analysis was conducted and used in the design of composite curved frame specimens. Five specimens were statically tested, and measured strains were compared with predictions. The curved frame effects must be accurately accounted for to avoid premature fracture; finite element methods can accurately predict most of the stresses, and no elastic relief from curved beam effects occurred in the composite frames tested. Finite element studies are presented for comparative curved beam effects on composite and metal frames.
Conceptual Design of a 100kW Energy Integrated Type Bi-Directional Tidal Current Turbine
NASA Astrophysics Data System (ADS)
Kim, Ki Pyoung; Ahmed, M. Rafiuddin; Lee, Young Ho
2010-06-01
The development of a tidal current turbine that can extract maximum energy from the tidal current will be extremely beneficial for supplying continuous electric power. The present paper presents a conceptual design of a 100 kW energy integrated type tidal current turbine for tidal power generation. The instantaneous power density of a flowing fluid incident on an underwater turbine is proportional to the cube of the current velocity, which is approximately 2.5 m/s. A cross-flow turbine, provided with a nozzle and a diffuser, is designed and analyzed. The potential advantages of ducted and diffuser-augmented turbines were taken into consideration in order to achieve higher output at a relatively low speed. This study looks at a cross-flow turbine system which is placed in an augmentation channel to generate electricity bi-directionally. The compatibility of this turbine system is verified using the commercial CFD code ANSYS CFX. This paper presents the results of the numerical analysis in terms of pressure, streaklines, velocity vectors and performance curves for the energy integrated type bi-directional tidal current turbine (BDT) with the augmentation channel.
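A sketch of the cubic power-density relation: available hydrokinetic power is P = ½ρACpv³. The swept area and power coefficient below are assumed values for illustration, not the turbine's measured characteristics.

```python
rho = 1025.0          # seawater density, kg/m^3
area = 4.0 * 1.5      # capture area, m^2 (assumed)
cp = 0.30             # power coefficient (assumed)

for v in (1.5, 2.0, 2.5, 3.0):                 # current speed, m/s
    p_kw = 0.5 * rho * area * cp * v**3 / 1e3  # cubic dependence on v
    print(f"v = {v:.1f} m/s -> P ~= {p_kw:6.1f} kW")
```

Doubling the current speed multiplies the available power by eight, which is why the augmentation channel's modest velocity increase pays off so strongly.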
NASA Astrophysics Data System (ADS)
Sinha, Gautam
2018-02-01
A concept is presented to design magnets using cylindrical-shaped permanent-magnet blocks, where various types of magnetic fields can be produced by either rotating or varying the size of the magnetic blocks within a given mechanical structure. A general method is introduced to calculate the 3D magnetic field produced by a set of permanent magnets. An analytical expression of the 2D field and the condition to generate various magnetic fields like dipole, quadrupole, and sextupole are derived. Using the 2D result as a starting point, a computer code is developed to get the optimum orientation of the magnets to obtain the user-specific target field profile over a given volume in 3D. Designs of two quadrupole magnets are presented, one using 12 and the other using 24 permanent-magnet blocks. Variation of the quadrupole strength is achieved using tuning coils of a suitable current density and specially designed end tubes. A new concept is introduced to reduce the integrated quadrupole field strength by inserting two hollow cylindrical tubes made of iron, one at each end. This will not affect the field gradient at the center but reduce the integrated field strength by shielding the magnetic field near the ends where the tubes are inserted. The advantages of this scheme are that it is easy to implement, the magnetic axis will not shift, and it will prevent interference with nearby devices. Around 40% integrated field variation is achieved using this method in the present example. To get a realistic estimation of the field quality, a complete 3D model using a nonlinear B-H curve is also studied using a finite-element-based computer code. An example to generate around an 80 T/m quadrupole field gradient is also presented.
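A sketch of the underlying 2D idea, using the classic Halbach orientation rule as an assumed starting point (the paper optimizes orientations numerically): for a quadrupole, a block at ring azimuth θ has its magnetization rotated to 3θ. Each block is approximated as a 2D line dipole and the central gradient is evaluated by superposition; all dimensions and moments are illustrative.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def b_field(pt, blocks):
    """2D field at pt from line dipoles given as [(position, moment/length)]."""
    B = np.zeros(2)
    for pos, m in blocks:
        d = pt - pos
        r2 = d @ d
        rhat = d / np.sqrt(r2)
        B += MU0 / (2 * np.pi * r2) * (2 * (m @ rhat) * rhat - m)
    return B

n_blocks, ring_r, m_mag = 12, 0.05, 2.0e3       # count, ring radius [m], A*m
blocks = []
for i in range(n_blocks):
    th = 2 * np.pi * i / n_blocks
    pos = ring_r * np.array([np.cos(th), np.sin(th)])
    alpha = 3 * th                              # Halbach quadrupole rule
    blocks.append((pos, m_mag * np.array([np.cos(alpha), np.sin(alpha)])))

# quadrupole gradient from a centered finite difference of By along x
eps = 1e-4
grad = (b_field(np.array([eps, 0.0]), blocks)[1]
        - b_field(np.array([-eps, 0.0]), blocks)[1]) / (2 * eps)
print(f"central gradient dBy/dx ~= {grad:.2f} T/m")
```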
NASA Astrophysics Data System (ADS)
Dury, Trevor V.
2006-06-01
The ESS and SINQ Heat Emitting Temperature Sensing Surface (HETSS) mercury experiments have been used to validate the Computational Fluid Dynamics (CFD) code CFX-4 employed in designing the lower region of the international liquid metal cooled MEGAPIE target, to be installed at SINQ, PSI, in 2006. Conclusions were drawn on the best turbulence models and degrees of mesh refinement to apply, and a new CFD model of the MEGAPIE geometry was made, based on the CATIA CAD design of the exact geometry constructed. This model contained the fill and drain tubes as well as the bypass feed duct, with the differences in relative vertical length due to thermal expansion being considered between these tubes and the window. Results of the mercury experiments showed that CFD calculations can be trusted to give peak target window temperature under normal operational conditions to within about ±10%. The target nozzle actually constructed varied from the theoretical design model used for CFD due to the need to apply more generous separation distances between the nozzle and the window. In addition, the bypass duct contraction approaching the nozzle exit was less sharp compared with earlier designs. Both of these changes modified the bypass jet penetration and coverage of the heated window zone. Peak external window temperature with a 1.4 mA proton beam and steady-state operation is now predicted to be 375 °C, with internal temperature 354.0 °C (about 32 °C above earlier predictions). Increasing bypass flow from 2.5 to 3.0 kg/s lowers these peak temperatures by about 12 °C. Stress analysis still needs to be made, based on these thermal data.
Testing and Modeling of a 3-MW Wind Turbine Using Fully Coupled Simulation Codes (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaCava, W.; Guo, Y.; Van Dam, J.
This poster describes the NREL/Alstom Wind testing and model verification of the Alstom 3-MW wind turbine located at NREL's National Wind Technology Center. NREL, in collaboration with ALSTOM Wind, is studying a 3-MW wind turbine installed at the National Wind Technology Center (NWTC). The project analyzes the turbine design using a state-of-the-art simulation code validated with detailed test data. This poster describes the testing and the model validation effort, and provides conclusions about the performance of the unique drive train configuration used in this wind turbine. The 3-MW machine has been operating at the NWTC since March 2011, and drive train measurements will be collected through the spring of 2012. The NWTC testing site has particularly turbulent wind patterns that allow for the measurement of large transient loads and the resulting turbine response. This poster describes the 3-MW turbine test project, the instrumentation installed, and the load cases captured. The design of a reliable wind turbine drive train increasingly relies on the use of advanced simulation to predict structural responses in a varying wind field. This poster presents a fully coupled, aero-elastic and dynamic model of the wind turbine. It also shows the methodology used to validate the model, including the use of measured tower modes, model-to-model comparisons of the power curve, and mainshaft bending predictions for various load cases. The drivetrain is designed to only transmit torque to the gearbox, eliminating non-torque moments that are known to cause gear misalignment. Preliminary results show that the drivetrain is able to divert bending loads in extreme loading cases, and that a significantly smaller bending moment is induced on the mainshaft compared to a three-point mounting design.
SP_Ace: A new code to estimate T_eff, log g, and elemental abundances
NASA Astrophysics Data System (ADS)
Boeche, C.
2016-09-01
SP_Ace is a FORTRAN95 code that derives stellar parameters and elemental abundances from stellar spectra. To derive these parameters, SP_Ace neither measures equivalent widths of lines nor uses templates of synthetic spectra; instead, it employs a new method based on a library of General Curves-Of-Growth (GCOGs). At present, SP_Ace works in the wavelength ranges 5212-6860 Å and 8400-8921 Å, at spectral resolutions of R=2000-20000. Extensions of these limits are possible. SP_Ace is a highly automated code suitable for application to large spectroscopic surveys. A web front end to this service is publicly available at http://dc.g-vo.org/SP_ACE together with the library and the binary code.
Analytic reflected light curves for exoplanets
NASA Astrophysics Data System (ADS)
Haggard, Hal M.; Cowan, Nicolas B.
2018-07-01
The disc-integrated reflected brightness of an exoplanet changes as a function of time due to orbital and rotational motions coupled with an inhomogeneous albedo map. We have previously derived analytic reflected light curves for spherical harmonic albedo maps in the special case of a synchronously rotating planet on an edge-on orbit (Cowan, Fuentes & Haggard). In this paper, we present analytic reflected light curves for the general case of a planet on an inclined orbit, with arbitrary spin period and non-zero obliquity. We do so for two different albedo basis maps: bright points (δ-maps) and spherical harmonics (Y_l^m-maps). In particular, we use Wigner D-matrices to express a harmonic light curve for an arbitrary viewing geometry as a non-linear combination of harmonic light curves for the simpler edge-on, synchronously rotating geometry. These solutions will enable future exploration of the degeneracies and information content of reflected light curves, as well as fast calculation of light curves for mapping exoplanets based on time-resolved photometry. To these ends, we make available Exoplanet Analytic Reflected Lightcurves, a simple open-source code that allows rapid computation of reflected light curves.
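As a numerical companion to these analytic results, the sketch below evaluates the reflected light curve of a single bright point (a δ-map) with a Lambertian kernel on a circular orbit; obliquity is set to zero for brevity, the function and parameter names are illustrative, and none of the authors' Wigner-D machinery is reproduced.

```python
# Minimal numerical sketch (not the analytic Wigner-D solution):
# Lambertian reflected flux from one bright surface point.
import numpy as np

def point_lightcurve(lon0, lat0, inc, n_orb=1.0, n_spin=10.0, n=2000):
    """Relative reflected flux vs. time for one bright surface point.

    lon0, lat0    : initial longitude/latitude of the point [rad]
    inc           : orbital inclination [rad] (pi/2 = edge-on)
    n_orb, n_spin : orbital and spin frequencies [cycles per unit time]
    """
    t = np.linspace(0.0, 1.0 / n_orb, n)
    xi = 2 * np.pi * n_orb * t                 # orbital phase
    # Unit vector toward the star (planet-centred frame, circular orbit)
    s_hat = np.stack([-np.cos(xi), -np.sin(xi), np.zeros_like(xi)], axis=1)
    # Unit vector toward the observer, tilted by the inclination
    o_hat = np.array([0.0, np.sin(inc), np.cos(inc)])
    # Surface normal of the point, rotating with the planet's spin
    lam = lon0 + 2 * np.pi * n_spin * t
    n_hat = np.stack([np.cos(lat0) * np.cos(lam),
                      np.cos(lat0) * np.sin(lam),
                      np.full_like(lam, np.sin(lat0))], axis=1)
    # Lambertian kernel: the point must be illuminated AND visible
    cos_i = np.clip((n_hat * s_hat).sum(axis=1), 0.0, None)
    cos_e = np.clip(n_hat @ o_hat, 0.0, None)
    return t, cos_i * cos_e

t, f = point_lightcurve(lon0=0.3, lat0=0.1, inc=np.pi / 3)
```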
Xie, Guosen; Mo, Zhongxi
2011-01-21
In this article, we introduce three 3D graphical representations of DNA primary sequences, which we call the RY-curve, MK-curve and SW-curve, based on three classifications of the DNA bases. The advantages of our representations are that (i) these 3D curves are strictly non-degenerate and there is no loss of information when transferring a DNA sequence to its mathematical representation, and (ii) the coordinates of every node on these 3D curves have clear biological implications. Two applications of these 3D curves are presented: (a) a simple formula is derived to calculate the content of the four bases (A, G, C and T) from the coordinates of nodes on the curves; and (b) a 12-component characteristic vector is constructed to compare similarity among DNA sequences from different species based on the geometrical centers of the 3D curves. As examples, we examine similarity among the coding sequences of the first exon of the beta-globin gene from eleven species and validate similarity of cDNA sequences of the beta-globin gene from eight species. Copyright © 2010 Elsevier Ltd. All rights reserved.
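To make the idea concrete, the following sketch uses the closely related classic Z-curve, whose three axes encode the same three base classifications (purine/pyrimidine, amino/keto, strong/weak); the coordinate conventions of the paper's RY-, MK- and SW-curves differ in detail, so treat the step vectors below as illustrative. It also demonstrates application (a): recovering the four base counts in closed form from the final node alone.

```python
# Hedged illustration using the classic Z-curve step vectors, not the
# paper's own RY/MK/SW coordinate conventions.
import numpy as np

STEP = {'A': ( 1,  1,  1),   # purine, amino, weak
        'G': ( 1, -1, -1),   # purine, keto,  strong
        'C': (-1,  1, -1),   # pyrimidine, amino, strong
        'T': (-1, -1,  1)}   # pyrimidine, keto,  weak

def curve3d(seq):
    """Cumulative 3D curve; each node reflects base-class running totals."""
    return np.cumsum([STEP[b] for b in seq.upper()], axis=0)

def base_content(seq):
    """Recover (A, G, C, T) counts from the final node alone."""
    n = len(seq)
    x, y, z = curve3d(seq)[-1]
    A = (n + x + y + z) // 4
    G = (n + x - y - z) // 4
    C = (n - x + y - z) // 4
    T = (n - x - y + z) // 4
    return A, G, C, T

assert base_content("ATGCGGTA") == (2, 3, 1, 2)   # A=2, G=3, C=1, T=2
```

Because the four step vectors are linearly independent in this sense, no information is lost: the whole sequence maps to a non-degenerate curve, and simple linear combinations of the final coordinates return the base composition.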
OGLE14-073 - a promising pair-instability supernova candidate
NASA Astrophysics Data System (ADS)
Kozyreva, Alexandra; Kromer, Markus; Noebauer, Ulrich M.; Hirschi, Raphael
2018-05-01
The recently discovered bright type II supernova OGLE14-073 evolved very slowly. The light curve rose to maximum over 90 days from discovery and then declined at a rate compatible with the radioactive decay of 56Co. In this study, we show that a pair-instability supernova is a plausible mechanism for this event. We calculate explosion models and light curves with the radiation hydrodynamics code STELLA starting from two MZAMS = 150 M⊙, Z=0.001 progenitors. We obtain satisfactory fits to the OGLE14-073 broadband light curves by including additional 56Ni in the centre of the models and mixing hydrogen down into the inner layers of the ejecta, to a radial mass coordinate of 10 M⊙. The extra 56Ni required points to a slightly more massive progenitor star. The mixing of hydrogen could be due to large-scale mixing during the explosion. We also present synthetic spectra for our models simulated with the Monte Carlo radiative transfer code ARTIS. The synthetic spectra reproduce the main features of the observed spectra of OGLE14-073. We conclude that OGLE14-073 is one of the most promising candidates for a pair-instability explosion.
Seismic Hazard analysis of Adjaria Region in Georgia
NASA Astrophysics Data System (ADS)
Jorjiashvili, Nato; Elashvili, Mikheil
2014-05-01
The most commonly used approach to determining seismic-design loads for engineering projects is probabilistic seismic-hazard analysis (PSHA). The primary output from a PSHA is a hazard curve showing the variation of a selected ground-motion parameter, such as peak ground acceleration (PGA) or spectral acceleration (SA), against the annual frequency of exceedance (or its reciprocal, return period). The design value is the ground-motion level that corresponds to a preselected design return period. For many engineering projects, such as standard buildings and typical bridges, the seismic loading is taken from the appropriate seismic-design code, the basis of which is usually a PSHA. For more important engineering projects, where the consequences of failure are more serious (such as dams and chemical plants), it is more usual to obtain the seismic-design loads from a site-specific PSHA, in general using much longer return periods than those governing code-based design. Calculation of probabilistic seismic hazard was performed using the software CRISIS2007 by Ordaz, M., Aguilar, A., and Arboleda, J., Instituto de Ingeniería, UNAM, Mexico. CRISIS implements a classical probabilistic seismic hazard methodology where seismic sources can be modelled as points, lines and areas. In the case of area sources, the software offers an integration procedure that takes advantage of a triangulation algorithm used for seismic source discretization. This solution improves calculation efficiency while maintaining a reliable description of source geometry and seismicity. Additionally, supplementary filters (e.g. fixing a site-source distance beyond which sources are excluded from the calculation) allow the program to balance precision and efficiency during hazard calculation. Earthquake temporal occurrence is assumed to follow a Poisson process, and the code supports two types of magnitude-frequency distributions (MFDs): a truncated exponential Gutenberg-Richter [1944] magnitude distribution and a characteristic magnitude distribution [Youngs and Coppersmith, 1985]. Notably, the software can deal with uncertainty in the seismicity input parameters, such as the maximum magnitude value. CRISIS offers a set of built-in GMPEs, as well as the possibility of defining new ones by providing information in a tabular format. Our study shows that in the case of the Ajaristkali HPP study area, a significant contribution to seismic hazard comes from local sources with quite low Mmax values; thus the two attenuation laws considered give us quite different PGA and SA values.
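For orientation, the toy sketch below assembles a hazard curve in the classical way described above: a truncated exponential Gutenberg-Richter MFD for one point source, a lognormal GMPE with placeholder coefficients, and Poissonian occurrence. It is a minimal illustration, not the CRISIS2007 implementation.

```python
# Toy PSHA under stated assumptions: single source at fixed distance,
# truncated G-R recurrence, placeholder lognormal GMPE, Poisson process.
import numpy as np
from scipy.stats import norm

def hazard_curve(pga_levels, a=4.0, b=1.0, m_min=4.5, m_max=7.5,
                 r_km=20.0, sigma_ln=0.6, n_bins=60):
    """Annual rate of exceeding each PGA level [g] from one source."""
    edges = np.linspace(m_min, m_max, n_bins + 1)
    N = lambda m: 10.0 ** (a - b * m)        # cumulative G-R rate (per yr)
    rates = N(edges[:-1]) - N(edges[1:])     # incremental rate per M bin
    mags = 0.5 * (edges[:-1] + edges[1:])
    # Placeholder GMPE: ln PGA = -3.5 + 0.9*M - 1.2*ln(r + 10), sigma_ln
    ln_med = -3.5 + 0.9 * mags - 1.2 * np.log(r_km + 10.0)
    lam = np.zeros_like(pga_levels)
    for lm, r in zip(ln_med, rates):
        lam += r * norm.sf((np.log(pga_levels) - lm) / sigma_ln)
    return lam

pga = np.logspace(-2, 0.3, 50)               # 0.01 g to ~2 g
lam = hazard_curve(pga)                      # annual exceedance rates
p50 = 1.0 - np.exp(-lam * 50.0)              # Poisson: P(exceed in 50 yr)
```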
Kimura, Yasumasa; Soma, Takahiro; Kasahara, Naoko; Delobel, Diane; Hanami, Takeshi; Tanaka, Yuki; de Hoon, Michiel J L; Hayashizaki, Yoshihide; Usui, Kengo; Harbers, Matthias
2016-01-01
Analytical PCR experiments preferably use internal probes for monitoring the amplification reaction and specific detection of the amplicon. Such internal probes have to be designed in close context with the amplification primers, and may require additional considerations for the detection of genetic variations. Here we describe Edesign, a new online and stand-alone tool for designing sets of PCR primers together with an internal probe for conducting quantitative real-time PCR (qPCR) and genotyping experiments. Edesign can be used for selecting standard DNA oligonucleotides, such as TaqMan probes, but has been further extended with new functions and enhanced design features for Eprobes. Eprobes, with their single thiazole orange-labelled nucleotide, allow for highly sensitive genotyping assays because of their higher DNA binding affinity compared to standard DNA oligonucleotides. Using new thermodynamic parameters, Edesign considers unique features of Eprobes during primer and probe design for establishing qPCR experiments and genotyping by melting curve analysis. Additional functions in Edesign allow probe design for effective discrimination between wild-type sequences and genetic variations, using either standard DNA oligonucleotides or Eprobes. Edesign can be freely accessed online at http://www.dnaform.com/edesign2/, and the source code is available for download.
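As a hedged illustration of the kind of thermodynamic screening involved, the snippet below applies the classic Wallace rule and a GC-content check to a candidate primer; Edesign itself uses nearest-neighbour parameters, including Eprobe-specific ones, which are not reproduced here.

```python
# Quick primer sanity checks; Edesign's actual thermodynamics differ.
def wallace_tm(seq):
    """Approximate melting temperature [C] for short oligos (< ~14 nt)."""
    s = seq.upper()
    at = s.count('A') + s.count('T')
    gc = s.count('G') + s.count('C')
    return 2 * at + 4 * gc          # Wallace rule: 2(A+T) + 4(G+C)

def gc_fraction(seq):
    s = seq.upper()
    return (s.count('G') + s.count('C')) / len(s)

primer = "ACGTGCCATGTA"             # illustrative candidate
print(wallace_tm(primer), round(gc_fraction(primer), 2))
```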
A product of independent beta probabilities dose escalation design for dual-agent phase I trials.
Mander, Adrian P; Sweeting, Michael J
2015-04-15
Dual-agent trials are now increasingly common in oncology research, and many proposed dose-escalation designs are available in the statistical literature. Despite this, the translation from statistical design to practical application is slow, as has been highlighted in single-agent phase I trials, where a 3 + 3 rule-based design is often still used. To expedite this process, new dose-escalation designs need to be not only scientifically beneficial but also easy to understand and implement by clinicians. In this paper, we propose a curve-free (nonparametric) design for a dual-agent trial in which the model parameters are the probabilities of toxicity at each of the dose combinations. We show that it is relatively trivial for a clinician's prior beliefs or historical information to be incorporated in the model, and that updating is fast and computationally simple through the use of conjugate Bayesian inference. Monotonicity is ensured by considering only a set of monotonic contours for the distribution of the maximum tolerated contour, which defines the dose-escalation decision process. Varied experimentation around the contour is achievable, and multiple dose combinations can be recommended to take forward to phase II. Code for R, Stata and Excel is available for implementation. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
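A minimal sketch of the conjugate updating described above, with placeholder numbers: each dose combination carries an independent Beta prior on its toxicity probability, updated in closed form after each cohort. The monotonic-contour machinery that drives escalation decisions in the actual design is omitted.

```python
# Conjugate Beta-binomial updating on a grid of dose combinations;
# priors and cohort data are illustrative, not the paper's values.
import numpy as np
from scipy.stats import beta

# Prior pseudo-counts (a: toxicities, b: non-toxicities) on a 3x3 grid
a = np.full((3, 3), 0.5)
b = np.full((3, 3), 1.5)

def update(a, b, i, j, n_tox, n_pat):
    """Closed-form posterior after treating a cohort at combination (i, j)."""
    a[i, j] += n_tox
    b[i, j] += n_pat - n_tox
    return a, b

a, b = update(a, b, 0, 0, n_tox=1, n_pat=3)   # 1 toxicity in 3 patients
# Posterior probability each combination exceeds a 33% toxicity target
p_over = beta.sf(0.33, a, b)
```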
Maximum safe speed estimation using planar quintic Bezier curve with C2 continuity
NASA Astrophysics Data System (ADS)
Ibrahim, Mohamad Fakharuddin; Misro, Md Yushalify; Ramli, Ahmad; Ali, Jamaludin Md
2017-08-01
This paper describes an alternative way of estimating the design speed, the maximum speed at which a vehicle can safely drive on a road, using curvature information from Bezier curve fitting on a map. We tested the method on a route along Tun Sardon Road, Balik Pulau, Penang, Malaysia. We propose using piecewise planar quintic Bezier curves, satisfying curvature continuity between joined curves, in the process of mapping the road. By taking the derivatives of the quintic Bezier curve, the curvature was calculated and the design speed derived. In this paper, a higher order of Bezier curve is used: a higher-degree curve gives the user more freedom to control the shape of the curve compared with a lower-degree curve.
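The computation chain described above can be sketched as follows, assuming the standard horizontal-curve relation v = sqrt(127 R (e + f)) km/h with placeholder superelevation e and side-friction f values; the control points are invented for illustration, not the Tun Sardon Road data.

```python
# Curvature of a planar quintic Bezier curve -> safe-speed estimate.
import numpy as np
from scipy.special import comb

def bezier(P, t, k=0):
    """Evaluate the k-th derivative of a Bezier curve at parameters t."""
    Q = np.asarray(P, dtype=float)
    for _ in range(k):                     # each derivative lowers degree
        Q = (len(Q) - 1) * np.diff(Q, axis=0)
    n = len(Q) - 1
    B = np.array([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)])
    return np.tensordot(B, Q, axes=(0, 0))

def design_speed(P, e=0.06, f=0.14, n=400):
    t = np.linspace(0.0, 1.0, n)
    d1, d2 = bezier(P, t, 1), bezier(P, t, 2)
    kappa = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) \
            / (d1[:, 0]**2 + d1[:, 1]**2) ** 1.5
    R = 1.0 / np.maximum(kappa, 1e-12)     # radius of curvature [m]
    return np.sqrt(127.0 * R * (e + f)).min()   # tightest bend governs

P5 = [(0, 0), (40, 5), (80, 30), (120, 55), (160, 60), (200, 60)]  # metres
print(f"max safe speed ~ {design_speed(P5):.0f} km/h")
```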
Performance characteristics of the Cooper PC-9 centrifugal compressor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, R.E.; Neely, R.F.
1988-06-30
Mathematical performance modeling of the PC-9 centrifugal compressor has been completed. Performance characteristic curves have never been obtained for these compressors in test loops with the same degree of accuracy as for the uprated axial compressors and, consequently, computer modeling of the top cascade and purge cascades has been very difficult and of limited value. This compressor modeling work has been carried out in an attempt to generate data which would more accurately define the compressor's performance and would permit more accurate cascade modeling. A computer code, COMPAL, was used to mathematically model the PC-9 performance with variations in gas composition, flow ratios, pressure ratios, speed and temperature. The results of this effort, in the form of graphs, with information about the compressor and the code, are the subject of this report. Compressor characteristic curves are featured. 13 figs.
UTM: Universal Transit Modeller
NASA Astrophysics Data System (ADS)
Deeg, Hans J.
2014-12-01
The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. A separate fitting program, UFIT (Universal Fitter) is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.
Incremental triangulation by way of edge swapping and local optimization
NASA Technical Reports Server (NTRS)
Wiltberger, N. Lyn
1994-01-01
This document is intended to serve as an installation, usage, and basic theory guide for the two-dimensional triangulation software 'HARLEY' written for the Silicon Graphics IRIS workstation. This code consists of an incremental triangulation algorithm based on point insertion and local edge swapping. Using this basic strategy, several types of triangulations can be produced depending on user-selected options. For example, local edge swapping criteria can be chosen which minimize the maximum interior angle (a MinMax triangulation) or which maximize the minimum interior angle (a MaxMin or Delaunay triangulation). It should be noted that the MinMax triangulation is generally only locally optimal (not globally optimal) in this measure. The MaxMin triangulation, however, is both locally and globally optimal. In addition, Steiner triangulations can be constructed by inserting new sites at triangle circumcenters followed by edge swapping based on the MaxMin criterion. Incremental insertion of sites also provides flexibility in choosing cell refinement criteria. A dynamic heap structure has been implemented in the code so that once a refinement measure is specified (i.e., maximum aspect ratio or some measure of a solution gradient for solution-adaptive grid generation) the cell with the largest value of this measure is continually removed from the top of the heap and refined. The heap refinement strategy allows the user either to specify the number of cells desired or to refine the mesh until all cell refinement measures satisfy a user-specified tolerance level. Since the dynamic heap structure is constantly updated, the algorithm always refines the particular cell in the mesh with the largest refinement criterion value. The code allows the user to: triangulate a cloud of prespecified points (sites); triangulate a set of prespecified interior points constrained by prespecified boundary curve(s); Steiner triangulate the interior/exterior of prespecified boundary curve(s); refine existing triangulations based on solution error measures; and partition meshes based on the Cuthill-McKee, spectral, and coordinate bisection strategies.
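The swap decision at the heart of such algorithms can be sketched generically; for the MaxMin (Delaunay) criterion the standard incircle predicate suffices. This is an illustration, not an excerpt from HARLEY.

```python
# Delaunay (MaxMin) edge-swap test for the quad formed by two triangles
# (a, b, c) and (a, b, d) sharing the diagonal ab.
import numpy as np

def orient(a, b, c):
    """Twice the signed area of triangle abc (> 0 if counter-clockwise)."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_circumcircle(a, b, c, d):
    """True if d lies strictly inside the circumcircle of ccw triangle abc."""
    m = np.array([[a[0]-d[0], a[1]-d[1], (a[0]-d[0])**2 + (a[1]-d[1])**2],
                  [b[0]-d[0], b[1]-d[1], (b[0]-d[0])**2 + (b[1]-d[1])**2],
                  [c[0]-d[0], c[1]-d[1], (c[0]-d[0])**2 + (c[1]-d[1])**2]])
    return np.linalg.det(m) > 0.0

def should_swap(a, b, c, d):
    """Swap diagonal ab to cd when d violates the circumcircle of abc."""
    if orient(a, b, c) < 0:          # enforce counter-clockwise ordering
        a, b = b, a
    return in_circumcircle(a, b, c, d)
```

Repeatedly applying this local test after each point insertion restores the Delaunay property; a MinMax variant simply replaces the predicate with a comparison of the maximum interior angles of the two candidate configurations.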
Channel modeling, signal processing and coding for perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
Strategies to Save 50% Site Energy in Grocery and General Merchandise Stores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirsch, A.; Hale, E.; Leach, M.
2011-03-01
This paper summarizes the methodology and main results of two recently published Technical Support Documents. These reports explore the feasibility of designing general merchandise and grocery stores that use half the energy of a minimally code-compliant building, as measured on a whole-building basis. We used an optimization algorithm to trace out a minimum cost curve and identify designs that satisfy the 50% energy savings goal. We started from baseline building energy use and progressed to more energy-efficient designs by sequentially adding energy design measures (EDMs). Certain EDMs figured prominently in reaching the 50% energy savings goal for both building types: (1) reduced lighting power density; (2) optimized area fraction and construction of view glass or skylights, or both, as part of a daylighting system tuned to 46.5 fc (500 lux); (3) reduced infiltration with a main entrance vestibule or an envelope air barrier, or both; and (4) energy recovery ventilators, especially in humid and cold climates. In grocery stores, the most effective EDM, which was chosen for all climates, was replacing baseline medium-temperature refrigerated cases with high-efficiency models that have doors.
NASA Technical Reports Server (NTRS)
Hasan, Hashima (Technical Monitor); Kirby, K.; Babb, J.; Yoshino, K.
2005-01-01
We report on progress made in a joint program of theoretical and experimental research to study the line-broadening of alkali atom resonance lines due to collisions with species such as helium and molecular hydrogen. Accurate knowledge of the line profiles of Na and K as a function of temperature and pressure will allow such lines to serve as valuable diagnostics of the atmospheres of brown dwarfs and extra-solar giant planets. A new experimental apparatus has been designed, built and tested over the past year, and we are poised to begin collecting data on the first system of interest, the potassium resonance lines perturbed by collisions with helium. On the theoretical front, calculations of line-broadening due to sodium collisions with helium are nearly complete, using accurate molecular potential energy curves and transition moments just recently computed for this system. In addition we have completed calculations of the three relevant potential energy curves and associated transition moments for K - He, using the MOLPRO quantum chemistry codes. Currently, calculations of the potential surfaces describing K-H2 are in progress.
Computer user's manual for a generalized curve fit and plotting program
NASA Technical Reports Server (NTRS)
Schlagheck, R. A.; Beadle, B. D., II; Dolerhie, B. D., Jr.; Owen, J. W.
1973-01-01
A FORTRAN-coded program has been developed for generating plotted output graphs on 8-1/2 by 11-inch paper. The program is designed to be used by engineers, scientists, and non-programming personnel on any IBM 1130 system that includes a 1627 plotter. The program has been written to provide a fast and efficient method of displaying plotted data without having to write any additional code. Various output options are available to the program user for displaying data in four different types of formatted plots. These options include discrete, linear, continuous, and histogram graphical outputs. The manual contains information about the use and operation of this program. A mathematical description of the least-squares goodness-of-fit test is presented. A program listing is also included.
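A modern stand-in for the manual's workflow (the original is FORTRAN for an IBM 1130 with a 1627 pen plotter) might look like the following sketch, with a least-squares fit and a simple goodness-of-fit statistic; names and data are illustrative.

```python
# Least-squares line fit plus an R^2 goodness-of-fit number, plotted.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 10.0, 25)
y = 3.0 + 1.5 * x + np.random.normal(scale=1.0, size=x.size)

coeffs = np.polyfit(x, y, deg=1)            # least-squares fit
resid = y - np.polyval(coeffs, x)
ss_res = np.sum(resid**2)
ss_tot = np.sum((y - y.mean())**2)
r2 = 1.0 - ss_res / ss_tot                  # goodness of fit

plt.plot(x, y, 'o', label='data')
plt.plot(x, np.polyval(coeffs, x), label=f'fit, R^2 = {r2:.3f}')
plt.legend()
plt.savefig('fit.png')                      # 8.5 x 11 paper not required
```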
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)
1998-01-01
The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady-state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, artificial compressibility was very efficient in terms of computing time and robustness. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.
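For reference, the two formulations being compared can be summarised by their standard textbook forms; this is a sketch, with β the artificial compressibility parameter, τ the pseudo-time, and u* the unprojected predictor velocity.

```latex
% Artificial compressibility: pseudo-time relaxation of the pressure
\frac{1}{\beta}\,\frac{\partial p}{\partial \tau} + \nabla\cdot\mathbf{u} = 0

% Pressure projection: a Poisson solve corrects the predictor velocity
\nabla^2 p^{\,n+1} = \frac{\rho}{\Delta t}\,\nabla\cdot\mathbf{u}^{*},\qquad
\mathbf{u}^{n+1} = \mathbf{u}^{*} - \frac{\Delta t}{\rho}\,\nabla p^{\,n+1}
```

The pseudo-time term must be iterated to convergence within each physical time step, which is why artificial compressibility loses ground for unsteady flows with small physical time steps, while the projection method pays one Poisson solve per step.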
HydroApps: An R package for statistical simulation to use in regional analysis
NASA Astrophysics Data System (ADS)
Ganora, D.
2013-12-01
The HydroApps package is a newborn R extension initially developed to support the use of a recent model for flood frequency estimation developed for applications in Northwestern Italy; it also contains some general tools for regional analyses and can be easily extended to include other statistical models. The package is currently at an experimental level of development. HydroApps is a corollary of the SSEM project for regional flood frequency analysis, although it was developed independently to support various instances of regional analyses. Its aim is to provide a basis for interplay between statistical simulation and practical operational use. In particular, the main module of the package deals with building the confidence bands of flood frequency curves expressed by means of their L-moments. Other functions include pre-processing and visualization of hydrologic time series and analysis of the optimal design flood under uncertainty, as well as tools useful in water resources management for the estimation of flow duration curves and their sensitivity to water withdrawals. Particular attention is devoted to code granularity, i.e. the level of detail and aggregation of the code: greater detail means more low-level functions, which entails more flexibility but reduces ease of use in practice. A balance between detail and simplicity is necessary and can be achieved with appropriate wrapping functions and specific help pages for each working block. From a more general viewpoint, the package does not have a truly user-friendly interface, but it runs on multiple operating systems and is easy to update, like many other open-source projects. The HydroApps functions and their features are reported in order to share ideas and materials to improve the 'technological' and information transfer between scientific communities and final users such as policy makers.
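As a flavour of the water-management tools mentioned, here is a minimal flow duration curve sketch (in Python rather than the package's R), using the Weibull plotting position on synthetic data.

```python
# Flow duration curve: sorted flows vs. exceedance probability.
import numpy as np

def flow_duration_curve(q):
    """Return (exceedance probability, sorted flows), highest flow first."""
    q = np.sort(np.asarray(q))[::-1]
    p = np.arange(1, q.size + 1) / (q.size + 1.0)   # Weibull position
    return p, q

daily_q = np.random.lognormal(mean=2.0, sigma=0.8, size=365)  # m^3/s, synthetic
p, q = flow_duration_curve(daily_q)
q95 = np.interp(0.95, p, q)    # flow exceeded 95% of the time
```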
The Simpsons program 6-D phase space tracking with acceleration
NASA Astrophysics Data System (ADS)
Machida, S.
1993-12-01
A particle tracking code, Simpsons, covering 6-D phase space including energy ramping has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as real machines, unlike existing tracking codes for synchrotrons, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as functions of time are read from an input file defining a machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
The IAEA neutron coincidence counting (INCC) and the DEMING least-squares fitting programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krick, M.S.; Harker, W.C.; Rinard, P.M.
1998-12-01
Two computer programs are described: (1) the INCC (IAEA or International Neutron Coincidence Counting) program and (2) the DEMING curve-fitting program. The INCC program is an IAEA version of the Los Alamos NCC (Neutron Coincidence Counting) code. The DEMING program is an upgrade of earlier Windows® and DOS codes with the same name. The versions described are INCC 3.00 and DEMING 1.11. The INCC and DEMING codes provide inspectors with the software support needed to perform calibration and verification measurements with all of the neutron coincidence counting systems used in IAEA inspections for the nondestructive assay of plutonium and uranium.
Automating the generation of finite element dynamical cores with Firedrake
NASA Astrophysics Data System (ADS)
Ham, David; Mitchell, Lawrence; Homolya, Miklós; Luporini, Fabio; Gibson, Thomas; Kelly, Paul; Cotter, Colin; Lange, Michael; Kramer, Stephan; Shipton, Jemma; Yamazaki, Hiroe; Paganini, Alberto; Kärnä, Tuomas
2017-04-01
The development of a dynamical core is an increasingly complex software engineering undertaking. As the equations become more complete, the discretisations more sophisticated, and the hardware acquires ever more fine-grained parallelism and deeper memory hierarchies, the problem of building, testing and modifying dynamical cores becomes increasingly complex. Here we present Firedrake, a code generation system for the finite element method with specialist features designed to support the creation of geoscientific models. Using Firedrake, the dynamical core developer writes the partial differential equations in weak form in a high-level mathematical notation. Appropriate function spaces are chosen and time stepping loops written at the same high level. When the programme is run, Firedrake generates high-performance C code for the resulting numerics, which is executed in parallel. Models in Firedrake typically take a tiny fraction of the lines of code required by traditional hand-coding techniques. They support more sophisticated numerics than are easily achieved by hand, and the resulting code is frequently higher performance. Critically, debugging, modifying and extending a model written in Firedrake is vastly easier than by traditional methods due to the small, highly mathematical code base. Firedrake supports a wide range of key features for dynamical core creation: a vast range of discretisations, including both continuous and discontinuous spaces and mimetic (C-grid-like) elements which optimally represent force balances in geophysical flows; high-aspect-ratio layered meshes suitable for ocean and atmosphere domains; curved elements for high-accuracy representations of the sphere; support for non-finite-element operators, such as parametrisations; access to PETSc, a world-leading library of programmable linear and nonlinear solvers; and high-performance adjoint models generated automatically by symbolically reasoning about the forward model. This poster will present the key features of the Firedrake system, as well as those of Gusto, an atmospheric dynamical core, and Thetis, a coastal ocean model, both of which are written in Firedrake.
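A hedged taste of the high-level notation described above, adapted from Firedrake's introductory examples (a Helmholtz-type problem); API details may vary between Firedrake versions.

```python
# Sketch adapted from Firedrake's introductory Helmholtz demo.
from firedrake import *  # noqa: F401,F403

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)
u = TrialFunction(V)
v = TestFunction(V)

x, y = SpatialCoordinate(mesh)
f = Function(V)
f.interpolate((1 + 8 * pi**2) * cos(2 * pi * x) * cos(2 * pi * y))

# The weak form, written essentially as it appears on paper
a = (inner(grad(u), grad(v)) + u * v) * dx
L = f * v * dx

uh = Function(V)
solve(a == L, uh)   # Firedrake generates and runs the low-level C here
```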
System Design for FEC in Aeronautical Telemetry
2012-03-12
rate punctured convolutional codes for soft-decision Viterbi... below follows that given in [8]. The final coding rate of exactly 2/3 is achieved by puncturing the rate-1/2 code as follows. We begin with the buffer c1... concatenated convolutional code (SCCC). The contributions of this paper are at the system-design level. One major contribution is to design an SCCC code
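For illustration only: puncturing a rate-1/2 mother code up to rate 2/3 amounts to deleting one of every four output bits. The pattern below is a common textbook choice, not necessarily the one used in this report.

```python
# Rate-1/2 -> rate-2/3 puncturing: 2 info bits yield 4 coded bits,
# of which 3 are transmitted. Pattern is illustrative.
P = [1, 1, 1, 0]   # keep, keep, keep, drop

def puncture(coded_bits):
    return [b for i, b in enumerate(coded_bits) if P[i % len(P)]]

# 2 info bits -> 4 coded bits -> 3 transmitted bits: overall rate 2/3
assert len(puncture([0, 1, 1, 0])) == 3
```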
NASA Astrophysics Data System (ADS)
Moignier, Cyril; Tromson, Dominique; de Marzi, Ludovic; Marsolat, Fanny; García Hernández, Juan Carlos; Agelou, Mathieu; Pomorski, Michal; Woo, Romuald; Bourbotte, Jean-Michel; Moignau, Fabien; Lazaro, Delphine; Mazal, Alejandro
2017-07-01
The scope of this work was to develop a synthetic single crystal diamond dosimeter (SCDD-Pro) for accurate relative dose measurements of clinical proton beams in water. Monte Carlo simulations were carried out with the MCNPX code in order to investigate and reduce the dose curve perturbation caused by the SCDD-Pro. In particular, various diamond thicknesses were simulated to evaluate the influence of the active volume thickness (e_AV) as well as the influence of the addition of a front silver resin (250 µm in thickness in front of the diamond crystal) on depth-dose curves. The simulations indicated that the diamond crystal alone, with a small e_AV of just 5 µm, already affects the dose at the Bragg peak position (Bragg peak dose) by more than 2% with respect to the Bragg peak dose deposited in water. The optimal design that resulted from the Monte Carlo simulations consists of a diamond crystal 1 mm in width and 150 µm in thickness with the front silver resin, enclosed by a water-equivalent packaging. This design leads to a deviation between the Bragg peak dose from the full detector modeling and the Bragg peak dose deposited in water of less than 1.2%. Based on those optimizations, an SCDD-Pro prototype was built and evaluated in broad passive scattering proton beams. The experimental evaluation showed SCDD-Pro repeatability, dose rate dependence and linearity better than 0.2%, 0.4% (in the 1.0-5.5 Gy min^-1 range) and 0.4% (for doses higher than 0.05 Gy), respectively. The depth-dose curves in the 90-160 MeV energy range, measured with the SCDD-Pro without applying any correction, were in good agreement with those measured using a commercial IBA PPC05 plane-parallel ionization chamber, differing by less than 1.6%. The experimental results confirmed that this SCDD-Pro is suitable for measurements with standard electrometers and that the depth-dose curve perturbation is negligible, with no energy dependence and no significant dose rate dependence.
NASA Astrophysics Data System (ADS)
Yu, Jia-Feng; Sui, Tian-Xiang; Wang, Hong-Mei; Wang, Chun-Ling; Jing, Li; Wang, Ji-Hua
2015-12-01
Agrobacterium tumefaciens strain C58 is a pathogen that can cause tumors in some dicotyledonous plants. Ever since the genome of A. tumefaciens strain C58 was sequenced, the quality of annotation of its protein-coding genes has been questioned repeatedly, because the annotation varies greatly among different databases. In this paper, the questionable hypothetical genes were re-predicted by integrating the TN curve and Z curve methods. As a result, 30 genes originally annotated as "hypothetical" were identified as non-coding sequences. By testing the re-prediction program 10 times on data sets composed of genes with known function, a mean accuracy of 99.99% and a mean Matthews correlation coefficient of 0.9999 were obtained. Further sequence analysis and COG analysis showed that the re-annotation results are very reliable. This work provides an efficient tool and data resources for future studies of A. tumefaciens strain C58. Project supported by the National Natural Science Foundation of China (Grant Nos. 61302186 and 61271378) and the Funding from the State Key Laboratory of Bioelectronics of Southeast University.
Roles for Coincidence Detection in Coding Amplitude-Modulated Sounds
Ashida, Go; Kretzberg, Jutta; Tollin, Daniel J.
2016-01-01
Many sensory neurons encode temporal information by detecting coincident arrivals of synaptic inputs. In the mammalian auditory brainstem, binaural neurons of the medial superior olive (MSO) are known to act as coincidence detectors, whereas in the lateral superior olive (LSO) roles of coincidence detection have remained unclear. LSO neurons receive excitatory and inhibitory inputs driven by ipsilateral and contralateral acoustic stimuli, respectively, and vary their output spike rates according to interaural level differences. In addition, LSO neurons are also sensitive to binaural phase differences of low-frequency tones and envelopes of amplitude-modulated (AM) sounds. Previous physiological recordings in vivo found considerable variations in monaural AM-tuning across neurons. To investigate the underlying mechanisms of the observed temporal tuning properties of LSO and their sources of variability, we used a simple coincidence counting model and examined how specific parameters of coincidence detection affect monaural and binaural AM coding. Spike rates and phase-locking of evoked excitatory and spontaneous inhibitory inputs had only minor effects on LSO output to monaural AM inputs. In contrast, the coincidence threshold of the model neuron affected both the overall spike rates and the half-peak positions of the AM-tuning curve, whereas the width of the coincidence window merely influenced the output spike rates. The duration of the refractory period affected only the low-frequency portion of the monaural AM-tuning curve. Unlike monaural AM coding, temporal factors, such as the coincidence window and the effective duration of inhibition, played a major role in determining the trough positions of simulated binaural phase-response curves. In addition, empirically-observed level-dependence of binaural phase-coding was reproduced in the framework of our minimalistic coincidence counting model. These modeling results suggest that coincidence detection of excitatory and inhibitory synaptic inputs is essential for LSO neurons to encode both monaural and binaural AM sounds. PMID:27322612
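The coincidence counting model referred to above can be sketched minimally as follows: an output spike fires whenever at least `theta` input spikes fall within a window `w`, subject to an absolute refractory period. Parameter values are illustrative, not fitted LSO values.

```python
# Minimal coincidence-counting neuron on a merged input spike train.
import numpy as np

def coincidence_counter(spike_times, w=0.5e-3, theta=3, refractory=1e-3):
    """spike_times: sorted 1-D array of merged input spike times [s]."""
    out, last = [], -np.inf
    j = 0                                   # start index of sliding window
    for i, t in enumerate(spike_times):
        while spike_times[j] < t - w:       # drop inputs older than t - w
            j += 1
        if i - j + 1 >= theta and t - last >= refractory:
            out.append(t)                   # threshold met: output spike
            last = t
    return np.array(out)

rng = np.random.default_rng(0)
inputs = np.sort(rng.uniform(0.0, 1.0, size=2000))  # ~2 kHz merged drive
print(coincidence_counter(inputs).size, "output spikes")
```

Raising `theta` shifts the model's AM-tuning curve as the abstract describes, while widening `w` mainly scales the output rate; inhibition would enter as spikes that subtract from the window count.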
Common Envelope Light Curves. I. Grid-code Module Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galaviz, Pablo; Marco, Orsola De; Staff, Jan E.
The common envelope (CE) binary interaction occurs when a star transfers mass onto a companion that cannot fully accrete it. The interaction can lead to a merger of the two objects or to a close binary. The CE interaction is the gateway of all evolved compact binaries, all stellar mergers, and likely many of the stellar transients witnessed to date. CE simulations are needed to understand this interaction and to interpret stars and binaries thought to be the byproduct of this stage. At this time, simulations are unable to reproduce the few observational data available and several ideas have been put forward to address their shortcomings. The need for more definitive simulation validation is pressing and is already being fulfilled by observations from time-domain surveys. In this article, we present an initial method and its implementation for post-processing grid-based CE simulations to produce the light curve so as to compare simulations with upcoming observations. Here we implemented a zeroth order method to calculate the light emitted from CE hydrodynamic simulations carried out with the 3D hydrodynamic code Enzo used in unigrid mode. The code implements an approach for the computation of luminosity in both optically thick and optically thin regimes and is tested using the first 135 days of the CE simulation of Passy et al., where a 0.8 M⊙ red giant branch star interacts with a 0.6 M⊙ companion. This code is used to highlight two large obstacles that need to be overcome before realistic light curves can be calculated. We explain the nature of these problems and the attempted solutions and approximations in full detail to enable the next step to be identified and implemented. We also discuss our simulation in relation to recent data of transients identified as CE interactions.
RAPTOR. I. Time-dependent radiative transfer in arbitrary spacetimes
NASA Astrophysics Data System (ADS)
Bronzwaer, T.; Davelaar, J.; Younsi, Z.; Mościbrodzka, M.; Falcke, H.; Kramer, M.; Rezzolla, L.
2018-05-01
Context. Observational efforts to image the immediate environment of a black hole at the scale of the event horizon benefit from the development of efficient imaging codes that are capable of producing synthetic data, which may be compared with observational data. Aims: We aim to present RAPTOR, a new public code that produces accurate images, animations, and spectra of relativistic plasmas in strong gravity by numerically integrating the equations of motion of light rays and performing time-dependent radiative transfer calculations along the rays. The code is compatible with any analytical or numerical spacetime. It is hardware-agnostic and may be compiled and run both on GPUs and CPUs. Methods: We describe the algorithms used in RAPTOR and test the code's performance. We have performed a detailed comparison of RAPTOR output with that of other radiative-transfer codes and demonstrate convergence of the results. We then applied RAPTOR to study accretion models of supermassive black holes, performing time-dependent radiative transfer through general relativistic magneto-hydrodynamical (GRMHD) simulations and investigating the expected observational differences between the so-called fast-light and slow-light paradigms. Results: Using RAPTOR to produce synthetic images and light curves of a GRMHD model of an accreting black hole, we find that the relative difference between fast-light and slow-light light curves is less than 5%. Using two distinct radiative-transfer codes to process the same data, we find integrated flux densities with a relative difference less than 0.01%. Conclusions: For two-dimensional GRMHD models, such as those examined in this paper, the fast-light approximation suffices as long as errors of a few percent are acceptable. The convergence of the results of two different codes demonstrates that they are, at a minimum, consistent. The public version of RAPTOR is available at the following URL: https://github.com/tbronzwaer/raptor
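The along-ray update that any such radiative transfer code performs can be sketched with the formal solution of dI/ds = j - αI over a cell with constant coefficients; geodesic integration and GRMHD coupling are well beyond this illustration, and the values below are toys.

```python
# Formal solution of the radiative transfer equation across one cell.
import numpy as np

def rt_step(I, j, alpha, ds):
    """Exact update of specific intensity for constant j, alpha in a cell."""
    tau = alpha * ds                       # optical depth of the cell
    if tau < 1e-10:                        # optically thin limit
        return I + j * ds
    return I * np.exp(-tau) + (j / alpha) * (1.0 - np.exp(-tau))

I = 0.0
for j_cell, a_cell in [(1.0, 0.1), (2.0, 5.0), (0.5, 0.01)]:
    I = rt_step(I, j_cell, a_cell, ds=0.1)
```

Because the update is exact in both the optically thick and optically thin limits, it remains stable across the full range of opacities encountered along a ray through an accretion flow.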
Codes That Support Smart Growth Development
Provides examples of local zoning codes that support smart growth development, categorized by: unified development code, form-based code, transit-oriented development, design guidelines, street design standards, and zoning overlay.
The Influence of Building Codes on Recreation Facility Design.
ERIC Educational Resources Information Center
Morrison, Thomas A.
1989-01-01
Implications of building codes upon design and construction of recreation facilities are investigated (national building codes, recreation facility standards, and misperceptions of design requirements). Recreation professionals can influence architectural designers to correct past deficiencies, but they must understand architectural and…
Design of airborne imaging spectrometer based on curved prism
NASA Astrophysics Data System (ADS)
Nie, Yunfeng; Xiangli, Bin; Zhou, Jinsong; Wei, Xiaoxiao
2011-11-01
A novel moderate-resolution imaging spectrometer spanning the visible to near-infrared wavelength range with a spectral resolution of 10 nm, which combines curved prisms with the Offner configuration, is introduced. Compared to conventional imaging spectrometers based on dispersive prisms or diffractive gratings, this design possesses the characteristics of small size, compact structure, and low mass, as well as little spectral line curvature (smile) and spectral band curvature (keystone or frown). Moreover, the use of compound curved prisms made of two or more different materials can greatly reduce the nonlinearity inevitably introduced by prismatic dispersion. The utilization ratio of light radiation is much higher than that of imaging spectrometers of the same type based on a combination of diffractive gratings and concentric optics. In this paper, the Seidel aberration theory of curved prisms and the optical principles of the Offner configuration are presented first. Then the optical design layout of the spectrometer is presented, and the performance of this design, including spot diagrams and MTF, is analyzed. Several telescope types matching this system are also provided. This work offers an innovative perspective on the optical system design of airborne spectral imagers and can therefore provide theoretical guidance for imaging spectrometers of the same kind.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban
2018-05-01
We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given "training" set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) code to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of "training" settings with experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max, and to a lesser extent the maximum tensile strength σ_n^max, govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem
2008-01-01
A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In the methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment-resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on the resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.
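A hedged sketch of such a multilinear backbone follows, with placeholder breakpoints rather than the calibrated S1L values: elastic to yield, hardening to an ultimate (capping) point, negative stiffness down to a residual plateau.

```python
# Piecewise-linear capacity (backbone) curve; breakpoints are placeholders.
import numpy as np

def backbone(d, dy=0.02, fy=0.10, du=0.08, fu=0.14, dr=0.20, fr=0.03):
    """Base shear coefficient vs. roof displacement [m]."""
    pts_d = np.array([0.0, dy, du, dr, 10 * dr])
    pts_f = np.array([0.0, fy, fu, fr, fr])   # negative stiffness du -> dr,
    return np.interp(d, pts_d, pts_f)          # flat residual beyond dr

d = np.linspace(0.0, 0.25, 100)
f = backbone(d)      # force-deformation law for an SDOF time-history run
```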
Limb darkening effect on transit light curves of HAT-P-32b
NASA Astrophysics Data System (ADS)
Çuha, H.; Erdem, A.
2018-02-01
The transit light curve of a transiting exoplanet offers us an opportunity to measure the fractional radius, semi-major axis and orbital inclination of the star-planet system. The precision of those parameters is strongly affected by the limb darkening of the host star. In this study, we examine the limb darkening effect on the transit light curves of HAT-P-32b. The transit light curves of HAT-P-32b were observed on three nights at TUBITAK National Observatory with a 100-cm aperture telescope in Bessel R and V filters. The light curves were solved using the JKTEBOP code. Linear, square-root, logarithmic and quadratic limb darkening laws were taken into account during the analysis. The limb darkening coefficients derived from our observations were then compared with theoretical values given in the literature. We conclude that more sensitive data, such as space-based photometric observations, are needed in order to determine the limb darkening coefficients accurately.
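For reference, the four laws considered appear below in their standard forms, where mu = cos(gamma) is the foreshortening angle, I(1) is the disc-centre intensity, and the coefficient values used are arbitrary illustration values, not the fitted HAT-P-32 coefficients.

```python
# Standard limb darkening laws, I(mu)/I(1).
import numpy as np

def linear(mu, u):        return 1 - u * (1 - mu)
def quadratic(mu, a, b):  return 1 - a * (1 - mu) - b * (1 - mu) ** 2
def sqrt_law(mu, c, d):   return 1 - c * (1 - mu) - d * (1 - np.sqrt(mu))
def log_law(mu, e, f):    return 1 - e * (1 - mu) - f * mu * np.log(mu)

mu = np.linspace(0.01, 1.0, 100)
profile = quadratic(mu, a=0.3, b=0.2)   # intensity across the stellar disc
```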
Error Correcting Codes and Related Designs
1990-09-30
Comparison of space radiation calculations for deterministic and Monte Carlo transport codes
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo
For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments, such as the 1977 solar minimum galactic cosmic ray environment, are used as well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.
Exoplanet Yield Estimation for Decadal Study Concepts using EXOSIMS
NASA Astrophysics Data System (ADS)
Morgan, Rhonda; Lowrance, Patrick; Savransky, Dmitry; Garrett, Daniel
2016-01-01
The anticipated upcoming large mission study concepts for the direct imaging of exo-earths present an exciting opportunity for exoplanet discovery and characterization. While these telescope concepts would also be capable of conducting a broad range of astrophysical investigations, the most difficult technology challenges are driven by the requirements for imaging exo-earths. The exoplanet science yield of these mission concepts will drive design trades and mission concept comparisons. To assist in these trade studies, the Exoplanet Exploration Program Office (ExEP) is developing a yield estimation tool that emphasizes transparency and consistent comparison of various design concepts. The tool will provide a parametric estimate of the science yield of various mission concepts using contrast curves from physics-based model codes and Monte Carlo simulations of design reference missions under realistic constraints, such as solar avoidance angles, the observatory orbit, propulsion limitations of star shades, the accessibility of candidate targets, local and background zodiacal light levels, and background confusion by stars and galaxies. The Python tool utilizes Dmitry Savransky's EXOSIMS (Exoplanet Open-Source Imaging Mission Simulator) design reference mission simulator, which is being developed for the WFIRST Preliminary Science program. ExEP is extending and validating the tool for future mission concepts under consideration for the upcoming 2020 decadal review. We present a validation plan and preliminary yield results for a point design.
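A toy version of the Monte Carlo completeness idea underlying such yield estimates is sketched below; the numbers are placeholders, and EXOSIMS itself handles full Keplerian orbits, zodiacal light, and mission scheduling, none of which appear here.

```python
# Toy detectability Monte Carlo: fraction of random planets above a
# notional inner working angle and contrast floor. Values are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
a = rng.uniform(0.7, 1.5, n)              # semi-major axis [AU]
Rp = rng.uniform(0.8, 1.4, n) * 4.26e-5   # planet radius [AU] (~Earth radii)
beta = np.arccos(rng.uniform(-1, 1, n))   # random phase angle [rad]
albedo = 0.3

# Lambert phase function and planet/star flux ratio
phi = (np.sin(beta) + (np.pi - beta) * np.cos(beta)) / np.pi
contrast = albedo * phi * (Rp / a) ** 2
sep = a * np.sin(beta)                    # projected separation [AU]

d_pc = 10.0                               # target distance [pc]
iwa_arcsec, floor = 0.05, 5e-11           # notional IWA and contrast floor
detectable = (sep / d_pc > iwa_arcsec) & (contrast > floor)
print(f"completeness ~ {detectable.mean():.2%}")
```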
Protograph LDPC Codes Over Burst Erasure Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes, with an iterative decoding threshold that approaches the capacity of the binary erasure channel. The other class is designed for short block sizes, based on maximizing the minimum stopping set size. For high code rates and short blocks, the second class outperforms the first class.
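On the binary erasure channel, iterative LDPC decoding reduces to the peeling procedure sketched below, which stalls exactly when the remaining erasures contain a stopping set (hence the design emphasis on maximizing the minimum stopping set size). The parity-check matrix and received vector are toy values, not the paper's codes.

```python
# Peeling decoder for erasures on a generic LDPC parity-check matrix H.
import numpy as np

def peel(H, y):
    """y: received bits with None marking erasures. Returns decoded list."""
    y = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            support = np.flatnonzero(row)
            erased = [i for i in support if y[i] is None]
            if len(erased) == 1:                      # one unknown: solvable
                known = sum(y[i] for i in support if y[i] is not None) % 2
                y[erased[0]] = known                  # parity pins the bit
                progress = True
    return y

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(peel(H, [0, None, 1, 1, None, None]))   # -> [0, 1, 1, 1, 0, 1]
```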
DataRocket: Interactive Visualisation of Data Structures
NASA Astrophysics Data System (ADS)
Parkes, Steve; Ramsay, Craig
2010-08-01
CodeRocket is a software engineering tool that provides cognitive support to the software engineer for reasoning about a method or procedure and for documenting the resulting code [1]. DataRocket is a software engineering tool designed to support visualisation of and reasoning about program data structures. DataRocket is part of the CodeRocket family of software tools developed by Rapid Quality Systems [2], a spin-out company from the Space Technology Centre at the University of Dundee. CodeRocket and DataRocket integrate seamlessly with existing architectural design and coding tools and provide extensive documentation with little or no effort on the part of the software engineer. Comprehensive, abstract, detailed design documentation is available early in a project so that it can be used for design reviews with project managers and non-expert stakeholders. Code and documentation remain fully synchronised even when changes are implemented in the code without reference to the existing documentation. At the end of a project the press of a button suffices to produce the detailed design document. Existing legacy code can be easily imported into CodeRocket and DataRocket to reverse-engineer detailed design documentation, making legacy code more manageable and adding substantially to its value. This paper introduces CodeRocket. It then explains the rationale for DataRocket and describes the key features of this new tool. Finally, the major benefits of DataRocket for different stakeholders are considered.
GridTool: A surface modeling and grid generation tool
NASA Technical Reports Server (NTRS)
Samareh-Abolhassani, Jamshid
1995-01-01
GridTool is designed around the concept that the surface grids are generated on a set of bi-linear patches. This type of grid generation is quite easy to implement, and it avoids the problems associated with complex CAD surface representations and associated surface parameterizations. However, the resulting surface grids are close to but not on the original CAD surfaces. This problem can be alleviated by projecting the resulting surface grids onto the original CAD surfaces. GridTool is designed primarily for unstructured grid generation systems. Currently, GridTool supports the VGRID and FELISA systems, and it can be easily extended to support other unstructured grid generation systems. The data in GridTool are stored parametrically, so that once the problem is set up, one can modify the surfaces and the entire set of points, curves and patches will be updated automatically. This is very useful in a multidisciplinary design and optimization process. GridTool is written entirely in ANSI 'C'; the interface is based on the FORMS library, and the graphics on the GL library. The code has been tested successfully on IRIS workstations running IRIX 4.0 and above. Memory is allocated dynamically; therefore, memory size will depend on the complexity of the geometry/grid. The GridTool data structure is based on a linked-list structure, which allows the required memory to expand and contract dynamically according to the user's data size and actions. The data structure contains several types of objects, such as points, curves, patches, sources and surfaces. At any given time there is always an active object, which is drawn in magenta or in its highlighted color as defined by the resource file discussed later.
Simulation of Conformal Spiral Slot Antennas on Composite Platforms
NASA Technical Reports Server (NTRS)
Volakis, J. L.; Nurnberger, M. W.; Ozdemir, T.
1998-01-01
During the course of the grant, we wrote and distributed about 12 reports and an equal number of journal papers supported fully or in part by this grant. The list of reports (title and abstract) and papers are given in Appendices A and B. This grant has indeed been instrumental in developing a robust hybrid finite element method for the analysis of complex broadband antennas on doubly curved platforms. Prior to the grant, our capability was limited to simple printed patch antennas on mostly planar platforms. More specifically: (1) mixed element formulations were developed and new edge-based prisms were introduced; (2) these elements were important in permitting flexibility in geometry gridding for most antennas of interest; (3) new perfectly matched absorbers were introduced for mesh truncations associated with highly curved surfaces; (4) fast integral algorithms were introduced for boundary integral truncations, reducing CPU time from O(N^2) down to O(N^1.5) or less; (5) frequency extrapolation schemes were developed for efficient broadband performance evaluations; this activity has been successfully continued by NASA researchers; (6) computer codes were developed and extensively tested for several broadband configurations, including FEMA-CYL, FEMA-PRISM and FEMA-TETRA, written by L. Kempel, T. Ozdemir and J. Gong, respectively; (7) a new infinite balun feed was designed with nearly constant impedance over the 800-3000 MHz operational band; (8) a complete slot spiral antenna was developed, fabricated and tested at NASA Langley. This new design is a culmination of the project's goals and integrates the computational and experimental efforts. The antenna design resulted in a U.S. patent and was revised three times to achieve the desired bandwidth and gain requirements from 800-3000 MHz.
Transport and equilibrium in field-reversed mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, J.K.
Two plasma models relevant to compact torus research have been developed to study transport and equilibrium in field-reversed mirrors. In the first model, for small Larmor radius and large collision frequency, the plasma is described as an adiabatic hydromagnetic fluid. In the second model, for large Larmor radius and small collision frequency, a kinetic theory description has been developed. Various aspects of the two models have been studied in five computer codes: ADB, AV, NEO, OHK, and RES. The ADB code computes two-dimensional equilibrium and one-dimensional transport in a flux coordinate. The AV code calculates orbit average integrals in a harmonic oscillator potential. The NEO code follows particle trajectories in a Hill's vortex magnetic field to study stochasticity, invariants of the motion, and orbit average formulas. The OHK code displays analytic psi(r), B_z(r), phi(r), E_r(r) formulas developed for the kinetic theory description. The RES code calculates resonance curves to consider overlap regions relevant to stochastic orbit behavior.
Xpatch prediction improvements to support multiple ATR applications
NASA Astrophysics Data System (ADS)
Andersh, Dennis J.; Lee, Shung W.; Moore, John T.; Sullivan, Douglas P.; Hughes, Jeff A.; Ling, Hao
1998-08-01
This paper describes an electromagnetic computer prediction code for generating radar cross section (RCS), time-domain signatures, and synthetic aperture radar (SAR) images of realistic 3D vehicles. The vehicle, typically an airplane or a ground vehicle, is represented by a computer-aided design (CAD) file with triangular facets, IGES curved surfaces, or solid geometries. The computer code, Xpatch, based on the shooting-and-bouncing-ray technique, is used to calculate the polarimetric radar return from the vehicles represented by these different CAD files. Xpatch computes the first-bounce physical optics (PO) plus the physical theory of diffraction (PTD) contributions, and calculates the multi-bounce ray contributions by using geometric optics and PO for complex vehicles with materials. It has been found that without the multi-bounce calculations, the radar return is typically 10 to 15 dB too low. Examples of predicted range profiles, SAR imagery, and RCS for several different geometries are compared with measured data to demonstrate the quality of the predictions. Recent enhancements to Xpatch include improvements for millimeter-wave applications, hybridization with the finite element method for small geometric features, and support for additional IGES entities, including trimmed and untrimmed surfaces.
24 CFR 941.203 - Design and construction standards.
Code of Federal Regulations, 2013 CFR
2013-04-01
... national building code, such as Uniform Building Code, Council of American Building Officials Code, or Building Officials Conference of America Code; (2) Applicable State and local laws, codes, ordinances, and... intended to serve. Building design and construction shall strive to encourage in residents a proprietary...
24 CFR 941.203 - Design and construction standards.
Code of Federal Regulations, 2012 CFR
2012-04-01
... national building code, such as Uniform Building Code, Council of American Building Officials Code, or Building Officials Conference of America Code; (2) Applicable State and local laws, codes, ordinances, and... intended to serve. Building design and construction shall strive to encourage in residents a proprietary...
A computer program (MACPUMP) for interactive aquifer-test analysis
Day-Lewis, F. D.; Person, M.A.; Konikow, Leonard F.
1995-01-01
This report introduces MACPUMP (Version 1.0), an aquifer-test-analysis package for use with Macintosh computers. The report outlines the input-data format, describes the solutions encoded in the program, explains the menu items, and offers a tutorial illustrating the use of the program. The package reads list-directed aquifer-test data from a file, plots the data to the screen, generates and plots type curves for several different test conditions, and allows mouse-controlled curve matching. MACPUMP features pull-down menus, a simple text viewer for displaying data files, and optional on-line help windows. This version includes the analytical solutions for nonleaky and leaky confined aquifers, using both type-curve and straight-line methods, and for the analysis of single-well slug tests using type curves. An executable version of the code and sample input data sets are included on an accompanying floppy disk.
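As a minimal sketch of the nonleaky confined-aquifer solution that underlies such type curves (the Theis solution; the pumping rate and aquifer parameters below are hypothetical):

```python
import numpy as np
from scipy.special import exp1  # E1(u), the Theis well function W(u)

def theis_drawdown(t, Q, T, S, r):
    """Theis drawdown for a nonleaky confined aquifer:
    s = Q / (4 pi T) * W(u), with u = r^2 S / (4 T t)."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# dimensionless type curve: W(u) versus 1/u, plotted log-log for matching
u = np.logspace(-8, 0, 200)
W = exp1(u)

# example drawdown 100 m from a well pumping 0.01 m^3/s (hypothetical values)
t = np.logspace(2, 6, 50)                      # seconds
s = theis_drawdown(t, Q=0.01, T=1e-3, S=1e-4, r=100.0)
```

Curve matching then overlays the observed drawdown-versus-time data on the W(u)-versus-1/u curve to read off the transmissivity T and storativity S.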
Lattice dynamics of Ru2FeX (X = Si, Ge) Full Heusler alloys
NASA Astrophysics Data System (ADS)
Rizwan, M.; Afaq, A.; Aneeza, A.
2018-05-01
In the present work, the lattice dynamics of Ru2FeX (X = Si, Ge) full Heusler alloys are investigated using density functional theory (DFT) within the generalized gradient approximation (GGA) in a plane wave basis, with norm-conserving pseudopotentials. Phonon dispersion curves and phonon densities of states are obtained using the first-principles linear response approach of density functional perturbation theory (DFPT) as implemented in the Quantum ESPRESSO code. The phonon dispersion curves indicate that, for both Heusler alloys, there is no imaginary phonon mode in the whole Brillouin zone, confirming the dynamical stability of these alloys in the L21-type structure. There is considerable overlap between the acoustic and optical phonon modes, indicating that no phonon band gap exists in the dispersion curves of the alloys. The same result is shown by the phonon density of states curves for both Heusler alloys. The reststrahlen band for Ru2FeSi is found to be smaller than that for Ru2FeGe.
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
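The paper's models are fitted with SAS PROC MCMC; as a language-neutral illustration of the same idea (explicitly specifying a non-normal error distribution and sampling the posterior), here is a deliberately simplified random-walk Metropolis sketch in Python for a linear growth model with Student-t errors. The simulated data, the fixed degrees of freedom, and the flat priors are all assumptions of the sketch, and person-level random effects are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated longitudinal data: linear growth with heavy-tailed (t) errors
n_subj, n_wave = 50, 5
times = np.arange(n_wave)
y = (rng.normal(10.0, 1.0, n_subj)[:, None]
     + rng.normal(2.0, 0.3, n_subj)[:, None] * times
     + rng.standard_t(3, (n_subj, n_wave)))

def log_post(theta, df=3.0):
    """Log posterior (flat priors) for mean intercept b0, mean slope b1,
    and log error scale, with Student-t errors of fixed df."""
    b0, b1, log_s = theta
    z = (y - (b0 + b1 * times)) / np.exp(log_s)
    return (-0.5 * (df + 1) * np.log1p(z ** 2 / df)).sum() - y.size * log_s

# random-walk Metropolis sampler
theta, chain = np.array([8.0, 1.0, 0.0]), []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.02, 3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
post = np.array(chain[5000:])
print(post.mean(axis=0))   # posterior means of b0, b1, log sigma
```

Swapping the t log-density for a normal one recovers the standard growth curve model, which is exactly the flexibility the proposed framework emphasizes.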
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
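The paper's example is in R; a minimal Python sketch of the same two steps, fitting candidate densities by maximum likelihood and ranking them by AIC, might look as follows (the depth values are invented for illustration):

```python
import numpy as np
from scipy import stats

# hypothetical water depths (m) at fish observation locations
depths = np.array([0.30, 0.45, 0.50, 0.50, 0.55, 0.60, 0.65, 0.70,
                   0.75, 0.80, 0.85, 0.90, 0.95, 1.10, 1.40])

candidates = {"lognormal": stats.lognorm,
              "gamma": stats.gamma,
              "Weibull": stats.weibull_min}

def aic(dist, data):
    """Maximum-likelihood fit with the location anchored at zero,
    scored by Akaike's information criterion."""
    params = dist.fit(data, floc=0.0)
    k = len(params) - 1                 # loc is fixed, not a free parameter
    loglik = dist.logpdf(data, *params).sum()
    return 2 * k - 2 * loglik

for name, dist in candidates.items():
    print(f"{name:10s} AIC = {aic(dist, depths):7.2f}")
```

The selected fitted density, rescaled if desired so its mode equals 1, then serves directly as the HSC curve, with parameter uncertainty carried through from the fit.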
Mean Line Pump Flow Model in Rocket Engine System Simulation
NASA Technical Reports Server (NTRS)
Veres, Joseph P.; Lavelle, Thomas M.
2000-01-01
A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.
Towards Improved Considerations of Risk in Seismic Design (Plinius Medal Lecture)
NASA Astrophysics Data System (ADS)
Sullivan, T. J.
2012-04-01
The aftermath of recent earthquakes is a reminder that seismic risk is a very relevant issue for our communities. Implicit within the seismic design standards currently in place around the world is that minimum acceptable levels of seismic risk will be ensured through design in accordance with the codes. All the same, none of the design standards specify what the minimum acceptable level of seismic risk actually is. Instead, a series of deterministic limit states are set which engineers then demonstrate are satisfied for their structure, typically through the use of elastic dynamic analyses adjusted to account for non-linear response using a set of empirical correction factors. From the early nineties the seismic engineering community has begun to recognise numerous fundamental shortcomings with such seismic design procedures in modern codes. Deficiencies include the use of elastic dynamic analysis for the prediction of inelastic force distributions, the assignment of uniform behaviour factors for structural typologies irrespective of the structural proportions and expected deformation demands, and the assumption that hysteretic properties of a structure do not affect the seismic displacement demands, amongst other things. In light of this a number of possibilities have emerged for improved control of risk through seismic design, with several innovative displacement-based seismic design methods now well developed. For a specific seismic design intensity, such methods provide a more rational means of controlling the response of a structure to satisfy performance limit states. While the development of such methodologies does mark a significant step forward for the control of seismic risk, they do not, on their own, identify the seismic risk of a newly designed structure. In the U.S. a rather elaborate performance-based earthquake engineering (PBEE) framework is under development, with the aim of providing seismic loss estimates for new buildings. The PBEE framework consists of the following four main analysis stages: (i) probabilistic seismic hazard analysis to give the mean occurrence rate of earthquake events having an intensity greater than a threshold value, (ii) structural analysis to estimate the global structural response, given a certain value of seismic intensity, (iii) damage analysis, in which fragility functions are used to express the probability that a building component exceeds a damage state, as a function of the global structural response, (iv) loss analysis, in which the overall performance is assessed based on the damage state of all components. This final step gives estimates of the mean annual frequency with which various repair cost levels (or other decision variables) are exceeded. The realisation of this framework does suggest that risk-based seismic design is now possible. However, comparing current code approaches with the proposed PBEE framework, it becomes apparent that mainstream consulting engineers would have to go through a massive learning curve in order to apply the new procedures in practice. With this in mind, it is proposed that simplified loss-based seismic design procedures are a logical means of helping the engineering profession transition from what are largely deterministic seismic design procedures in current codes, to more rational risk-based seismic design methodologies. Examples are provided to illustrate the likely benefits of adopting loss-based seismic design approaches in practice.
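As a schematic numerical illustration of how stages (i) through (iv) combine into a mean annual frequency of exceedance (the hazard curve and the single collapsed fragility below are invented placeholders for the full component-by-component PBEE computation):

```python
import numpy as np
from scipy.stats import norm

# (i) hypothetical hazard curve: annual rate of exceeding intensity im
im = np.linspace(0.05, 2.0, 400)            # intensity measure, e.g. PGA [g]
lam = 1e-2 * (im / 0.1) ** -2.5             # assumed power-law hazard

# (ii)-(iv) collapsed into one lognormal fragility for a damage/loss state
p_ds = norm.cdf(np.log(im / 0.6) / 0.5)     # median 0.6 g, dispersion 0.5

# mean annual frequency of exceeding the state:
# lambda_DS = integral of P(DS | im) * |d lambda / d im| d im
dlam = -np.gradient(lam, im)                # magnitude of the hazard slope
lam_ds = np.trapz(p_ds * dlam, im)
print(f"mean annual frequency of exceedance: {lam_ds:.2e}")
```

In the full framework the fragility step is replaced by structural analysis, component fragilities, and repair-cost models, but the convolution with the hazard curve has exactly this form.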
NASA Astrophysics Data System (ADS)
Kumar, Nitin; Singh, Udaybir; Kumar, Anil; Bhattacharya, Ranajoy; Singh, T. P.; Sinha, A. K.
2013-02-01
The design of a 120 GHz, 1 MW gyrotron for plasma fusion applications is presented in this paper. The mode selection is carried out with the aims of minimum mode competition, minimum cavity wall heating, etc. On the basis of the selected operating mode, the interaction cavity design and beam-wave interaction computation are carried out by using a PIC code. The design of a triode-type Magnetron Injection Gun (MIG) is also presented. The trajectory code EGUN, the synthesis code MIGSYN and the data analysis code MIGANS are used in the MIG design. Further, the design of the MIG is validated by using another trajectory code, TRAK. The design results for the beam dumping system (collector) and the RF window are also presented. A depressed collector is designed to enhance the overall tube efficiency. The design study confirms >1 MW output power with a tube efficiency of around 50% (including the collector efficiency).
Rethinking non-inferiority: a practical trial design for optimising treatment duration.
Quartagno, Matteo; Walker, A Sarah; Carpenter, James R; Phillips, Patrick Pj; Parmar, Mahesh Kb
2018-06-01
Background: Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods: We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results: A total sample size of ~500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion: Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.
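As a toy illustration of the fractional-polynomial part of this approach (the arm-level cure proportions below are made up, and a real analysis would use a proper binomial likelihood rather than least squares), a best two-term fractional polynomial can be selected by grid search over the conventional power set:

```python
import numpy as np
from itertools import combinations_with_replacement

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]     # conventional FP power set

def fp_term(x, p):
    """Fractional-polynomial basis term; power 0 denotes log(x)."""
    return np.log(x) if p == 0 else x ** p

def fit_fp2(x, y):
    """Best two-term fractional polynomial by residual sum of squares.
    A repeated power (p, p) uses the pair x^p and x^p * log(x)."""
    best = None
    for p1, p2 in combinations_with_replacement(POWERS, 2):
        t2 = fp_term(x, p2) * (np.log(x) if p1 == p2 else 1.0)
        X = np.column_stack([np.ones_like(x), fp_term(x, p1), t2])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        rss = float(((y - X @ beta) ** 2).sum())
        if best is None or rss < best[0]:
            best = (rss, (p1, p2), beta)
    return best

# made-up arm-level cure proportions versus treatment duration (weeks)
x = np.array([8.0, 10.0, 12.0, 14.0, 16.0, 18.0, 20.0])
y = np.array([0.62, 0.74, 0.81, 0.86, 0.89, 0.90, 0.91])
rss, powers, beta = fit_fp2(x, y)
print(powers, beta)
```

The fitted curve, not a single margin comparison, is then the trial output from which an acceptable minimal duration can be read off.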
3D Modeling of Spectra and Light Curves of Hot Jupiters with PHOENIX; a First Approach
NASA Astrophysics Data System (ADS)
Jiménez-Torres, J. J.
2016-04-01
A detailed global circulation model was used to feed the PHOENIX code and calculate 3D spectra and light curves of hot Jupiters. Cloud free and dusty radiative fluxes for the planet HD179949b were modeled to show differences between them. The PHOENIX simulations can explain the broad features of the observed 8 μm light curves, including the fact that the planet-star flux ratio peaks before the secondary eclipse. The PHOENIX reflection spectrum matches the Spitzer secondary-eclipse depth at 3.6 μm and underpredicts eclipse depths at 4.5, 5.8 and 8.0 μm. These discrepancies result from the chemical composition and suggest the incorporation of different metallicities in future studies.
Growthcurver: an R package for obtaining interpretable metrics from microbial growth curves.
Sprouffske, Kathleen; Wagner, Andreas
2016-04-19
Plate readers can measure the growth curves of many microbial strains in a high-throughput fashion. The hundreds of absorbance readings collected simultaneously for hundreds of samples create technical hurdles for data analysis. Growthcurver summarizes the growth characteristics of microbial growth curve experiments conducted in a plate reader. The data are fitted to a standard form of the logistic equation, and the parameters have clear interpretations on population-level characteristics, like doubling time, carrying capacity, and growth rate. Growthcurver is an easy-to-use R package available for installation from the Comprehensive R Archive Network (CRAN). The source code is available under the GNU General Public License and can be obtained from Github (Sprouffske K, Growthcurver sourcecode, 2016).
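Growthcurver itself is an R package; the logistic form it fits, and the derived doubling time, can be sketched in Python as follows (the absorbance data are simulated for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, n0, r):
    """Logistic growth N(t) = K / (1 + ((K - N0)/N0) * exp(-r t))."""
    return k / (1.0 + ((k - n0) / n0) * np.exp(-r * t))

# simulated plate-reader absorbance readings over 24 hours
rng = np.random.default_rng(1)
t = np.linspace(0.0, 24.0, 25)
od = logistic(t, 1.2, 0.02, 0.45) + rng.normal(0.0, 0.01, t.size)

(k, n0, r), _ = curve_fit(logistic, t, od, p0=[1.0, 0.05, 0.3])
print(f"K = {k:.2f}, r = {r:.2f}/h, doubling time = {np.log(2)/r:.2f} h")
```

The carrying capacity K, growth rate r, and doubling time ln(2)/r are the population-level summaries the package reports for each well.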
Manufacturing complexity analysis
NASA Technical Reports Server (NTRS)
Delionback, L. M.
1977-01-01
The analysis of the complexity of a typical system is presented. Starting with the subsystems of an example system, the step-by-step procedure for analysis of the complexity of an overall system is given. The learning curves for the various subsystems are determined, along with the corresponding numbers of relevant design parameters. Trend curves are then plotted for the learning curve slopes versus the various design-oriented parameters, e.g., number of parts versus learning curve slope, or number of fasteners versus learning curve slope. Representative cuts are taken from each trend curve, and a figure-of-merit analysis is made for each of the subsystems. Based on these values, a characteristic curve is plotted which is indicative of the complexity of the particular subsystem. Each such characteristic curve is based on a universe of trend curve data taken from data points observed for the subsystem in question. Thus, a characteristic curve is developed for each of the subsystems in the overall system.
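For reference, the standard unit learning curve relates the cost of the n-th unit to the slope s (an 85% slope means doubling the cumulative quantity multiplies unit cost by 0.85): y_n = a n^b with b = log(s)/log(2). A short sketch with made-up cost data:

```python
import numpy as np

def unit_value(n, first_unit, slope):
    """Crawford unit learning curve: y_n = a * n^b, b = log(slope)/log 2,
    so doubling the cumulative quantity multiplies unit cost by `slope`."""
    b = np.log(slope) / np.log(2.0)
    return first_unit * n ** b

# recover the slope exhibited by observed unit costs (made-up data)
n = np.arange(1, 11)
cost = unit_value(n, first_unit=100.0, slope=0.85)
b_fit = np.polyfit(np.log(n), np.log(cost), 1)[0]
print(f"fitted learning curve slope: {2 ** b_fit:.2f}")   # ~0.85
```

Fitting b on log-log axes, as here, is how a subsystem's learning curve slope is extracted from production data before it is plotted against design parameters.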
Numerically evaluating the bispectrum in curved field-space with PyTransport 2.0
NASA Astrophysics Data System (ADS)
Ronayne, John W.; Mulryne, David J.
2018-01-01
We extend the transport framework for numerically evaluating the power spectrum and bispectrum in multi-field inflation to the case of a curved field-space metric. This method naturally accounts for all sub- and super-horizon tree level effects, including those induced by the curvature of the field-space. We present an open source implementation of our equations in an extension of the publicly available PyTransport code. Finally we illustrate how our technique is applied to examples of inflationary models with a non-trivial field-space metric.
A search for pulsations in two Algol-type systems V1241 Tau and GQ Dra
NASA Astrophysics Data System (ADS)
Ulaş, Burak; Ulusoy, Ceren; Gazeas, Kosmas; Erkan, Naci; Liakos, Alexios
2014-02-01
We present new photometric observations of two eclipsing binary systems, V1241 Tau and GQ Dra. We use the following methodology: initially, the Wilson-Devinney code is applied to the light curves in order to determine the photometric elements of the systems; then, the residuals are analysed using Fourier techniques. The results are as follows: one frequency can possibly be attributed to a real light variation of V1241 Tau, while there is no evidence of pulsations in the light curve of GQ Dra.
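The frequency analysis of residuals can be illustrated with a periodogram; the sketch below (synthetic residuals with an invented frequency and amplitude) uses astropy's Lomb-Scargle implementation rather than the specific Fourier tools of the paper:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)

# synthetic residuals after subtracting a binary (e.g. Wilson-Devinney) model
t = np.sort(rng.uniform(0.0, 10.0, 300))                 # days
resid = (0.004 * np.sin(2 * np.pi * 18.9 * t)            # injected pulsation
         + rng.normal(0.0, 0.002, t.size))               # photometric noise

freq, power = LombScargle(t, resid).autopower(minimum_frequency=1.0,
                                              maximum_frequency=50.0)
print(freq[np.argmax(power)])    # recovered frequency, cycles/day (~18.9)
```

A candidate pulsation is accepted only if its peak stands well above the noise level of the periodogram, which is why a single marginal frequency is reported as "possibly" real.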
TDAAPS 2: Acoustic Wave Propagation in Attenuative Moving Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph A.
This report outlines recent enhancements to the TDAAPS algorithm first described by Symons et al., 2005. One of the primary additions to the code is the ability to specify an attenuative media using standard linear fluid mechanisms to match reasonably general frequency versus loss curves, including common frequency versus loss curves for the atmosphere and seawater. Other improvements that will be described are the addition of improved numerical boundary conditions via various forms of Perfectly Matched Layers, enhanced accuracy near high contrast media interfaces, and improved physics options.
49 CFR 1312.7 - STB tariff designation.
Code of Federal Regulations, 2014 CFR
2014-10-01
... tariff designation consisting of: (1) The characters “STB”; (2) The assigned alpha code of the carrier or... tariff 1000-A could be designated 1000-B, etc. (b) Alpha codes. Alpha codes are assigned to carriers and...
49 CFR 1312.7 - STB tariff designation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... tariff designation consisting of: (1) The characters “STB”; (2) The assigned alpha code of the carrier or... tariff 1000-A could be designated 1000-B, etc. (b) Alpha codes. Alpha codes are assigned to carriers and...
49 CFR 1312.7 - STB tariff designation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... tariff designation consisting of: (1) The characters “STB”; (2) The assigned alpha code of the carrier or... tariff 1000-A could be designated 1000-B, etc. (b) Alpha codes. Alpha codes are assigned to carriers and...
49 CFR 1312.7 - STB tariff designation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... tariff designation consisting of: (1) The characters “STB”; (2) The assigned alpha code of the carrier or... tariff 1000-A could be designated 1000-B, etc. (b) Alpha codes. Alpha codes are assigned to carriers and...
49 CFR 1312.7 - STB tariff designation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... tariff designation consisting of: (1) The characters “STB”; (2) The assigned alpha code of the carrier or... tariff 1000-A could be designated 1000-B, etc. (b) Alpha codes. Alpha codes are assigned to carriers and...
Reliability Based Geometric Design of Horizontal Circular Curves
NASA Astrophysics Data System (ADS)
Rajbongshi, Pabitra; Kalita, Kuldeep
2018-06-01
Geometric design of a horizontal circular curve primarily involves the radius of the curve and the stopping sight distance at the curve section. The minimum radius is decided based on the lateral thrust exerted on the vehicles, and the minimum stopping sight distance is provided to maintain safety in the longitudinal direction of travel. The available sight distance at a site can be regulated by changing the radius and the middle ordinate at the curve section. Both radius and sight distance depend on design speed. The speed of vehicles at any road section is a variable parameter; therefore, normally the 98th percentile speed is taken as the design speed. This work presents a probabilistic approach for evaluating stopping sight distance, considering the variability of all input parameters of sight distance. It is observed that the 98th percentile sight distance value is much lower than the sight distance corresponding to the 98th percentile speed. The distribution of the sight distance parameter is also studied and found to follow a lognormal distribution. Finally, reliability-based design charts are presented for both plain and hill regions, considering the effect of lateral thrust.
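A minimal Monte Carlo sketch of this idea, using the standard stopping-sight-distance relation SSD = v·t_r + v²/(2gf) on a level grade (all distributions below are hypothetical placeholders for the paper's calibrated inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
g = 9.81

# hypothetical input distributions (placeholders, not the paper's values)
v = rng.normal(80.0, 8.0, N) / 3.6              # speed: km/h -> m/s
t_r = rng.lognormal(np.log(2.0), 0.25, N)       # perception-reaction time [s]
f = rng.normal(0.35, 0.04, N)                   # longitudinal friction

# stopping sight distance on level grade: reaction + braking distance
ssd = v * t_r + v ** 2 / (2.0 * g * f)

print(np.percentile(ssd, 98))                   # 98th percentile of SSD itself
v98 = np.percentile(v, 98)                      # vs. SSD at the 98th percentile speed
print(v98 * 2.0 + v98 ** 2 / (2.0 * g * 0.35))
```

Comparing the two printed values makes the paper's point concrete: a percentile of the full SSD distribution is not the same as the SSD computed at a percentile speed with the other inputs fixed.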
Predicting the seismic performance of typical R/C healthcare facilities: emphasis on hospitals
NASA Astrophysics Data System (ADS)
Bilgin, Huseyin; Frangu, Idlir
2017-09-01
Reinforced concrete (RC) buildings constitute an important part of the current building stock in earthquake-prone countries such as Albania. The seismic response of structures during a severe earthquake plays a vital role in the extent of structural damage and the resulting injuries and losses. In this context, this study evaluates the expected performance of a five-story RC healthcare facility, representative of common practice in Albania, designed according to older codes. The design was based on the code requirements used in this region during the mid-1980s. Non-linear static and dynamic time history analyses were conducted on the structural model using the Zeus NL computer program. The dynamic time history analysis was conducted with a set of ground motions from real earthquakes. The building responses were estimated at the global level. FEMA 356 criteria were used to predict the seismic performance of the building. Structural response measures, such as the capacity curve and inter-story drift, under the set of ground motions and the pushover analysis results were compared, and a detailed seismic performance assessment was carried out. The main aim of this study is to demonstrate the application of a methodology for the earthquake performance assessment of existing buildings. The seismic performance of the structural model varied significantly under different ground motions. Results indicate that the case-study building exhibits inadequate seismic performance under different seismic excitations. In addition, reasons for the poor performance of the building are discussed.
Reconstructing photorealistic 3D models from image sequence using domain decomposition method
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei
2009-11-01
In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through some 3D scanning method. Structured light and photogrammetry are two main methods of acquiring 3D information, and both are expensive. Even when these expensive instruments are used, photorealistic 3D models are seldom available. In this paper, a new method to reconstruct photorealistic 3D models using a single camera is proposed. A square plate with coded marks glued to it is used to place the objects, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using a shape-from-silhouettes algorithm. The silhouettes are decomposed into a combination of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularizations are expressed as a finite element formulation, which can be solved locally, with information exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through this domain decomposition finite element method. Textures are assigned to each element mesh, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the results are encouraging.
Using Design-Based Latent Growth Curve Modeling with Cluster-Level Predictor to Address Dependency
ERIC Educational Resources Information Center
Wu, Jiun-Yu; Kwok, Oi-Man; Willson, Victor L.
2014-01-01
The authors compared the effects of using the true Multilevel Latent Growth Curve Model (MLGCM) with single-level regular and design-based Latent Growth Curve Models (LGCM) with or without the higher-level predictor on various criterion variables for multilevel longitudinal data. They found that random effect estimates were biased when the…
Structural design, analysis, and code evaluation of an odd-shaped pressure vessel
NASA Astrophysics Data System (ADS)
Rezvani, M. A.; Ziada, H. H.
1992-12-01
An effort to design, analyze, and evaluate a rectangular pressure vessel is described. Normally, pressure vessels are designed in circular or spherical shapes to prevent stress concentrations. In this case, because of operational limitations, the choice of vessels was limited to a rectangular pressure box with a removable cover plate. The American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code is used as a guideline for pressure containments whose width or depth exceeds 15.24 cm (6.0 in.) and where pressures will exceed 103.4 kPa (15.0 lbf/in²). This evaluation used Section VIII of this Code, hereafter referred to as the Code. The dimensions and working pressure of the subject vessel fall within the pressure vessel category of the Code. The Code design guidelines and rules do not directly apply to this vessel. Therefore, finite-element methodology was used to analyze the pressure vessel, and the Code was then used in qualifying the vessel to be stamped to the Code. Section VIII, Division 1 of the Code was used for the evaluation. This action was justified by selecting a material for which fatigue damage would not be a concern. The stress analysis results were then checked against the Code, and the thicknesses adjusted to satisfy Code requirements. Although not directly applicable, the Code design formulas for rectangular vessels were also considered and presented.
Implementation of straight and curved steel girder erection design tools construction : summary.
DOT National Transportation Integrated Search
2010-11-05
Project 0-5574, Curved Plate Girder Design for Safe and Economical Construction, resulted in the development of two design tools, UT Lift and UT Bridge. UT Lift is a spreadsheet-based program for analyzing steel girders during lifting while ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montégiani, Jean-François; Gaudin, Émilie; Després, Philippe
2014-08-15
In peptide receptor radionuclide therapy (PRRT), huge inter-patient variability in absorbed radiation doses per administered activity mandates the utilization of individualized dosimetry to evaluate therapeutic efficacy and toxicity. We created a reliable GPU-calculated dosimetry code (irtGPUMCD) and assessed ¹⁷⁷Lu-octreotate renal dosimetry in eight patients (4 cycles of approximately 7.4 GBq). irtGPUMCD was derived from a brachytherapy dosimetry code (bGPUMCD), which was adapted to ¹⁷⁷Lu PRRT dosimetry. Serial quantitative single-photon emission computed tomography (SPECT) images were obtained from three SPECT/CT acquisitions performed at 4, 24 and 72 hours after ¹⁷⁷Lu-octreotate administration, and registered with non-rigid deformation of CT volumes, to obtain the ¹⁷⁷Lu-octreotate 4D quantitative biodistribution. Local energy deposition from the β disintegrations was assumed. Using Monte Carlo gamma photon transport, irtGPUMCD computed the dose rate at each time point. The average kidney absorbed dose was obtained from 1-cm³ VOI dose rate samples on each cortex, subjected to a biexponential curve fit. Integration of the latter time-dose rate curve yielded the renal absorbed dose. The mean renal dose per administered activity was 0.48 ± 0.13 Gy/GBq (range: 0.30-0.71 Gy/GBq). Comparison to another PRRT dosimetry code (VRAK: Voxelized Registration and Kinetics) showed fair accordance with irtGPUMCD (11.4 ± 6.8%, range: 3.3-26.2%). These results suggest the possibility of using the irtGPUMCD code to personalize administered activity in PRRT. This could allow improving clinical outcomes by maximizing per-cycle tumor doses without exceeding the tolerable renal dose.
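The biexponential fit and its integration to an absorbed dose can be sketched as follows (the dose-rate values are hypothetical, and two extra synthetic time points are added because a four-parameter fit needs more samples than the three clinical acquisitions):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, l1, a2, l2):
    """Biexponential dose-rate model D'(t) = a1*exp(-l1*t) + a2*exp(-l2*t)."""
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

# hypothetical renal-cortex dose rates (Gy/h) at the imaging time points
t = np.array([1.0, 4.0, 24.0, 48.0, 72.0])      # hours post-injection
d = np.array([0.033, 0.029, 0.015, 0.010, 0.007])

popt, _ = curve_fit(biexp, t, d, p0=[0.02, 0.08, 0.015, 0.01],
                    bounds=(0.0, np.inf))
a1, l1, a2, l2 = popt

# absorbed dose = integral of the fitted dose-rate curve from 0 to infinity
dose = a1 / l1 + a2 / l2
print(f"kidney absorbed dose: {dose:.2f} Gy")
```

The closed-form integral a1/l1 + a2/l2 is what makes the biexponential model convenient: no numerical extrapolation beyond the last imaging time is needed.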
Teusch, V I; Wohlgemuth, W A; Piehler, A P; Jung, E M
2014-01-01
The aim of our pilot study was to apply contrast-enhanced color-coded ultrasound perfusion analysis in patients with vascular malformations to quantify microcirculatory alterations. 28 patients (16 female, 12 male, mean age 24.9 years) with high-flow (n = 6) or slow-flow (n = 22) malformations were analyzed before intervention. An experienced examiner performed color-coded Doppler sonography (CCDS) and Power Doppler as well as contrast-enhanced ultrasound after intravenous bolus injection of 1-2.4 ml of a second-generation ultrasound contrast medium (SonoVue®, Bracco, Milan). The contrast-enhanced examination was documented as a cine sequence over 60 s. The quantitative analysis, based on color-coded contrast-enhanced ultrasound (CEUS) images, included percentage peak enhancement (%peak), time to peak (TTP), area under the curve (AUC), and mean transit time (MTT). No side effects occurred after intravenous contrast injection. The mean %peak in arteriovenous malformations was almost twice as high as in slow-flow malformations. The area under the curve was 4 times higher in arteriovenous malformations compared to the mean value of the other malformations. The mean transit time was 1.4 times higher in high-flow malformations compared to slow-flow malformations. There was no difference in the time to peak between the different malformation types. The comparison between all vascular malformations and the surrounding tissue showed statistically significant differences for all analyzed data (%peak, TTP, AUC, MTT; p < 0.01). High-flow and slow-flow vascular malformations showed statistically significant differences in %peak (p < 0.01), AUC (p < 0.01), and MTT (p < 0.05). Color-coded perfusion analysis of CEUS seems to be a promising technique for the dynamic assessment of the microvasculature in vascular malformations.
Comparison of Model Calculations of Biological Damage from Exposure to Heavy Ions with Measurements
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Hada, Megumi; Cucinotta, Francis A.; Wu, Honglu
2014-01-01
The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X rays, the presence of shielding does not always reduce the radiation risks for energetic charged-particle exposure. The dose delivered by a charged particle increases sharply at the Bragg peak. However, the Bragg curve does not necessarily represent the biological damage along the particle path, since biological effects are influenced by the track structures of both primary and secondary particles. Therefore, the ''biological Bragg curve'' is dependent on the energy and the type of the primary particle and may vary for different biological end points. Measurements of the induction of micronuclei (MN) have been made across the Bragg curve in human fibroblasts exposed to energetic silicon and iron ions in vitro at two different energies, 300 MeV/nucleon and 1 GeV/nucleon. Although the data did not reveal an increased yield of MN at the location of the Bragg peak, increased inhibition of cell progression, which is related to cell death, was found at the Bragg peak location. These results are compared to calculations of biological damage using a stochastic Monte Carlo track structure model, the Galactic Cosmic Ray Event-based Risk Model (GERM) code (Cucinotta et al., 2011). The GERM code estimates the basic physical properties along the passage of heavy ions in tissue and shielding materials, by which the experimental set-up can be interpreted. The code can also be used to describe the biophysical events of interest in radiobiology, cancer therapy, and space exploration. The calculation has shown that the severely damaged cells at the Bragg peak are more likely to go through reproductive death, the so-called "overkill" effect.
Designing the Alluvial Riverbeds in Curved Paths
NASA Astrophysics Data System (ADS)
Macura, Viliam; Škrinár, Andrej; Štefunková, Zuzana; Muchová, Zlatica; Majorošová, Martina
2017-10-01
The paper presents a method of determining the shape of the riverbed in curves of a watercourse, based on the method of Ikeda (1975) developed for a slightly curved path in a sandy riverbed. Regulated rivers have essentially slightly and smoothly curved paths; therefore, this methodology provides an appropriate basis for river restoration. Based on research in the experimental reach of the Holeška Brook and several alluvial mountain streams, the methodology was adjusted. The method also takes into account other important characteristics of the bottom material: the shape and orientation of the particles, settling velocity, and drag coefficients. Thus, the method is mainly meant for natural sand-gravel material, which is heterogeneous and whose particle shape is very different from spherical. The calculation of the river channel in the curved path provides the basis for the design of an optimal habitat, but also for the design of the foundations of bankside armouring. The input data are adapted to the conditions of design practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dodson, R.J.; Feltus, M.A.
The low-temperature overpressurization protection system (LTOPS) is designed to protect the reactor pressure vessel (RPV) from brittle failure during startup and cooldown maneuvers in Westinghouse pressurized water reactors. For the Salem power plants, the power-operated relief valves (PORVs) mitigate pressure increases above a setpoint where an operational startup transient may put the RPV in the embrittlement fracture zone. The Title 10, Part 50, Code of Federal Regulations Appendix G limit, given by the plant technical specifications, conservatively bounds the maximum pressure allowed during those transients where the RPV can suffer brittle fracture (usually below 350°F). The Appendix G limit is a pressure versus temperature curve that is more restrictive at lower RPV temperatures and allows for higher pressures as the temperature approaches the upper bounding fracture temperature.
A Silent Revolution: From Sketching to Coding--A Case Study on Code-Based Design Tool Learning
ERIC Educational Resources Information Center
Xu, Song; Fan, Kuo-Kuang
2017-01-01
With the rise of information technology, Computer Aided Design activities are becoming more modern and more complex. But learning how to operate these new design tools has become the main problem facing each designer. This study was aimed at finding problems encountered during the code-based design tool learning period of…
Improved double-multiple streamtube model for the Darrieus-type vertical axis wind turbine
NASA Astrophysics Data System (ADS)
Berg, D. E.
Double-streamtube codes model the curved-blade (Darrieus-type) vertical axis wind turbine (VAWT) as a double actuator disk arrangement (one disk for each half of the rotor) and use conservation of momentum principles to determine the forces acting on the turbine blades and the turbine performance. Sandia National Laboratories developed a double-multiple streamtube model for the VAWT which incorporates the effects of the incident wind boundary layer, nonuniform velocity between the upwind and downwind sections of the rotor, dynamic stall effects, and local blade Reynolds number variations. The theory underlying this VAWT model is described, as well as the code capabilities. Code results are compared with experimental data from two VAWTs and with the results from another double-multiple streamtube code and a vortex filament code. The effects of neglecting dynamic stall and the horizontal wind velocity distribution are also illustrated.
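At the heart of any streamtube model is a momentum balance per actuator disk; as a toy single-disk illustration (the thrust model below is invented, and the real code applies this balance separately to the upwind and downwind halves of each streamtube):

```python
import numpy as np

def induction_factor(thrust_fn, tol=1e-10, relax=0.5):
    """Momentum balance for one actuator disk: C_T = 4 a (1 - a), so the
    axial induction factor satisfies a = C_T(a) / (4 (1 - a))."""
    a = 0.0
    for _ in range(1000):
        a_new = thrust_fn(a) / (4.0 * (1.0 - a))
        if abs(a_new - a) < tol:
            return a_new
        a += relax * (a_new - a)       # under-relax for stable convergence
    return a

# toy blade-element thrust coefficient falling with induction (invented)
a_up = induction_factor(lambda a: 0.8 * (1.0 - a) ** 2)
print(a_up)   # ~0.167; the DMS model repeats this for the downwind disk
```

In the double-multiple formulation the downwind disk sees the decelerated wake of the upwind one, which is precisely the nonuniform upwind/downwind velocity effect the Sandia model adds.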
SAFETY IN THE DESIGN OF SCIENCE LABORATORIES AND BUILDING CODES.
ERIC Educational Resources Information Center
HOROWITZ, HAROLD
THE DESIGN OF COLLEGE AND UNIVERSITY BUILDINGS USED FOR SCIENTIFIC RESEARCH AND EDUCATION IS DISCUSSED IN TERMS OF LABORATORY SAFETY AND BUILDING CODES AND REGULATIONS. MAJOR TOPIC AREAS ARE--(1) SAFETY RELATED DESIGN FEATURES OF SCIENCE LABORATORIES, (2) LABORATORY SAFETY AND BUILDING CODES, AND (3) EVIDENCE OF UNSAFE DESIGN. EXAMPLES EMPHASIZE…
HERO - A 3D general relativistic radiative post-processor for accretion discs around black holes
NASA Astrophysics Data System (ADS)
Zhu, Yucong; Narayan, Ramesh; Sadowski, Aleksander; Psaltis, Dimitrios
2015-08-01
HERO (Hybrid Evaluator for Radiative Objects) is a 3D general relativistic radiative transfer code which has been tailored to the problem of analysing radiation from simulations of relativistic accretion discs around black holes. HERO is designed to be used as a post-processor. Given some fixed fluid structure for the disc (i.e. density and velocity as a function of position from a hydrodynamic or magnetohydrodynamic simulation), the code obtains a self-consistent solution for the radiation field and for the gas temperatures using the condition of radiative equilibrium. The novel aspect of HERO is that it combines two techniques: (1) a short-characteristics (SC) solver that quickly converges to a self-consistent disc temperature and radiation field, with (2) a long-characteristics (LC) solver that provides a more accurate solution for the radiation near the photosphere and in the optically thin regions. By combining these two techniques, we gain both the computational speed of SC and the high accuracy of LC. We present tests of HERO on a range of 1D, 2D, and 3D problems in flat space and show that the results agree well with both analytical and benchmark solutions. We also test the ability of the code to handle relativistic problems in curved space. Finally, we discuss the important topic of ray defects, a major limitation of the SC method, and describe our strategy for minimizing the induced error.
Lateral Stability and Steady State Curving Performance of Unconventional Rail Trucks
NASA Astrophysics Data System (ADS)
Dukkipati, Rao V.; Narayanaswamy, Srinivasan
Conventional railway vehicle systems exhibit the hunting phenomenon, which increases component wear and imposes operating speed limits. There is also a conflict between dynamic stability and the ability of the vehicle to steer around curves. Alternatively, independently rotating wheels (IRW) in a wheelset eliminate hunting, but the wheelset's guidance capability is lost. A compromise solution is made possible by a modified design that exploits a lack of fore-and-aft symmetry in the suspension design. A comparative study of the steady state curving performance and dynamic stability of some unconventional truck designs is carried out. The effects of suspension and conicity are considered to evaluate the trade-off between dynamic stability and curving performance.
Optimization of Composite Structures with Curved Fiber Trajectories
NASA Astrophysics Data System (ADS)
Lemaire, Etienne; Zein, Samih; Bruyneel, Michael
2014-06-01
This paper studies the problem of optimizing composite shells manufactured using Automated Tape Layup (ATL) or Automated Fiber Placement (AFP) processes. The optimization procedure relies on a new approach to generating equidistant fiber trajectories, based on the Fast Marching Method. Starting with a (possibly curved) reference fiber direction defined on a (possibly curved) meshed surface, the new method allows determining the fiber orientations resulting from a uniform-thickness layup. The design variables are the parameters defining the position and the shape of the reference curve, which results in very few design variables. Thanks to this efficient parameterization, numerical applications to maximum-stiffness optimization are presented. The shape of the design space is discussed with regard to local and global optimal solutions.
Electron transport parameters in NF3
NASA Astrophysics Data System (ADS)
Lisovskiy, V.; Yegorenkov, V.; Ogloblina, P.; Booth, J.-P.; Martins, S.; Landry, K.; Douai, D.; Cassagne, V.
2014-03-01
We present electron transport parameters (the first Townsend coefficient, the dissociative attachment coefficient, the fraction of electron energy lost by collisions with NF3 molecules, the average and characteristic electron energy, the electron mobility and the drift velocity) in NF3 gas calculated from published elastic and inelastic electron-NF3 collision cross-sections using the BOLSIG+ code. Calculations were performed for the combined RB (Rescigno 1995 Phys. Rev. E 52 329, Boesten et al 1996 J. Phys. B: At. Mol. Opt. Phys. 29 5475) momentum-transfer cross-section, as well as for the JB (Joucoski and Bettega 2002 J. Phys. B: At. Mol. Opt. Phys. 35 783) momentum-transfer cross-section. In addition, we have measured the radio-frequency (rf) breakdown curves for various inter-electrode gaps and rf frequencies, and from these we have determined the electron drift velocity in NF3 from the location of the turning point in these curves. These drift velocity values are in satisfactory agreement with those calculated by the BOLSIG+ code employing the JB momentum-transfer cross-section.
3D Orbital Stability and Dynamic Environment of Asteroid 216 Kleopatra
NASA Astrophysics Data System (ADS)
Winter, Othon; Chanut, Thierry
A peculiar asteroid that might be the target of future space mission explorations is 216 Kleopatra, which has two small satellites and a distinctive dog-bone shape. Recent data processing showed that the dimensions of 216 Kleopatra derived from radar observations and from light curves can differ by as much as 25%. We rebuild the shape of the asteroid 216 Kleopatra from these new data and estimate certain physical features by using the polyhedral model method. In our computations we use a code that avoids singularities in the line integrals of a homogeneous, arbitrarily shaped polyhedral source. This code evaluates the gravitational potential function and its first- and second-order derivatives. We then find the location of the equilibrium points and the zero-velocity curves. Finally, taking the rotation of asteroid 216 Kleopatra into consideration, we analyze the stability against impact and the dynamics of 3D initially equatorial and polar orbits near the body through numerical simulations.
Multiband Photometric and Spectroscopic Analysis of HV Cnc
NASA Astrophysics Data System (ADS)
Gökay, G.; Gürol, B.; Derman, E.
2013-11-01
In this paper, radial velocity and VI- and JHKS- (Two Micron All Sky Survey) band photometric data of the detached system HV Cnc have been analyzed. The primary component of HV Cnc, which is a member of the M67 cluster, is suspected to be either a blue straggler or a turn-off star. The system is a single-lined spectroscopic binary and its light curve shows a total eclipse. Spectroscopic observations of the system revealed the third component, which contributes to the total light of the system. Light curve and radial velocity data have been analyzed using the Wilson-Devinney (W-D) code and JHKS filter definitions computed for the W-D code in this work. Our analysis shows that the masses and radii of the primary and secondary components are 1.31 M⊙, 0.52 M⊙, 1.87 R⊙, and 0.48 R⊙, respectively. All results are compared with previously published literature values and discussed.
NASA Technical Reports Server (NTRS)
Mital, Subodh K.; Murthy, Pappu L. N.; Chamis, Christos C.
1994-01-01
A computational simulation procedure is presented for nonlinear analyses which incorporates microstress redistribution due to progressive fracture in ceramic matrix composites. This procedure facilitates an accurate simulation of the stress-strain behavior of ceramic matrix composites up to failure. The nonlinearity in the material behavior is accounted for at the constituent (fiber/matrix/interphase) level. This computational procedure is part of recent upgrades to the CEMCAN (Ceramic Matrix Composite Analyzer) computer code. The fiber substructuring technique in CEMCAN is used to monitor damage initiation and progression as the load increases. The room-temperature tensile stress-strain curves for SiC-fiber-reinforced reaction-bonded silicon nitride (RBSN) matrix unidirectional and angle-ply laminates are simulated and compared with the experimentally observed stress-strain behavior. The agreement between the predicted and experimental stress-strain curves is good. Collectively, the results demonstrate that the CEMCAN computer code provides the user with an effective computational tool to simulate the behavior of ceramic matrix composites.
Fatigue Life Methodology for Tapered Hybrid Composite Flexbeams
NASA Technical Reports Server (NTRS)
Murri, Gretchen B.; Schaff, Jeffery R.
2006-01-01
Nonlinear-tapered flexbeam specimens from a full-size composite helicopter rotor hub flexbeam were tested under combined constant axial tension and cyclic bending loads. Two different graphite/glass hybrid configurations tested under cyclic loading failed by delamination in the tapered region. A 2-D finite element model was developed which closely approximated the flexbeam geometry, boundary conditions, and loading. The analysis results from two geometrically nonlinear finite element codes, ANSYS and ABAQUS, are presented and compared. Strain energy release rates (G) associated with simulated delamination growth in the flexbeams are presented from both codes. These results compare well with each other and suggest that the initial delamination growth from the tip of the ply-drop toward the thick region of the flexbeam is strongly mode II. The peak calculated G values were used with material characterization data to calculate fatigue life curves for comparison with test data. A curve relating maximum surface strain to number of loading cycles at delamination onset compared well with the test results.
Koopmeiners, Joseph S; Feng, Ziding
2011-01-01
The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves.
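As a small illustration of the empirical (nonparametric) ROC estimator these asymptotics concern, computed from simulated case-control biomarker values (the normal distributions below are arbitrary choices for the sketch):

```python
import numpy as np

def empirical_roc(cases, controls, fpr_grid):
    """Empirical ROC curve: TPR at the threshold whose empirical
    false-positive rate among controls equals each grid value."""
    thresholds = np.quantile(controls, 1.0 - fpr_grid)
    return (cases[:, None] > thresholds[None, :]).mean(axis=0)

rng = np.random.default_rng(4)
cases = rng.normal(1.0, 1.0, 200)       # diseased biomarker values (arbitrary)
controls = rng.normal(0.0, 1.0, 300)    # non-diseased values (arbitrary)

t = np.linspace(0.01, 0.99, 99)
roc = empirical_roc(cases, controls, t)
print(np.trapz(roc, t))                 # approximate empirical AUC
```

In a group sequential design this curve is re-estimated at each interim analysis, and the Kiefer-process limit gives the joint distribution of those interim estimates.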
A Proposal for the Maximum KIC for Use in ASME Code Flaw and Fracture Toughness Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirk, Mark; Stevens, Gary; Erickson, Marjorie A
2011-01-01
Nonmandatory Appendices A [1] and G [2] of Section XI of the ASME Code use the KIc curve (indexed to the material reference transition temperature, RTNDT) in reactor pressure vessel (RPV) flaw evaluations, and for the purpose of establishing RPV pressure-temperature (P-T) limits. Neither of these appendices places an upper limit on the KIc value that may be used in these assessments. Over the years, it has often been suggested by some of the members of the ASME Section XI Code committees that are responsible for maintaining Appendices A and G that there is a practical upper limit of 200 ksi√in (220 MPa√m) [4]. This upper limit is not well recognized by all users of the ASME Code, is not explicitly documented within the Code itself, and the one source known to the authors where it is defended [4] relies on data that is either in error or is less than 220 MPa√m. However, as part of the NRC/industry pressurized thermal shock (PTS) re-evaluation effort, empirical models were developed that propose common temperature dependencies for all ferritic steels operating on the upper shelf. These models relate the fracture toughness properties in the transition regime to those on the upper shelf and, combined with data for a wide variety of RPV steels and welds on which they are based, suggest that the practical upper limit of 220 MPa√m exceeds the upper-shelf fracture toughness of most RPV steels by a considerable amount, especially for irradiated steels. In this paper, available models and data are used to propose upper-bound limits of applicability on the KIc curve for use in ASME Code, Section XI, Nonmandatory Appendices A and G evaluations that are consistent with available data for RPV steels.
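For background, the lower-bound KIc curve referenced here is commonly written as an exponential in the temperature indexed to RTNDT. The form below is stated from general familiarity with ASME Section XI rather than from this abstract:

```latex
% ASME Section XI lower-bound static fracture toughness curve (assumed
% standard form): K_Ic in ksi*sqrt(in), T and RT_NDT in degrees Fahrenheit.
K_{Ic} = 33.2 + 20.734\,\exp\!\left[0.02\,(T - RT_{NDT})\right]
```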
Orbital alignment and star-spot properties in the WASP-52 planetary system
NASA Astrophysics Data System (ADS)
Mancini, L.; Southworth, J.; Raia, G.; Tregloan-Reed, J.; Mollière, P.; Bozza, V.; Bretton, M.; Bruni, I.; Ciceri, S.; D'Ago, G.; Dominik, M.; Hinse, T. C.; Hundertmark, M.; Jørgensen, U. G.; Korhonen, H.; Rabus, M.; Rahvar, S.; Starkey, D.; Calchi Novati, S.; Figuera Jaimes, R.; Henning, Th.; Juncher, D.; Haugbølle, T.; Kains, N.; Popovas, A.; Schmidt, R. W.; Skottfelt, J.; Snodgrass, C.; Surdej, J.; Wertz, O.
2017-02-01
We report 13 high-precision light curves of eight transits of the exoplanet WASP-52 b, obtained by using four medium-class telescopes, through different filters, and adopting the defocussing technique. One transit was recorded simultaneously from two different observatories and another one from the same site but with two different instruments, including a multiband camera. Anomalies were clearly detected in five light curves and modelled as star-spots occulted by the planet during the transit events. We fitted the clean light curves with the JKTEBOP code, and those with the anomalies with the PRISM+GEMC codes in order to simultaneously model the photometric parameters of the transits and the position, size and contrast of each star-spot. We used these new light curves and some from the literature to revise the physical properties of the WASP-52 system. Star-spots with similar characteristics were detected in four transits over a period of 43 d. In the hypothesis that we are dealing with the same star-spot, periodically occulted by the transiting planet, we estimated the projected orbital obliquity of WASP-52 b to be λ = 3.8° ± 8.4°. We also determined the true orbital obliquity, ψ = 20° ± 50°, which is, although very uncertain, the first measurement of ψ purely from star-spot crossings. We finally assembled an optical transmission spectrum of the planet and searched for variations of its radius as a function of wavelength. Our analysis suggests a flat transmission spectrum within the experimental uncertainties.
Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B
2016-08-08
A mutual information inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which results in a better-performing scheme for the same SNR values. The matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM have even better performance than LDPC-coded QPSK.
NASA Astrophysics Data System (ADS)
Guarnieri, Vittorio; Francini, Franco
1997-12-01
The latest generation of digital printers is usually characterized by a spatial resolution high enough to allow the designer to realize a binary CGH directly on a transparent film, avoiding photographic reduction techniques. These devices are able to produce slides or offset prints. Furthermore, services supplied by commercial printing companies provide an inexpensive method to rapidly verify the validity of a design by means of a test-and-trial process. Notably, this low-cost approach appears to be suitable for a didactical environment. On the basis of these considerations, a set of software tools able to design CGHs has been developed. The guidelines inspiring the work have been the following: (1) a ray-tracing approach, considering the object to be reproduced as a source of spherical waves; (2) optimization and speed-up of the algorithms used, in order to produce a portable code, runnable on several hardware platforms. In this paper, calculation methods to obtain some fundamental geometric functions (points, lines, curves) are described. Furthermore, by the juxtaposition of these primitive functions it is possible to produce the holograms of more complex objects. Many examples of generated CGHs are presented.
ISS mapped from ICD-9-CM by a novel freeware versus traditional coding: a comparative study.
Di Bartolomeo, Stefano; Tillati, Silvia; Valent, Francesca; Zanier, Loris; Barbone, Fabio
2010-03-31
Injury severity measures are based either on the Abbreviated Injury Scale (AIS) or the International Classification of Diseases (ICD). The latter is more convenient because it is routinely collected by clinicians for administrative reasons. To exploit this advantage, a proprietary program that maps ICD-9-CM into AIS codes has been used for many years. Recently, a program called ICDPIC Trauma, developed in the USA, has become available free of charge for registered STATA users. We compared the ICDPIC-calculated Injury Severity Score (ISS) with the one from direct, prospective AIS coding by expert trauma registrars (dAIS). The administrative records of the 289 major trauma cases admitted to the hospital of Udine, Italy, from 1 July 2004 to 30 June 2005 and enrolled in the Italian Trauma Registry were retrieved and ICDPIC-ISS was calculated. The agreement between ICDPIC-ISS and dAIS-ISS was assessed by Cohen's kappa and Bland-Altman charts. We then plotted the differences between the 2 scores against the ratio between the number of traumatic ICD-9-CM codes and the number of dAIS codes for each patient (DIARATIO). We also compared the absolute differences in ISS among 3 groups identified by DIARATIO. The discriminative power for survival of both scores was finally calculated by ROC curves. The scores matched in 33/272 patients (12.1%, k = 0.07) and, when categorized, in 80/272 (22.4%, k = 0.09). The Bland-Altman average difference was 6.36 (limits: -22.0 to +34.7). An ICDPIC-ISS of 75 was particularly unreliable. The differences increased (p < 0.01) as DIARATIO increased, indicating incomplete administrative coding as a cause of the differences. The area under the curve of ICDPIC-ISS was lower (0.63 vs. 0.76, p = 0.02). Despite its great potential convenience, ICDPIC-ISS agreed poorly with its conventionally calculated counterpart. Its discriminative power for survival was also significantly lower. Incomplete ICD-9-CM coding was a main cause of these findings. Because this quality of coding is standard in Italy and probably in other European countries, its effects on the performance of other trauma scores based on ICD administrative data deserve further research. Mapping ICD-9-CM code 862.8 to an AIS of 6 is an overestimation.
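For readers unfamiliar with the score itself, a minimal sketch of the standard ISS computation from AIS severities is shown below (a hypothetical helper, not ICDPIC): the ISS is the sum of squares of the highest AIS score in each of the three most severely injured of the six ISS body regions, and any AIS of 6 sets the ISS to 75.

```python
# Standard ISS computation from AIS severities; a sketch, not the ICDPIC tool.
def injury_severity_score(ais_by_region):
    """ais_by_region: dict mapping ISS body region -> list of AIS severities (1-6)."""
    worst = sorted((max(s) for s in ais_by_region.values() if s), reverse=True)
    if any(a == 6 for a in worst):
        return 75                      # any unsurvivable injury caps ISS at 75
    return sum(a * a for a in worst[:3])

# Example: head AIS 4, chest AIS 3, extremity AIS 2 -> 16 + 9 + 4 = 29
print(injury_severity_score({"head": [4, 2], "chest": [3], "extremity": [2]}))
```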
Data Sciences Summer Institute Topology Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, Seth
DSSI_TOPOPT is a 2D topology optimization code that designs stiff structures made of a single linear elastic material and void space. The code generates a finite element mesh of a rectangular design domain on which the user specifies displacement and load boundary conditions. The code iteratively designs a structure that minimizes the compliance (maximizes the stiffness) of the structure under the given loading, subject to an upper bound on the amount of material used. Depending on user options, the code can evaluate the performance of a user-designed structure, or create a design from scratch. Output includes the finite element mesh, design, and visualizations of the design.
Minimizing Yagi-Uda radiosonde receiver antenna size using Minkowski curve fractal model
NASA Astrophysics Data System (ADS)
Sani, Arman; Suherman
2018-03-01
This paper discusses a Yagi-Uda antenna design for a radiosonde earth station receiver. The design was performed using the Minkowski curve fractal model to reduce the physical dimensions. The antenna design should fulfil the following requirements: operate at a frequency of 433 MHz, match the 50 Ohm characteristic impedance of the radiosonde receiver, and provide a gain higher than 10 dBi, a VSWR smaller than 2, and a bandwidth higher than 10 MHz. Antenna design and evaluation were conducted using the MMANA-GAL simulator. The evaluation shows that the Yagi-Uda antenna designed using the Minkowski curve model successfully reduces the antenna size by up to 9.41% and the number of elements by about 33%.
Evaluation of horizontal curve design
DOT National Transportation Integrated Search
1980-08-01
This report documents an initial evaluation of horizontal curve design criteria which involved two phases: an observational study and an analytical evaluation. Three classes of vehicles (automobiles, school buses and tractor semi-trailers) and three ...
Design of air-gapped magnetic-core inductors for superimposed direct and alternating currents
NASA Technical Reports Server (NTRS)
Ohri, A. K.; Wilson, T. G.; Owen, H. A., Jr.
1976-01-01
Using data on standard magnetic-material properties and standard core sizes for air-gap-type cores, an algorithm designed for a computer solution is developed which optimally determines the air-gap length and locates the quiescent point on the normal magnetization curve so as to yield an inductor design with the minimum number of turns for a given ac voltage and frequency and with a given dc bias current superimposed in the same winding. Magnetic-material data used in the design are the normal magnetization curve and a family of incremental permeability curves. A second procedure, which requires a simpler set of calculations, starts from an assigned quiescent point on the normal magnetization curve and first screens candidate core sizes for suitability, then determines the required turns and air-gap length.
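As a rough illustration of the second, simpler screening procedure described above, the sketch below (not the authors' algorithm) uses elementary magnetic-circuit relations with a constant incremental permeability at the assigned quiescent point; using that permeability for the dc operating point as well is a simplification, and all numerical core data are hypothetical.

```python
# Gapped-inductor screening sketch: pick the turns N and gap length so the dc
# bias lands on the assigned quiescent flux density, then check the ac
# (incremental) inductance. Simple magnetic-circuit formulas, hypothetical data.
import math

MU0 = 4e-7 * math.pi

def design_turns_and_gap(L_req, I_dc, B_q, A_c, l_c, mu_inc):
    """L_req [H], dc bias I_dc [A], quiescent flux density B_q [T],
    core area A_c [m^2], core path length l_c [m], incremental rel. perm."""
    for N in range(1, 2000):
        # dc circuit: B_q = mu0 * N * I_dc / (l_g + l_c / mu_inc)  -> solve l_g
        l_g = MU0 * N * I_dc / B_q - l_c / mu_inc
        if l_g <= 0:
            continue  # this N cannot reach B_q with a physical gap
        # incremental (ac) inductance of the gapped core
        L = MU0 * N**2 * A_c / (l_g + l_c / mu_inc)
        if L >= L_req:
            return N, l_g  # minimum turns meeting the inductance requirement
    return None

print(design_turns_and_gap(L_req=1e-3, I_dc=2.0, B_q=1.0,
                           A_c=1e-4, l_c=0.08, mu_inc=2000))
```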
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumann, K; Weber, U; Simeonov, Y
2015-06-15
Purpose: The aim of this study was to analyze the modulating, broadening effect on the Bragg peak due to heterogeneous geometries like multi-wire chambers in the beam path of a particle therapy beam line. The effect was described by a mathematical model which was implemented in the Monte Carlo code FLUKA via user routines, in order to reduce the computation time of the simulations. Methods: The depth-dose curve of 80 MeV/u C12 ions in a water phantom was calculated using the Monte Carlo code FLUKA (reference curve). The modulating effect on this dose distribution behind eleven mesh-like foils (periodicity ∼80 microns), occurring in a typical set of multi-wire and dose chambers, was mathematically described by optimizing a normal distribution so that the reference curve convolved with this distribution equals the modulated dose curve. This distribution describes a displacement in water and was transformed into a probability distribution of the thickness of the eleven foils using the water-equivalent thickness of the foil material. From this distribution, the thickness distribution of a single foil was determined inversely. In FLUKA the heterogeneous foils were replaced by homogeneous foils and a user routine was programmed that varies the thickness of the homogeneous foils for each simulated particle using this distribution. Results: Using the mathematical model and user routine in FLUKA, the broadening effect could be reproduced exactly when replacing the heterogeneous foils by homogeneous ones. The computation time was reduced by 90 percent. Conclusion: In this study the broadening effect on the Bragg peak due to heterogeneous structures was analyzed, described by a mathematical model, and implemented in FLUKA via user routines. Applying these routines, the computing time was reduced by 90 percent. The developed tool can be used for any heterogeneous structure with dimensions of microns to millimeters, in principle even for organic materials like lung tissue.
Evaluation of the DRAGON code for VHTR design analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taiwo, T. A.; Kim, T. K.; Nuclear Engineering Division
2006-01-12
This letter report summarizes three activities that were undertaken in FY 2005 to gather information on the DRAGON code and to perform limited evaluations of the code performance when used in the analysis of the Very High Temperature Reactor (VHTR) designs. These activities include: (1) Use of the code to model the fuel elements of the helium-cooled and liquid-salt-cooled VHTR designs. Results were compared to those from another deterministic lattice code (WIMS8) and a Monte Carlo code (MCNP). (2) The preliminary assessment of the nuclear data library currently used with the code and libraries that have been provided by the IAEA WIMS-D4 Library Update Project (WLUP). (3) A DRAGON workshop held to discuss the code capabilities for modeling the VHTR.
Building a Better Campus: An Update on Building Codes.
ERIC Educational Resources Information Center
Madden, Michael J.
2002-01-01
Discusses the implications for higher education institutions in terms of facility planning, design, construction, and renovation of the move from regionally-developed model-building codes to two international sets of codes. Also addresses the new performance-based design option within the codes. (EV)
SU-G-206-17: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLorenzo, M; Rutel, I; Yang, K
2016-06-15
Purpose: Computed tomography (CT) exam rooms are shielded more quickly and accurately compared to manual calculations using RadShield, a semi-automated diagnostic shielding software package. Last year, we presented RadShield’s approach to shielding radiographic and fluoroscopic rooms calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also by overlaying vendor provided isodose curves onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield’s isodose curve method employs radial scanning to extract data point sets to fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield’s semi-automated shielding recommendations were compared against a board certified medical physicist’s design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist’s manual calculation and RadShield’s semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist’s selection of calculation points for recommending lead thickness was roughly the same as those found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.
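The power-law fit K(r) = ar^b mentioned above is straightforward to reproduce; a minimal sketch (not RadShield itself, with hypothetical data points standing in for values extracted from digitized isodose curves) is:

```python
# Fit scattered-kerma data along a radial line to K(r) = a * r**b.
import numpy as np

r = np.array([0.5, 1.0, 1.5, 2.0, 3.0])      # distance from isocenter [m]
K = np.array([8.0, 2.1, 0.95, 0.55, 0.24])   # air kerma per exam [uGy]

# Linearize: log K = log a + b log r, then solve by least squares.
b, log_a = np.polyfit(np.log(r), np.log(K), 1)
a = np.exp(log_a)
print(f"K(r) ~= {a:.2f} * r^{b:.2f}")
print("K(2.5 m) =", a * 2.5**b)              # evaluate at any barrier distance
```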
Concentric Tube Robot Design and Optimization Based on Task and Anatomical Constraints
Bergeles, Christos; Gosline, Andrew H.; Vasilyev, Nikolay V.; Codd, Patrick J.; del Nido, Pedro J.; Dupont, Pierre E.
2015-01-01
Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of pre-curved superelastic tubes and are capable of assuming complex 3D curves. The family of 3D curves that the robot can assume depends on the number, curvatures, lengths and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally-compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery. PMID:26380575
Simpkin, D J
1989-02-01
A Monte Carlo calculation has been performed to determine the transmission of broad constant-potential x-ray beams through Pb, concrete, gypsum wallboard, steel and plate glass. The EGS4 code system was used with a simple broad-beam geometric model to generate exposure transmission curves for published 70, 100, 120 and 140-kVcp x-ray spectra. These curves are compared to measured three-phase generated x-ray transmission data in the literature and found to be reasonable. For calculation ease the data are fit to an equation previously shown to describe such curves quite well. These calculated transmission data are then used to create three-phase shielding tables for Pb and concrete, as well as other materials not available in Report No. 49 of the NCRP.
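The fitting equation is not reproduced in the abstract; in the broad-beam shielding literature the standard choice is the three-parameter model of Archer et al., and the sketch below assumes that form with hypothetical fit parameters rather than the published ones.

```python
# Broad-beam transmission via the three-parameter Archer model (assumed form):
# B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]^(-1/gamma).
# The alpha/beta/gamma values below are hypothetical, not the paper's fits.
import math

def transmission(x, alpha, beta, gamma):
    """Transmission through barrier thickness x (same length unit as alpha, beta)."""
    return ((1 + beta / alpha) * math.exp(alpha * gamma * x)
            - beta / alpha) ** (-1.0 / gamma)

# Example: transmission of a 100-kVcp beam through 1.5 mm of lead
print(transmission(1.5, alpha=2.5, beta=15.3, gamma=0.76))
```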
An analytical approach to obtaining JWL parameters from cylinder tests
NASA Astrophysics Data System (ADS)
Sutton, B. D.; Ferguson, J. W.; Hodgson, A. N.
2017-01-01
An analytical method for determining parameters for the JWL Equation of State from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated pressure-relative volume (p-Vr) curves agree with those produced by hydro-code modelling. The average calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-relative volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-Vr curve. The calculated energy is within 1.6% of that predicted by the model.
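For readers unfamiliar with the JWL form, the sketch below evaluates a JWL principal-isentrope p-Vr curve and integrates under it out to 7 relative volumes, as described above; the parameter values are hypothetical placeholders, not the EDC37 fits from this paper.

```python
# JWL principal isentrope: p(V) = A exp(-R1 V) + B exp(-R2 V) + C / V^(omega+1),
# with V the relative volume. Parameters below are hypothetical placeholders.
import numpy as np

def jwl_isentrope(V, A, B, C, R1, R2, omega):
    """Pressure [GPa] on the principal isentrope at relative volume V."""
    return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C / V**(omega + 1.0)

params = dict(A=1000.0, B=40.0, C=1.2, R1=4.4, R2=1.2, omega=0.35)
V = np.linspace(1.0, 7.0, 600)
p = jwl_isentrope(V, **params)

# Expansion work per unit initial volume out to 7 relative volumes
# (trapezoidal rule; GPa integrated over relative volume gives GJ/m^3).
energy = float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(V)))
print(f"p(V=1) = {p[0]:.1f} GPa; energy to V=7: {energy:.2f} GJ/m^3")
```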
NASA Astrophysics Data System (ADS)
Fukushima, Takuma; To, Sho; Asano, Katsuaki; Fujita, Yutaka
2017-08-01
We numerically simulate the gamma-ray burst (GRB) afterglow emission with a one-zone time-dependent code. The temporal evolutions of the decelerating shocked shell and energy distributions of electrons and photons are consistently calculated. The photon spectrum and light curves for an observer are obtained taking into account the relativistic propagation of the shocked shell and the curvature of the emission surface. We find that the onset time of the afterglow is significantly earlier than the previous analytical estimate. The analytical formulae of the shock propagation and light curve for the radiative case are also different from our results. Our results show that even if the emission mechanism is switching from synchrotron to synchrotron self-Compton, the gamma-ray light curves can be a smooth power law, which agrees with the observed light curve and the late detection of a 32 GeV photon in GRB 130427A. The uncertainty of the model parameters obtained with the analytical formula is discussed, especially in connection with the closure relation between spectral index and decay index.
Projectile Shape Effects Analysis for Space Debris Impact
NASA Astrophysics Data System (ADS)
Shiraki, Kuniaki; Yamamoto, Tetsuya; Kamiya, Takeshi
2002-01-01
The Japanese Experiment Module (JEM) has a manned pressurized module used as a research laboratory on orbit and is planned to be attached to the International Space Station (ISS). A protection system against micrometeoroids and orbital debris (MM/OD) is very important for crew safety aboard the ISS. We have to design a module with shields attached to the outside of the pressurized wall so that JEM can be protected when debris of diameter less than 20 mm impacts the JEM wall. In this case, the ISS design requirement for the space debris protection system is specified as the Probability of No Penetration (PNP). The PNP allocation for the JEM is 0.9738 for ten years, which is reallocated as 0.9814 for the Pressurized Module (PM) and 0.9922 for the Experiment Logistics Module-Pressurized Section (ELM-PS). The PNP is calculated with the Bumper code provided by NASA, with the following data inputs to the calculation: (1) the JEM structural model; (2) the Ballistic Limit Curve (BLC) of the shields and pressure wall; (3) environmental conditions (analysis type, debris distribution, debris model, debris density, and solar activity). Two shield types are used. One is a single aluminum plate bumper (1.27 mm thickness). The other is a Stuffed Whipple shield with its second bumper composed of an aluminum mesh, three layers of Nextel AF62 ceramic fabric, and four layers of Kevlar 710 fabric, with the thermal isolation material Multilayer Insulation (MLI) at the bottom. The second bumper of the Stuffed Whipple shield is located midway between the first bumper and the 4.8 mm-thick pressurized wall. The shields have already been verified with Two-Stage Light Gas Gun (TSLGG) tests and hydro-code simulation. The remaining subject is the verification of the JEM debris protection shields for velocities ranging from 7 to 15 km/sec. We conducted Conical Shaped Charge (CSC) tests, which enable hypervelocity impact tests for the debris velocity range above 10 km/sec, as well as hydro-code simulation. Because of the jet generation mechanism, it is necessary to analyze and compensate the results for a solid aluminum sphere, which is the design requirement.
The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics
NASA Astrophysics Data System (ADS)
Ganander, Hans
2003-10-01
For many reasons the size of wind turbines on the rapidly growing wind energy market is increasing. Relations between the aeroelastic properties of these new large turbines change. Modifications of turbine designs and control concepts are also influenced by growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues, the code and the design optimization. This technique can be used for rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen, and the interest in design optimization is growing.
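The derive-then-generate workflow can be illustrated with a toy example. The sketch below uses SymPy in place of Mathematica and a hypothetical single-degree-of-freedom rotor model, so it shows the idea rather than VIDYN itself.

```python
# Derive an equation of motion from a Lagrangian and emit Fortran source;
# SymPy stands in for Mathematica, and the 1-DOF model is hypothetical.
import sympy as sp

t = sp.symbols('t')
J, k, c, Q = sp.symbols('J k c Q')     # inertia, stiffness, damping, torque
theta = sp.Function('theta')(t)

# Lagrangian for one rotational degree of freedom
L = sp.Rational(1, 2) * J * theta.diff(t)**2 - sp.Rational(1, 2) * k * theta**2
# Lagrange equation: d/dt(dL/dq') - dL/dq = generalized force
eom = sp.Eq(sp.diff(L.diff(theta.diff(t)), t) - L.diff(theta),
            Q - c * theta.diff(t))

# Solve for the acceleration and emit a Fortran assignment statement.
acc = sp.solve(eom, theta.diff(t, 2))[0]
th, thd = sp.symbols('th thd')
acc = acc.subs(theta.diff(t), thd).subs(theta, th)
print(sp.fcode(acc, assign_to='thetaddot', source_format='free'))
```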
System Design Description for the TMAD Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finfrock, S.H.
This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System.
CombiROC: an interactive web tool for selecting accurate marker combinations of omics data.
Mazzara, Saveria; Rossi, Riccardo L; Grifantini, Renata; Donizetti, Simone; Abrignani, Sergio; Bombaci, Mauro
2017-03-30
Diagnostic accuracy can be improved considerably by combining multiple markers, whose performance in identifying diseased subjects is usually assessed via receiver operating characteristic (ROC) curves. The selection of multimarker signatures is a complicated process that requires integration of data signatures with sophisticated statistical methods. We developed a user-friendly tool, called CombiROC, to help researchers accurately determine optimal marker combinations from diverse omics methods. With CombiROC, data from different domains, such as proteomics and transcriptomics, can be analyzed using sensitivity/specificity filters: the number of candidate marker panels arising from combinatorial analysis is easily optimized, bypassing limitations imposed by the nature of different experimental approaches. Leaving the user full control of the initial selection stringency, CombiROC computes sensitivity and specificity for all marker combinations, the performance of the best combinations, and ROC curves for automatic comparison, all visualized in a graphic interface. CombiROC was designed without hard-coded thresholds, allowing a custom fit to each specific dataset: this dramatically reduces the computational burden and lowers the false negative rates given by fixed thresholds. The application was validated with published data, confirming the marker combinations originally described or even finding new ones. CombiROC is a novel tool for the scientific community freely available at http://CombiROC.eu.
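A minimal sketch of the underlying combinatorial filtering idea (not CombiROC itself) is shown below on simulated data: each marker combination is scored by sensitivity and specificity, and combinations passing the filter are reported.

```python
# Score every marker combination by sensitivity/specificity; a subject is
# called positive when at least `min_hits` markers exceed their thresholds.
# Data, thresholds and filter levels are hypothetical.
import itertools
import numpy as np

rng = np.random.default_rng(1)
X_case = rng.normal(1.0, 1.0, size=(100, 4))   # 100 diseased subjects, 4 markers
X_ctrl = rng.normal(0.0, 1.0, size=(100, 4))   # 100 healthy subjects
thresholds = np.full(4, 0.5)                   # per-marker positivity cutoffs

def sens_spec(markers, min_hits=1):
    m = list(markers)
    hits_case = (X_case[:, m] > thresholds[m]).sum(axis=1)
    hits_ctrl = (X_ctrl[:, m] > thresholds[m]).sum(axis=1)
    return (hits_case >= min_hits).mean(), (hits_ctrl < min_hits).mean()

for r in range(1, 5):
    for combo in itertools.combinations(range(4), r):
        se, sp = sens_spec(combo)
        if se >= 0.65 and sp >= 0.65:          # sensitivity/specificity filter
            print(combo, f"se={se:.2f} sp={sp:.2f}")
```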
Trellis coding techniques for mobile communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.; Jedrey, T.
1988-01-01
A criterion for designing optimum trellis codes to be used over fading channels is given. A technique is shown for reducing certain multiple trellis codes, optimally designed for the fading channel, to conventional (i.e., multiplicity one) trellis codes. The computational cutoff rate R0 is evaluated for MPSK transmitted over fading channels. Examples of trellis codes optimally designed for the Rayleigh fading channel are given and compared with respect to R0. Two types of modulation/demodulation techniques are considered, namely coherent (using pilot tone-aided carrier recovery) and differentially coherent with Doppler frequency correction. Simulation results are given for end-to-end performance of two trellis-coded systems.
NASA Astrophysics Data System (ADS)
Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford
2018-04-01
In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.
Characteristics code for shock initiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Partom, Y.
1986-10-01
We developed SHIN, a characteristics code for shock initiation studies. We describe in detail the equations of state, reaction model, rate equations, and numerical difference equations that SHIN incorporates. SHIN uses the previously developed surface-burning reaction model, which better represents the shock initiation process in TATB than do bulk reaction models. A large number of computed simulations prove the code is a reliable and efficient tool for shock initiation studies. A parametric study shows the effect on build-up and run distance to detonation of (1) type of boundary condition, (2) burning velocity curve, (3) shock duration, (4) rise time in ramp loading, (5) initial density (or porosity) of the explosive, (6) initial temperature, and (7) grain size. 29 refs., 65 figs.
Numerical solution of Space Shuttle Orbiter flow field including real gas effects
NASA Technical Reports Server (NTRS)
Prabhu, D. K.; Tannehill, J. C.
1984-01-01
The hypersonic, laminar flow around the Space Shuttle Orbiter has been computed for both an ideal gas (gamma = 1.2) and equilibrium air using a real-gas, parabolized Navier-Stokes code. This code employs a generalized coordinate transformation; hence, it places no restrictions on the orientation of the solution surfaces. The initial solution in the nose region was computed using a 3-D, real-gas, time-dependent Navier-Stokes code. The thermodynamic and transport properties of equilibrium air were obtained from either approximate curve fits or a table look-up procedure. Numerical results are presented for flight conditions corresponding to the STS-3 trajectory. The computed surface pressures and convective heating rates are compared with data from the STS-3 flight.
Centrifugal and Axial Pump Design and Off-Design Performance Prediction
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1995-01-01
A meanline pump-flow modeling method has been developed to provide a fast capability for modeling pumps of cryogenic rocket engines. Based on this method, a meanline pump-flow code PUMPA was written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The design-point rotor efficiency and slip factors are obtained from empirical correlations to rotor-specific speed and geometry. The pump code can model axial, inducer, mixed-flow, and centrifugal pumps and can model multistage pumps in series. The rapid input setup and computer run time for this meanline pump flow code make it an effective analysis and conceptual design tool. The map-generation capabilities of the code provide the information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of PUMPA permit the user to do parametric design space exploration of candidate pump configurations and to provide head-flow maps for engine system evaluation.
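As a flavor of what a meanline method computes, the sketch below estimates the delivered head of a centrifugal rotor from the Euler work using the Wiesner slip-factor correlation; the geometry, efficiency and operating numbers are hypothetical, and this is not the PUMPA code.

```python
# Meanline Euler-head estimate for a centrifugal pump rotor with Wiesner slip.
# All numerical inputs are hypothetical placeholders.
import math

def meanline_head(rpm, r2, beta2_deg, Z, Q, b2, eta_h):
    """rpm; tip radius r2 [m]; blade exit angle from tangent beta2 [deg];
    blade count Z; flow Q [m^3/s]; exit width b2 [m]; hydraulic efficiency."""
    g = 9.81
    u2 = 2.0 * math.pi * rpm / 60.0 * r2                  # tip speed
    sigma = 1.0 - math.sqrt(math.sin(math.radians(beta2_deg))) / Z**0.7  # Wiesner
    cm2 = Q / (2.0 * math.pi * r2 * b2)                   # meridional velocity
    cu2 = sigma * u2 - cm2 / math.tan(math.radians(beta2_deg))  # exit swirl
    return eta_h * u2 * cu2 / g                           # delivered head [m]

print(meanline_head(rpm=10000, r2=0.06, beta2_deg=25, Z=6,
                    Q=0.02, b2=0.006, eta_h=0.85))
```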
Study of curved glass photovoltaic module and module electrical isolation design requirements
NASA Technical Reports Server (NTRS)
1980-01-01
The design of a 1.2 by 2.4 m curved glass superstrate and support clip assembly is presented, along with the results of finite element computer analysis and a glass industry survey conducted to assess the technical and economic feasibility of the concept. Installed costs for four curved glass module array configurations are estimated and compared with cost previously reported for comparable flat glass module configurations. Electrical properties of candidate module encapsulation systems are evaluated along with present industry practice for the design and testing of electrical insulation systems. Electric design requirements for module encapsulation systems are also discussed.
Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.
Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting
2012-09-01
In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploring the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little computation increase, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
Side information in coded aperture compressive spectral imaging
NASA Astrophysics Data System (ADS)
Galvis, Laura; Arguello, Henry; Lau, Daniel; Arce, Gonzalo R.
2017-02-01
Coded aperture compressive spectral imagers sense a three-dimensional cube by using two-dimensional projections of the coded and spectrally dispersed source. These imaging systems often rely on FPA detectors, SLMs, digital micromirror devices (DMDs), and dispersive elements. The use of DMDs to implement the coded apertures facilitates the capture of multiple projections, each admitting a different coded aperture pattern. The DMD makes it possible not only to collect a sufficient number of measurements for spectrally rich or spatially detailed scenes, but also to design the spatial structure of the coded apertures to maximize the information content of the compressive measurements. Although sparsity is the only signal characteristic usually assumed for reconstruction in compressive sensing, other forms of prior information such as side information have been included as a way to improve the quality of the reconstructions. This paper presents the coded aperture design in a compressive spectral imager with side information in the form of RGB images of the scene. The use of RGB images as side information in the compressive sensing architecture has two main advantages: the RGB is used not only to improve the reconstruction quality but also to optimally design the coded apertures for the sensing process. The coded aperture design is based on the RGB scene, and thus the coded aperture structure exploits key features such as scene edges. Real reconstructions of noisy compressed measurements demonstrate the benefit of the designed coded apertures in addition to the improvement in reconstruction quality obtained by the use of side information.
Volume accumulator design analysis computer codes
NASA Technical Reports Server (NTRS)
Whitaker, W. D.; Shimazaki, T. T.
1973-01-01
The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.
Comparison of optimization algorithms for the slow shot phase in HPDC
NASA Astrophysics Data System (ADS)
Frings, Markus; Berkels, Benjamin; Behr, Marek; Elgeti, Stefanie
2018-05-01
High-pressure die casting (HPDC) is a popular manufacturing process for aluminum processing. The slow shot phase in HPDC is the first phase of this process. During this phase, the molten metal is pushed towards the cavity under moderate plunger movement. The so-called shot curve describes this plunger movement. A good design of the shot curve is important to produce high-quality cast parts. Three partially competing process goals characterize the slow shot phase: (1) reducing air entrapment, (2) avoiding temperature loss, and (3) minimizing oxide caused by the air-aluminum contact. Due to the rough process conditions with high pressure and temperature, it is hard to design the shot curve experimentally. There exist a few design rules that are based on theoretical considerations. Nevertheless, the quality of the shot curve design still depends on the experience of the machine operator. To improve the shot curve, it is natural to use numerical optimization. This work compares different optimization strategies for the slow shot phase optimization. The aim is to find the best optimization approach on a simple test problem.
Multi-scale modeling of irradiation effects in spallation neutron source materials
NASA Astrophysics Data System (ADS)
Yoshiie, T.; Ito, T.; Iwase, H.; Kaneko, Y.; Kawai, M.; Kishida, I.; Kunieda, S.; Sato, K.; Shimakawa, S.; Shimizu, F.; Hashimoto, S.; Hashimoto, N.; Fukahori, T.; Watanabe, Y.; Xu, Q.; Ishino, S.
2011-07-01
Changes in the mechanical properties of Ni under irradiation by 3 GeV protons were estimated by multi-scale modeling. The code consisted of four parts. The first part was based on the Particle and Heavy-Ion Transport code System (PHITS) code for nuclear reactions, and modeled the interactions between high-energy protons and nuclei in the target. The second part covered atomic collisions by particles without nuclear reactions. Because the energy of the particles was high, subcascade analysis was employed. The direct formation of clusters and the number of mobile defects were estimated using molecular dynamics (MD) and kinetic Monte Carlo (kMC) methods in each subcascade. The third part considered damage structural evolution estimated by reaction kinetic analysis. The fourth part involved the estimation of mechanical property changes using three-dimensional discrete dislocation dynamics (DDD). Using the above four-part code, stress-strain curves for high-energy proton irradiated Ni were obtained.
76 FR 11432 - Coding of Design Marks in Registrations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... on the old paper search designations. The USPTO will continue to code all pending applications that... system, the Trademark Search Facility Classification Code Index (``TC Index''), stems from its... infrequent use of the TC Index codes in searches by the public; and its costliness to maintain, especially in...
NASA Astrophysics Data System (ADS)
Martinez, Rudy D.
A multiaxial fatigue model is proposed as it would apply to cylindrical geometry in the form of industrial-sized pressure vessels. The main focus of the multiaxial fatigue model is on using energy methods with the loading states confined to fluctuating tractions under proportional loading. The proposed fatigue model is an effort to support and enhance existing fatigue-life prediction methods for pressure vessel design, beyond the ASME Boiler and Pressure Vessel Code, ASME Section VIII Divisions 2 and 3, which is currently used in industrial engineering practice for pressure vessel design. Both uniaxial and biaxial low-alloy pearlitic-ferritic steel cylindrical cyclic test data are utilized to substantiate the proposed fatigue model. Approximate material hardening and softening aspects from applied load cycling states and the Bauschinger effect are accounted for by adjusting strain-control generated hysteresis loops and the cyclic stress-strain curve. The proposed fatigue energy model and the current ASME fatigue model are then compared with regard to the accuracy of predicting fatigue life cycle consistencies.
NEXT Performance Curve Analysis and Validation
NASA Technical Reports Server (NTRS)
Saripalli, Pratik; Cardiff, Eric; Englander, Jacob
2016-01-01
Performance curves of the NEXT thruster are highly important in determining the thruster's ability to meet mission-specific goals. New performance curves are proposed and examined here. The Evolutionary Mission Trajectory Generator (EMTG) is used to verify variations in mission solutions based on both the available thruster curves and the new curves generated. Furthermore, variations in beginning-of-life (BOL) and end-of-life (EOL) curves are also examined. Mission design results shown here validate the use of EMTG and the new performance curves.
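Thruster performance curves of this kind are typically supplied to trajectory tools as polynomial fits of thrust (and mass flow) against input power. The sketch below illustrates the mechanics with approximate, hypothetical data points, not the actual NEXT curves examined in this work.

```python
# Represent a thrust-vs-power performance curve as a polynomial fit.
# The data points below are hypothetical placeholders.
import numpy as np

P = np.array([0.6, 1.4, 2.3, 3.5, 4.9, 6.9])     # input power [kW]
T = np.array([25., 60., 96., 140., 192., 236.])  # thrust [mN]

coeffs = np.polyfit(P, T, deg=3)                 # cubic thrust-power curve
print(f"T(5 kW) ~= {np.polyval(coeffs, 5.0):.1f} mN")
```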
Design geometry and design/off-design performance computer codes for compressors and turbines
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1995-01-01
This report summarizes some NASA Lewis (i.e., government owned) computer codes capable of being used for airbreathing propulsion system studies to determine the design geometry and to predict the design/off-design performance of compressors and turbines. These are not CFD codes; velocity-diagram energy and continuity computations are performed fore and aft of the blade rows using meanline, spanline, or streamline analyses. Losses are provided by empirical methods. Both axial-flow and radial-flow configurations are included.
Working research codes into fluid dynamics education: a science gateway approach
NASA Astrophysics Data System (ADS)
Mason, Lachlan; Hetherington, James; O'Reilly, Martin; Yong, May; Jersakova, Radka; Grieve, Stuart; Perez-Suarez, David; Klapaukh, Roman; Craster, Richard V.; Matar, Omar K.
2017-11-01
Research codes are effective for illustrating complex concepts in educational fluid dynamics courses: compared to textbook examples, an interactive three-dimensional visualisation can bring a problem to life! Various barriers, however, prevent the adoption of research codes in teaching: codes are typically created for highly-specific `once-off' calculations and, as such, have no user interface and a steep learning curve. Moreover, a code may require access to high-performance computing resources that are not readily available in the classroom. This project allows academics to rapidly work research codes into their teaching via a minimalist `science gateway' framework. The gateway is a simple, yet flexible, web interface allowing students to construct and run simulations, as well as view and share their output. Behind the scenes, the common operations of job configuration, submission, monitoring and post-processing are customisable at the level of shell scripting. In this talk, we demonstrate the creation of an example teaching gateway connected to the Code BLUE fluid dynamics software. Student simulations can be run via a third-party cloud computing provider or a local high-performance cluster. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1995-01-01
In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, for from 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10(exp -6) for throughputs from 1/15 up to 4 bits/s/Hz.
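The effective-free-distance criterion can be made concrete with a small search. The sketch below drives a 4-state recursive systematic convolutional (RSC) encoder with octal generators (7, 5), a common textbook example rather than one of the constituent codes designed in this article, with all weight-2 inputs and reports the minimum output weight.

```python
# Minimum output weight of a rate-1/2 RSC encoder over weight-2 inputs;
# an illustration of the effective-free-distance criterion, not the
# authors' design tool.
def rsc_codeword_weight(bits):
    """Systematic-plus-parity output weight of the (7,5) RSC encoder."""
    s1 = s2 = 0
    weight = 0
    for u in bits:
        a = u ^ s1 ^ s2        # feedback sum: g1 = 1 + D + D^2 (octal 7)
        p = a ^ s2             # parity tap:   g2 = 1 + D^2     (octal 5)
        weight += u + p        # systematic bit + parity bit
        s2, s1 = s1, a
    return weight

N = 64                         # block length for the search
d_min_w2 = min(
    rsc_codeword_weight([1 if i in (0, t) else 0 for i in range(N)])
    for t in range(1, N)
)
print("min output weight for weight-2 inputs:", d_min_w2)   # 6 for this encoder
# For a full turbo code, the systematic weight is combined with the parity
# weights of each constituent encoder (the second seeing interleaved input).
```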
Rocketdyne/Westinghouse nuclear thermal rocket engine modeling
NASA Technical Reports Server (NTRS)
Glass, James F.
1993-01-01
The topics are presented in viewgraph form and include the following: systems approach needed for nuclear thermal rocket (NTR) design optimization; generic NTR engine power balance codes; rocketdyne nuclear thermal system code; software capabilities; steady state model; NTR engine optimizer code-logic; reactor power calculation logic; sample multi-component configuration; NTR design code output; generic NTR code at Rocketdyne; Rocketdyne NTR model; and nuclear thermal rocket modeling directions.
Review Of Piping And Pressure Vessel Code Design Criteria. Technical Report 217.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
1969-04-18
This Technical Report summarizes a review of the design philosophies and criteria of the ASME Boiler and Pressure Vessel Code and the USASI Code for Pressure Piping. It traces the history of the Codes since their inception and critically reviews their present status. Recommendations are made concerning the applicability of the Codes to the special needs of LMFBR liquid sodium piping.
batman: BAsic Transit Model cAlculatioN in Python
NASA Astrophysics Data System (ADS)
Kreidberg, Laura
2015-11-01
I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
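A short usage example, following the package's documented quickstart (the parameter values here are arbitrary):

```python
# Compute a quadratic limb-darkened transit light curve with batman.
import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0                    # time of inferior conjunction
params.per = 1.0                   # orbital period [days]
params.rp = 0.1                    # planet radius [stellar radii]
params.a = 15.0                    # semi-major axis [stellar radii]
params.inc = 87.0                  # orbital inclination [deg]
params.ecc = 0.0                   # eccentricity
params.w = 90.0                    # longitude of periastron [deg]
params.limb_dark = "quadratic"     # limb darkening law
params.u = [0.1, 0.3]              # limb darkening coefficients

t = np.linspace(-0.05, 0.05, 100)  # 100 points in transit
model = batman.TransitModel(params, t)
flux = model.light_curve(params)   # relative flux at each time
```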
Computation of the phase response curve: a direct numerical approach.
Govaerts, W; Sautois, B
2006-04-01
Neurons are often modeled by dynamical systems--parameterized systems of differential equations. A typical behavioral pattern of neurons is periodic spiking; this corresponds to the presence of stable limit cycles in the dynamical systems model. The phase resetting and phase response curves (PRCs) describe the reaction of the spiking neuron to an input pulse at each point of the cycle. We develop a new method for computing these curves as a by-product of the solution of the boundary value problem for the stable limit cycle. The method is mathematically equivalent to the adjoint method, but our implementation is computationally much faster and more robust than any existing method. In fact, it can compute PRCs even where the limit cycle can hardly be found by time integration, for example, because it is close to another stable limit cycle. In addition, we obtain the discretized phase response curve in a form that is ideally suited for most applications. We present several examples and provide the implementation in a freely available Matlab code.
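For context, the classical direct approach that the boundary-value method improves upon can be sketched as follows; the Van der Pol oscillator stands in for a neuron model, and the pulse size and cycle counts are arbitrary choices.

```python
# Direct (perturbation) PRC estimate: kick a limit-cycle oscillator at a given
# phase and measure the asymptotic phase shift. A sketch, not the paper's method.
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, y):
    x, v = y
    return [v, (1.0 - x * x) * v - x]

def upcross(t, y):                      # event: upward crossing of x = 0
    return y[0]
upcross.direction = 1

# Settle onto the limit cycle, measure the period, take a phase-0 point.
warm = solve_ivp(vdp, (0, 100), [2.0, 0.0], events=upcross,
                 rtol=1e-10, atol=1e-12)
T = np.mean(np.diff(warm.t_events[0][-10:]))
y0 = warm.y_events[0][-1]

def prc(phi, eps=1e-3, n=5):
    """Phase advance per unit pulse, for a pulse in x applied at phase phi."""
    a = solve_ivp(vdp, (0, phi * T), y0, rtol=1e-10, atol=1e-12)
    kicked = a.y[:, -1] + np.array([eps, 0.0])
    b = solve_ivp(vdp, (phi * T, (n + 2) * T), kicked, events=upcross,
                  rtol=1e-10, atol=1e-12)
    t_n = b.t_events[0][n - 1]          # n-th spike after the kick
    return (n * T - t_n) / (T * eps)    # unperturbed n-th spike is at n*T

print([round(prc(p), 3) for p in (0.1, 0.3, 0.5, 0.7, 0.9)])
```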
NASA Astrophysics Data System (ADS)
Oshagh, M.; Boisse, I.; Boué, G.; Montalto, M.; Santos, N. C.; Bonfils, X.; Haghighipour, N.
2013-01-01
We present an improved version of SOAP named "SOAP-T", which can generate the radial velocity variations and light curves for systems consisting of a rotating spotted star with a transiting planet. This tool can be used to study the anomalies inside transit light curves and the Rossiter-McLaughlin effect, to better constrain the orbital configuration and properties of planetary systems and the active zones of their host stars. Tests of the code are presented to illustrate its performance and to validate its capability when compared with analytical models and real data. Finally, we apply SOAP-T to the active star, HAT-P-11, observed by the NASA Kepler space telescope and use this system to discuss the capability of this tool in analyzing light curves for the cases where the transiting planet overlaps with the star's spots. The tool's public interface is available at http://www.astro.up.pt/resources/soap-t/
NASA Astrophysics Data System (ADS)
Roshanian, Jafar; Jodei, Jahangir; Mirshams, Mehran; Ebrahimi, Reza; Mirzaee, Masood
A new automated multi-level-of-fidelity Multi-Disciplinary Design Optimization (MDO) methodology has been developed at the MDO Laboratory of K.N. Toosi University of Technology. This paper explains a new design approach through the formulation of the developed disciplinary modules. A conceptual design for a small, solid-propellant launch vehicle was considered with a two-level-of-fidelity (LoF) structure. Low- and medium-LoF disciplinary codes were developed and linked. Appropriate design and analysis codes were defined according to their effect on the conceptual design process. Simultaneous optimization of the launch vehicle was performed at the discipline level and the system level. Propulsion, aerodynamics, structure and trajectory disciplinary codes were used. To reach the minimum launch weight, the low-LoF code first searches the whole design space to achieve the mission requirements. Then the medium-LoF code receives the output of the low-LoF code and gives a value near the optimum launch weight with more detail and higher fidelity.
NASA Technical Reports Server (NTRS)
Schmidt, James F.
1995-01-01
An off-design axial-flow compressor code is presented and is available from COSMIC for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed and the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used for calculating this flow field outside the blade rows. This code allows for bleed flows, and the first five stators can be reset for each rotational speed, capabilities which are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations for flow loss and deviation are used to model real flow effects, and the off-design code will compute through small reverse-flow regions. The input to this off-design code is fully described, and a user's example case for a two-stage fan is included with complete input and output data sets. Also, a comparison of the off-design code predictions with experimental data is included, which generally shows good agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, M.E.
1997-12-05
This V and V Report includes analysis of two revisions of the DMS [data management system] System Requirements Specification (SRS) and the Preliminary System Design Document (PSDD); the source code for the DMS Communication Module (DMSCOM) messages; the source code for selected DMS screens; and the code for the BWAS Simulator. BDM Federal analysts used a series of matrices to: compare the requirements in the System Requirements Specification (SRS) to the specifications found in the System Design Document (SDD), to ensure the design supports the business functions; compare the discrete parts of the SDD with each other, to ensure that the design is consistent and cohesive; compare the source code of the DMS Communication Module with the specifications, to ensure that the resultant messages will support the design; compare the source code of selected screens to the specifications, to ensure that the resultant system screens will support the design; and compare the source code of the BWAS simulator with the requirements for interfacing with DMS messages and data transfers relating to BWAS operations.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-30
...] FDA's Public Database of Products With Orphan-Drug Designation: Replacing Non-Informative Code Names... replaced non-informative code names with descriptive identifiers on its public database of products that... on our public database with non-informative code names. After careful consideration of this matter...
Automated Concurrent Blackboard System Generation in C++
NASA Technical Reports Server (NTRS)
Kaplan, J. A.; McManus, J. W.; Bynum, W. L.
1999-01-01
In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX (trademark) workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
Design applications for supercomputers
NASA Technical Reports Server (NTRS)
Studerus, C. J.
1987-01-01
The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed, and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes solving these complex flows more practical, permits the introduction of such codes into the design system at an earlier stage. Results are presented from several codes that either have already been introduced into the design process or are rapidly becoming so. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.
Standardized Radiation Shield Design Methods: 2005 HZETRN
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.
2006-01-01
Research conducted by the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented to capture a well-defined algorithm for engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code to be available for these early design processes. In this paper, we review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.
Interactive-graphic flowpath plotting for turbine engines
NASA Technical Reports Server (NTRS)
Corban, R. R.
1981-01-01
An engine cycle program capable of simulating the design and off-design performance of arbitrary turbine engines is described, together with a computer code which, when used in conjunction with the cycle code, can predict the weight of the engines. A graphics subroutine was added to the code to help the engineer visualize the designed engine more clearly by producing an overall view of the engine on a graphics device using IBM-370 graphics subroutines. In addition, with the engine drawn on a graphics screen, the program allows the user to interactively change the code inputs so that the engine can be redrawn and reweighed. These improvements allow better use of the code in conjunction with the engine program.
Gpufit: An open-source toolkit for GPU-accelerated curve fitting.
Przybylski, Adrian; Thiel, Björn; Keller-Findeisen, Jan; Stock, Bernd; Bates, Mark
2017-11-16
We present a general purpose, open-source software library for estimation of non-linear parameters by the Levenberg-Marquardt algorithm. The software, Gpufit, runs on a Graphics Processing Unit (GPU) and executes computations in parallel, resulting in a significant gain in performance. We measured a speed increase of up to 42 times when comparing Gpufit with an identical CPU-based algorithm, with no loss of precision or accuracy. Gpufit is designed such that it is easily incorporated into existing applications or adapted for new ones. Multiple software interfaces, including C, Python, and Matlab, ensure that Gpufit is accessible from most programming environments. The full source code is published as an open source software repository, making its function transparent to the user and facilitating future improvements and extensions. As a demonstration, we used Gpufit to accelerate an existing scientific image analysis package, yielding significantly improved processing times for super-resolution fluorescence microscopy datasets.
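Gpufit's interfaces are not reproduced here, but the underlying computation is ordinary Levenberg-Marquardt least squares; a CPU analogue with SciPy (which uses LM for unconstrained problems) looks like this. The Gaussian model and noise level are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# 1-D Gaussian peak, a typical model in localization microscopy.
def gauss(x, amp, x0, sigma, offset):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 200)
y = gauss(x, 2.0, 0.3, 0.8, 0.1) + rng.normal(0, 0.05, x.size)

# Levenberg-Marquardt fit; Gpufit parallelizes many such fits on the GPU.
popt, pcov = curve_fit(gauss, x, y, p0=[1.0, 0.0, 1.0, 0.0])
print(popt)
```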
Finite element thermal analysis of multispectral coatings for the ABL
NASA Astrophysics Data System (ADS)
Shah, Rashmi S.; Bettis, Jerry R.; Stewart, Alan F.; Bonsall, Lynn; Copland, James; Hughes, William; Echeverry, Juan C.
1999-04-01
The thermal response of a coated optical surface is an important consideration in the design of any high-average-power system. Finite element temperature distributions were calculated for both coating witness samples and calorimetry wafers and were compared to actual measured data under tightly controlled conditions. Coatings for ABL were deposited on various substrates including fused silica, ULE, Zerodur, and silicon. The witness samples were irradiated at high power levels at 1.315 micrometers to evaluate laser damage thresholds and study absorption levels. Excellent agreement was obtained between temperature predictions and measured thermal response curves. When measured absorption values were not available, the code was used to predict coating absorption based on the measured temperature rise on the back surface. Using the finite element model, the damaging temperature rise can be predicted for a coating with known absorption based on run time, flux, and substrate material.
Beyond a Dichotomic Approach, The Case of Colour Phenomena
NASA Astrophysics Data System (ADS)
Viennot, L.; de Hosson, C.
2012-06-01
This research documents the aims and the impact of a teaching experiment concerning colour phenomena. This teaching experiment is designed in order to make students consider not only the spectral composition of light but also its intensity, and to consider the absorption of light by a pigment as relative, instead of as total or zero. Eight teaching interviews conducted with third-year university students were recorded, transcribed and coded. Their analysis suggests that two 'anchoring cognitive reactions' were likely to facilitate students' learning in a forthcoming sequence on this theme. It also makes it possible to evaluate the importance of a strong obstacle, that is the interpretation of absorption/transmission curves, or the multiplicative aspect of subtractive synthesis. Finally, the students' comments about their feelings at the end of the interview introduce a brief discussion about the benefits and/or frustration in terms of intellectual satisfaction.
Fatigue crack growth in SA508-CL2 steel in a high temperature, high purity water environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, T.L.; Heald, J.D.; Kiss, E.
1974-10-01
Fatigue crack growth tests were conducted with 1 in. plate specimens of SA508-CL2 steel in room temperature air, 550°F air, and in a 550°F, high purity, water environment. Zero-tension load controlled tests were run at cyclic frequencies as low as 0.037 CPM. Results show that growth rates in the simulated Boiling Water Reactor (BWR) water environment are faster than growth rates observed in 550°F air, and these rates are faster than the room temperature rate. In the BWR water environment, lowering the cyclic frequency from 0.37 to 0.037 CPM caused only a slight increase in the fatigue crack growth rate. All growth rates measured in these tests were below the upper bound design curve presented in Section XI of the ASME Code. (auth)
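For readers wanting to connect measured growth rates to life predictions, a minimal Paris-law sketch is shown below; the coefficients and the edge-crack geometry factor are placeholders, not the ASME Section XI curve or SA508 data.

```python
import numpy as np
from scipy.integrate import quad

C, m = 3.0e-12, 3.0                  # da/dN in m/cycle, deltaK in MPa*sqrt(m)

def delta_K(a, d_sigma=200.0):
    # edge-crack stress intensity range (assumed geometry factor 1.12)
    return 1.12 * d_sigma * np.sqrt(np.pi * a)

# cycles to grow a crack from 2 mm to 10 mm: N = integral of da / (C*deltaK^m)
N, _ = quad(lambda a: 1.0 / (C * delta_K(a) ** m), 2e-3, 10e-3)
print(f"{N:.3g} cycles")
```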
Material Model Evaluation of a Composite Honeycomb Energy Absorber
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.
2012-01-01
A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear, explicit transient dynamic code LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA included: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.
Synthetic aperture radar range - Azimuth ambiguity design and constraints
NASA Technical Reports Server (NTRS)
Mehlis, J. G.
1980-01-01
Problems concerning the design of a system for mapping a planetary surface with a synthetic aperture radar (SAR) are considered. Given an ambiguity level, resolution, and swath width, the problems are related to the determination of optimum antenna apertures and the most suitable pulse repetition frequency (PRF). From the set of normalized azimuth ambiguity ratio curves, the designer can arrive at the azimuth antenna length, and from the sets of normalized range ambiguity ratio curves, at the range aperture length or pulse repetition frequency. A procedure based on this design method is shown in an example. The normalized curves provide results for a SAR using a uniformly or cosine weighted rectangular antenna aperture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arai, Kenji; Ebata, Shigeo
1997-07-01
This paper summarizes the current and anticipated use of thermal-hydraulic and neutronic codes for BWR transient and accident analyses in Japan. The codes may be categorized into licensing codes and best estimate codes for BWR transient and accident analyses. Most of the licensing codes were originally developed by General Electric. Some codes have been updated based on the technical knowledge obtained from thermal hydraulic studies in Japan and according to BWR design changes. The best estimate codes have been used to support the licensing calculations and to obtain a phenomenological understanding of the thermal hydraulic phenomena during a BWR transient or accident. The best estimate codes can also be applied to design studies for a next generation BWR to which the current licensing models may not be directly applied. In order to rationalize the margin included in the current BWR design and develop a next generation reactor with an appropriate design margin, it will be necessary to improve the accuracy of the thermal-hydraulic and neutronic models. In addition, regarding the current best estimate codes, improvements in the user interface and the numerics will be needed.
The use of Tcl and Tk to improve design and code reutilization
NASA Technical Reports Server (NTRS)
Rodriguez, Lisbet; Reinholtz, Kirk
1995-01-01
Tcl and Tk facilitate design and code reuse in the ZIPSIM series of high-performance, high-fidelity spacecraft simulators. Tcl and Tk provide a framework for the construction of the Graphical User Interfaces for the simulators. The interfaces are architected such that a large proportion of the design and code is used for several applications, which has reduced design time and life-cycle costs.
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-07-01
In this work the performance of two neutron spectrum unfolding codes, based on iterative procedures and on artificial neural networks respectively, is evaluated. The first code, based on traditional iterative procedures and called Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meter responses. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), is designed using neural network technology. The artificial intelligence approach of the neural network does not solve mathematical equations: using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input only the rate counts measured with a Bonner sphere system. The NSDUAZ and NSDann codes are similar in that they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities, and generate a full report in HTML format. The codes differ in that NSDUAZ was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meter responses using fluence-to-dose conversion coefficients. NSDann uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in the neural network approach it is possible to reduce the rate counts used to unfold the neutron spectrum. To evaluate these codes, a computer tool called the Neutron Spectrometry and Dosimetry computer tool was designed, and the results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
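The iterative branch of such unfolding can be illustrated with a generic multiplicative update (MLEM-style; the actual SPUNIT algorithm differs in detail, and the response matrix and counts below are synthetic).

```python
import numpy as np

rng = np.random.default_rng(1)
n_spheres, n_bins = 7, 60
R = rng.uniform(0.01, 1.0, (n_spheres, n_bins))   # response matrix (made up)
true = np.exp(-0.5 * ((np.arange(n_bins) - 25) / 6.0) ** 2)
c = R @ true                                       # simulated rate counts

phi = np.ones(n_bins)                              # flat initial guess spectrum
for _ in range(500):
    # scale each energy bin by the measured/predicted count ratio,
    # weighted through the response matrix
    phi *= (R.T @ (c / (R @ phi))) / R.sum(axis=0)
```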
Principles for designing proteins with cavities formed by curved β sheets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcos, Enrique; Basanta, Benjamin; Chidyausiku, Tamuka M.
Active sites and ligand-binding cavities in native proteins are often formed by curved β sheets, and the ability to control β-sheet curvature would allow design of binding proteins with cavities customized to specific ligands. Toward this end, we investigated the mechanisms controlling β-sheet curvature by studying the geometry of β sheets in naturally occurring protein structures and folding simulations. The principles emerging from this analysis were used to design, de novo, a series of proteins with curved β sheets topped with α helices. Nuclear magnetic resonance and crystal structures of the designs closely match the computational models, showing that β-sheet curvature can be controlled with atomic-level accuracy. Our approach enables the design of proteins with cavities and provides a route to custom design ligand-binding and catalytic sites.
NASA Astrophysics Data System (ADS)
Martí-Vidal, I.; Marcaide, J. M.; Alberdi, A.; Guirado, J. C.; Pérez-Torres, M. A.; Ros, E.
2011-02-01
We report on a simultaneous modelling of the expansion and radio light curves of the supernova SN 1993J. We developed a simulation code capable of generating synthetic expansion and radio light curves of supernovae by taking into consideration the evolution of the expanding shock, magnetic fields, and relativistic electrons, as well as the finite sensitivity of the interferometric arrays used in the observations. Our software successfully fits all the available radio data of SN 1993J with a standard emission model for supernovae, which is extended with some physical considerations, such as an evolution in the opacity of the ejecta material, a radial decline in the magnetic fields within the radiating region, and a changing radial density profile for the circumstellar medium starting from day 3100 after the explosion.
NASA Technical Reports Server (NTRS)
Stallcop, James R.; Partridge, Harry; Levin, Eugene
1991-01-01
N2(+) and O2(+) potential energy curves have been constructed by combining measured data with the results of electronic structure calculations. These potential curves have been employed to determine accurate charge exchange cross sections, transport cross sections, and collision integrals for ground state N(+)-N and O(+)-O interactions. The cross sections have been calculated from a semiclassical approximation to the scattering using a computer code that fits a spline curve through the discrete potential data and incorporates the proper long-range behavior of the interaction forces. The collision integrals are tabulated for a broad range of temperatures (250-100,000 K) and are intended to reduce the uncertainty in the values of the transport properties of nonequilibrium air, particularly at high temperatures.
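The spline-plus-long-range construction can be sketched as follows; the sample points and the C6 coefficient are invented, and a production code would also match value and slope at the switch radius.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical potential data: separation (Angstrom) vs energy (eV).
r = np.array([1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0])
V = np.array([4.0, 1.2, -0.8, -1.0, -0.6, -0.3, -0.1])
spline = CubicSpline(r, V)

def potential(x, C6=5.0):
    # spline inside the data range; assumed -C6/r^6 dispersion tail beyond it
    return float(spline(x)) if x <= r[-1] else -C6 / x**6
```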
Computational methods for yeast prion curing curves.
Ridout, Martin S
2008-10-01
If the chemical guanidine hydrochloride is added to a dividing culture of yeast cells in which some of the protein Sup35p is in its prion form, the proportion of cells that carry replicating units of the prion, termed propagons, decreases gradually over time. Stochastic models to describe this process of 'curing' have been developed in earlier work. The present paper investigates the use of numerical methods of Laplace transform inversion to calculate curing curves and contrasts this with an alternative, more direct, approach that involves numerical integration. Transform inversion is found to provide a much more efficient computational approach that allows different models to be investigated with minimal programming effort. The method is used to investigate the robustness of the curing curve to changes in the assumed distribution of cell generation times. Matlab code is available for carrying out the calculations.
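As an illustration of numerical transform inversion of the kind the paper uses, here is the Gaver-Stehfest algorithm checked against a known transform pair; the paper's own choice of inversion method may differ.

```python
import numpy as np
from math import factorial

def stehfest(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s);
    N must be even."""
    ln2 = np.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        Vk = sum(
            j ** (N // 2) * factorial(2 * j)
            / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
               * factorial(k - j) * factorial(2 * j - k))
            for j in range((k + 1) // 2, min(k, N // 2) + 1)
        )
        total += (-1) ** (k + N // 2) * Vk * F(k * ln2 / t)
    return ln2 / t * total

# sanity check: F(s) = 1/(s+1) inverts to f(t) = exp(-t)
print(stehfest(lambda s: 1.0 / (s + 1.0), 1.0), np.exp(-1.0))
```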
An Analytical Approach to Obtaining JWL Parameters from Cylinder Tests
NASA Astrophysics Data System (ADS)
Sutton, Ben; Ferguson, James
2015-06-01
An analytical method for determining parameters for the JWL equation of state (EoS) from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated parameters and pressure-volume (p-V) curves agree with those produced by hydro-code modelling. The calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-V curve. The calculated and model energies are 8.64 GPa and 8.76 GPa respectively.
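The final step mentioned above, integrating under the p-V curve, is easy to reproduce for a JWL isentrope. The parameters below are generic illustrative values, not the EDC37 fit; only the CJ relative volume of 0.729 is taken from the abstract.

```python
import numpy as np
from scipy.integrate import quad

# JWL principal isentrope: p(V) = A exp(-R1 V) + B exp(-R2 V) + C V^-(1+w),
# with V the relative volume. Parameter values are placeholders.
A, B, C = 609.0, 13.0, 1.0          # GPa
R1, R2, w = 4.5, 1.4, 0.32

def p_jwl(V):
    return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C * V ** -(1 + w)

# energy per unit initial volume released from V_CJ out to 7 relative
# volumes (GPa integrated over relative volume = GJ/m^3)
E, _ = quad(p_jwl, 0.729, 7.0)
print(f"E = {E:.2f} GJ/m^3")
```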
13 CFR 134.304 - Commencement of appeals from size determinations and NAICS code designations.
Code of Federal Regulations, 2010 CFR
2010-01-01
13 CFR 134.304 (2010), Business Credit and Assistance, Small Business: Commencement of appeals from size determinations and NAICS code designations.
Photometric Mapping of Two Kepler Eclipsing Binaries: KIC11560447 and KIC8868650
NASA Astrophysics Data System (ADS)
Senavci, Hakan Volkan; Özavci, I.; Isik, E.; Hussain, G. A. J.; O'Neal, D. O.; Yilmaz, M.; Selam, S. O.
2018-04-01
We present the surface maps of two eclipsing binary systems, KIC11560447 and KIC8868650, using Kepler light curves covering approximately 4 years. We use the code DoTS, which is based on the maximum entropy method, to reconstruct the surface maps. We also perform numerical tests of DoTS to check the ability of the code to track the phase migration of spot clusters. The resulting latitudinally averaged maps of KIC11560447 show spots drifting towards increasing orbital longitudes, while spots on KIC8868650 drift overall towards decreasing latitudes.
EVEREST: Pixel Level Decorrelation of K2 Light Curves
NASA Astrophysics Data System (ADS)
Luger, Rodrigo; Agol, Eric; Kruse, Ethan; Barnes, Rory; Becker, Andrew; Foreman-Mackey, Daniel; Deming, Drake
2016-10-01
We present EPIC Variability Extraction and Removal for Exoplanet Science Targets (EVEREST), an open-source pipeline for removing instrumental noise from K2 light curves. EVEREST employs a variant of pixel level decorrelation to remove systematics introduced by the spacecraft's pointing error and a Gaussian process to capture astrophysical variability. We apply EVEREST to all K2 targets in campaigns 0-7, yielding light curves with precision comparable to that of the original Kepler mission for stars brighter than Kp ≈ 13, and within a factor of two of the Kepler precision for fainter targets. We perform cross-validation and transit injection and recovery tests to validate the pipeline, and compare our light curves to the other de-trended light curves available for download at the MAST High Level Science Products archive. We find that EVEREST achieves the highest average precision of any of these pipelines for unsaturated K2 stars. The improved precision of these light curves will aid in exoplanet detection and characterization, investigations of stellar variability, asteroseismology, and other photometric studies. The EVEREST pipeline can also easily be applied to future surveys, such as the TESS mission, to correct for instrumental systematics and enable the detection of low signal-to-noise transiting exoplanets. The EVEREST light curves and the source code used to generate them are freely available online.
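The heart of first-order PLD is a linear regression of the aperture flux on the fractional pixel fluxes; the toy below uses synthetic data, and EVEREST itself adds higher PLD orders, a Gaussian process, and cross-validation on top of this idea.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cad, n_pix = 1000, 9
drift = 1 + 0.01 * np.sin(np.linspace(0, 20, n_cad))  # pointing systematic
sens = rng.uniform(0.5, 1.5, n_pix)                   # pixel sensitivities
pix = drift[:, None] * sens + rng.normal(0, 0.002, (n_cad, n_pix))
flux = pix.sum(axis=1)

frac = pix / flux[:, None]          # fractional pixel fluxes (rows sum to 1)
coef, *_ = np.linalg.lstsq(frac, flux, rcond=None)
detrended = flux - frac @ coef + flux.mean()
```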
Benchmarking atomic physics models for magnetically confined fusion plasma physics experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M.J.; Finkenthal, M.; Soukhanovskii, V.
In present magnetically confined fusion devices, high and intermediate Z impurities are either puffed into the plasma for divertor radiative cooling experiments or are sputtered from the high Z plasma facing armor. The beneficial cooling of the edge as well as the detrimental radiative losses from the core of these impurities can be properly understood only if the atomic physics used in the modeling of the cooling curves is very accurate. To this end, a comprehensive experimental and theoretical analysis of some relevant impurities is undertaken. Gases (Ne, Ar, Kr, and Xe) are puffed and nongases are introduced through laser ablation into the FTU tokamak plasma. The charge state distributions and total density of these impurities are determined from spatial scans of several photometrically calibrated vacuum ultraviolet and x-ray spectrographs (3-1600 Å), the multiple ionization state transport code (MIST), and a collisional radiative model. The radiative power losses are measured with bolometry, and the emissivity profiles are measured by a visible bremsstrahlung array. The ionization balance, excitation physics, and radiative cooling curves are computed with the Hebrew University Lawrence Livermore atomic code (HULLAC) and are benchmarked by these experiments. (Supported by U.S. DOE Grant No. DE-FG02-86ER53214 at JHU and Contract No. W-7405-ENG-48 at LLNL.) © 1999 American Institute of Physics.
On a framework for generating PoD curves assisted by numerical simulations
NASA Astrophysics Data System (ADS)
Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar
2015-03-01
The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
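One standard route to a PoD curve from signal-response data, simplified to its core, is the "a-hat versus a" regression; the data, scatter, and decision threshold below are invented, and a real study adds censoring and confidence bounds.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = np.linspace(0.5, 5.0, 60)                            # crack depth, mm
ahat = 2.0 * a ** 0.8 * rng.lognormal(0.0, 0.25, a.size) # response, a.u.

# linear fit in log-log space with normally distributed residuals
fit = stats.linregress(np.log(a), np.log(ahat))
resid = np.log(ahat) - (fit.intercept + fit.slope * np.log(a))
sigma = resid.std(ddof=2)

log_thresh = np.log(1.5)                                 # noise threshold
def pod(depth_mm):
    mu = fit.intercept + fit.slope * np.log(depth_mm)
    return stats.norm.sf(log_thresh, mu, sigma)          # P(signal > threshold)

print(pod(1.0), pod(3.0))
```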
National Combustion Code Parallel Performance Enhancements
NASA Technical Reports Server (NTRS)
Quealy, Angela; Benyo, Theresa (Technical Monitor)
2002-01-01
The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. The unstructured grid, reacting flow code uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC code to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This report describes recent parallel processing modifications to NCC that have improved the parallel scalability of the code, enabling a two hour turnaround for a 1.3 million element fully reacting combustion simulation on an SGI Origin 2000.
[Application of melting curve to analyze genotype of Duffy blood group antigen Fy-a/b].
Chen, Xue; Zhou, Chang-Hua; Hong, Ying; Gong, Tian-Xiang
2012-12-01
This study aimed to establish real-time multiplex PCR with melting curve analysis for Duffy blood group Fy-a/b genotyping. According to the sequences of the mRNA coding for β-actin and Fy-a/b, primers for β-actin and Fy-a/b were synthesized, and the real-time multiplex PCR with melting curve analysis for Fy-a/b genotyping was established. The Fy-a/b genotypes of 198 blood donors from the Chengdu area of China were investigated by melting curve analysis and PCR-SSP. The Fy-a/b genotypes obtained by melting curve analysis were consistent with those from PCR-SSP. Of the 198 donors, 178 were Fy(a)(+) (89.9%), 19 were Fy(a)(+)Fy(b)(+) (9.6%), and 1 was Fy(b)(+) (0.5%). The gene frequency of Fy(a) was 0.947, while that of Fy(b) was 0.053. It is concluded that a genotyping method for the Duffy blood group based on melting curve analysis has been established, which can be used as a high-throughput screening tool for Duffy blood group genotyping; the Fy(a) genotype is the major Duffy blood group among donors in the Chengdu area of China.
A new 3D maser code applied to flaring events
NASA Astrophysics Data System (ADS)
Gray, M. D.; Mason, L.; Etoka, S.
2018-06-01
We set out the theory and discretization scheme for a new finite-element computer code, written specifically for the simulation of maser sources. The code was used to compute fractional inversions at each node of a 3D domain for a range of optical thicknesses. Saturation behaviour of the nodes with regard to location and optical depth was broadly as expected. We have demonstrated via formal solutions of the radiative transfer equation that the apparent size of the model maser cloud decreases as expected with optical depth as viewed by a distant observer. Simulations of rotation of the cloud allowed the construction of light curves for a number of observable quantities. Rotation of the model cloud may be a reasonable model for quasi-periodic variability, but cannot explain periodic flaring.
NASA Technical Reports Server (NTRS)
Rodal, J. J. A.; French, S. E.; Witmer, E. A.; Stagliano, T. R.
1979-01-01
The CIVM-JET 4C computer program for the 'finite strain' analysis of 2-D transient structural responses of complete or partial rings and beams subjected to fragment impact is stored on tape as a series of individual files. The subroutines found in each of these files are described in detail. All references to the CIVM-JET 4C program are made assuming that the user has a copy of NASA CR-134907 (ASRL TR 154-9), which serves as a user's guide to (1) the CIVM-JET 4B computer code and (2) the CIVM-JET 4C computer code 'with the use of the modified input instructions' attached hereto.
Design of ACM system based on non-greedy punctured LDPC codes
NASA Astrophysics Data System (ADS)
Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng
2017-08-01
In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes were constructed by a non-greedy puncturing method which shows good performance in the high code rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is reduced. Simulations show that increasingly significant coding gains can be obtained by the proposed ACM system at higher throughput.
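Stripped of the coding itself, the adaptive part of such a scheme is a table lookup from channel quality to the highest sustainable punctured rate; the SNR thresholds below are placeholders, not measured values for the paper's codes.

```python
RATE_TABLE = [      # (code rate, required SNR in dB) - illustrative only
    (5 / 6, 6.5),
    (4 / 5, 5.5),
    (3 / 4, 4.5),
    (2 / 3, 3.5),
]

def select_rate(snr_db: float):
    """Return the highest code rate whose SNR threshold is met."""
    for rate, threshold in RATE_TABLE:
        if snr_db >= threshold:
            return rate
    return None     # channel too poor: request retransmission instead
```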
Development and presentation of a roadway and roadside design course : final report, December 2009.
DOT National Transportation Integrated Search
2009-12-01
The overall goal of this course is to provide training in the elements of geometric highway design. Specific course objectives are: to review the geometry of horizontal and vertical alignment, including simple circular curves, compound curve...
Guidelines for design and safe handling of curved I-shaped steel girders.
DOT National Transportation Integrated Search
2010-02-01
The purpose of this set of guidelines is to summarize recommendations from work completed as part of the Texas Department of Transportation (TxDOT) Research Project 0-5574, entitled "Curved Plate Girder Design for Safe and Economic Construction." ...
Investigation of Liner Characteristics in the NASA Langley Curved Duct Test Rig
NASA Technical Reports Server (NTRS)
Gerhold, Carl H.; Brown, Martha C.; Watson, Willie R.; Jones, Michael G.
2007-01-01
The Curved Duct Test Rig (CDTR), which is designed to investigate propagation of sound in a duct with flow, has been developed at NASA Langley Research Center. The duct incorporates an adaptive control system to generate a tone in the duct at a specific frequency with a target Sound Pressure Level and a target mode shape. The size of the duct, the ability to isolate higher order modes, and the ability to modify the duct configuration make this rig unique among experimental duct acoustics facilities. An experiment is described in which the facility performance is evaluated by measuring the sound attenuation of a sample duct liner. The liner sample comprises one wall of the liner test section. Tones from 500 to 2400 Hz, with modes of order 0 to 5 parallel to the liner surface and of order 0 to 2 normal to the liner surface, can be generated incident on the liner test section. Tests are performed in which sound is generated without axial flow in the duct and with flow at a Mach number of 0.275. The attenuation of the liner is determined by comparing the sound power in a hard wall section downstream of the liner test section to the sound power in a hard wall section upstream of the liner test section. These experimentally determined attenuations are compared to numerically determined attenuations calculated by means of a finite element analysis code. The code incorporates liner impedance values educed from measured data from the NASA Langley Grazing Incidence Tube, a test rig that is used for investigating liner performance with flow and with the (0,0) mode incident at grazing. The analytical and experimental results compare favorably, indicating the validity of the finite element method and demonstrating that finite element prediction tools can be used together with experiment to characterize liner attenuation.
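The attenuation figure itself comes from a simple power ratio between the hard-wall sections; the sound power values below are placeholders.

```python
import numpy as np

W_upstream, W_downstream = 2.4e-3, 3.1e-4   # sound power, watts (assumed)
attenuation_dB = 10.0 * np.log10(W_upstream / W_downstream)
print(f"liner attenuation: {attenuation_dB:.1f} dB")
```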
Optimizing a liquid propellant rocket engine with an automated combustor design code (AUTOCOM)
NASA Technical Reports Server (NTRS)
Hague, D. S.; Reichel, R. H.; Jones, R. T.; Glatt, C. R.
1972-01-01
A procedure for automatically designing a liquid propellant rocket engine combustion chamber in an optimal fashion is outlined. The procedure is contained in a digital computer code, AUTOCOM. The code is applied to an existing engine, and design modifications are generated which provide a substantial potential payload improvement over the existing design. Computer time requirements for this payload improvement were small, approximately four minutes in the CDC 6600 computer.
Unitals and ovals of symmetric block designs in LDPC and space-time coding
NASA Astrophysics Data System (ADS)
Andriamanalimanana, Bruno R.
2004-08-01
An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.
NASA Technical Reports Server (NTRS)
Rhode, M. N.; Engelund, Walter C.; Mendenhall, Michael R.
1995-01-01
Experimental longitudinal and lateral-directional aerodynamic characteristics were obtained for the Pegasus and Pegasus XL configurations over a Mach number range from 1.6 to 6 and angles of attack from -4 to +24 degrees. Angle of sideslip was varied from -6 to +6 degrees, and control surfaces were deflected to obtain elevon, aileron, and rudder effectiveness. Experimental data for the Pegasus configuration are compared with engineering code predictions performed by Nielsen Engineering & Research, Inc. (NEAR) in the aerodynamic design of the Pegasus vehicle, and with results from the Aerodynamic Preliminary Analysis System (APAS) code. Comparisons of experimental results are also made with longitudinal flight data from Flight #2 of the Pegasus vehicle. Results show that the longitudinal aerodynamic characteristics of the Pegasus and Pegasus XL configurations are similar, having the same lift-curve slope and drag levels across the Mach number range. Both configurations are longitudinally stable, with stability decreasing towards neutral levels as Mach number increases. Directional stability is negative at moderate to high angles of attack due to separated flow over the vertical tail. Dihedral effect is positive for both configurations, but is reduced 30-50 percent for the Pegasus XL configuration because of the horizontal tail anhedral. Predicted longitudinal characteristics and both longitudinal and lateral-directional control effectiveness are generally in good agreement with experiment. Due to the complex leeside flowfield, lateral-directional characteristics are not as well predicted by the engineering codes. Experiment and flight data are in good agreement across the Mach number range.
Through the Past Decade: How Advanced Energy Design Guides have influenced the Design Industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Bing; Athalye, Rahul A.
Advanced Energy Design Guides (AEDGs) were originally intended to provide a simple approach for building professionals seeking energy efficient building designs better than ASHRAE Standard 90.1. Since the first book was released in 2004, the AEDG series has provided inspiration for the design industry and was seen by designers as a starting point for buildings intended to go beyond minimum codes and standards. In addition, the U.S. Department of Energy's successful Commercial Building Partnerships (CBP) program leveraged many of the recommendations from the AEDGs to achieve 50% energy savings over ASHRAE Standard 90.1-2004 for prototypical designs of large commercial entities in the retail, banking, and lodging sectors. Low-energy technologies and strategies developed during the CBP process have been applied by commercial partners throughout their national portfolios of buildings. Later, the AEDGs served as the perfect platform for both Standard 90.1 and ASHRAE's high performance buildings standard, Standard 189.1. What was high performance a few years ago, however, has become minimum code today. Indeed, most of the prescriptive envelope component requirements in ASHRAE Standard 90.1-2013 are values recommended in the 50% AEDGs several years ago. Similarly, AEDG strategies and recommendations have penetrated the lighting and HVAC sections of both Standard 189.1 and Standard 90.1. Finally, as we look to the future of codes and standards, the AEDGs are serving as a blueprint for how minimum code requirements could be expressed. By customizing codes to specific building types, design strategies tailored to individual buildings could be prescribed as minimum code, just as in the AEDGs. This paper describes the impact that AEDGs have had over the last decade on the design industry and how they continue to influence the future of codes and standards. From design professionals to code officials, everyone in the building industry has been affected by the AEDGs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Josse, Florent; Lefebvre, Yannick; Todeschini, Patrick
2006-07-01
Assessing the structural integrity of a nuclear Reactor Pressure Vessel (RPV) subjected to pressurized-thermal-shock (PTS) transients is extremely important to safety. In addition to conventional deterministic calculations to confirm RPV integrity, Electricite de France (EDF) carries out probabilistic analyses. Probabilistic analyses are interesting because some key variables, albeit conventionally taken at conservative values, can be modeled more accurately through statistical variability. One variable which significantly affects RPV structural integrity assessment is cleavage fracture initiation toughness. The reference fracture toughness method currently in use at EDF is the RCC-M and ASME Code lower-bound K_IC based on the indexing parameter RT_NDT. However, in order to quantify the toughness scatter for probabilistic analyses, the master curve method is being analyzed at present. Furthermore, the master curve method is a direct means of evaluating fracture toughness based on K_JC data. In the framework of the master curve investigation undertaken by EDF, this article deals with the following two statistical items: building a master curve from an extract of a fracture toughness dataset (from the European project 'Unified Reference Fracture Toughness Design curves for RPV Steels') and controlling statistical uncertainty for both mono-temperature and multi-temperature tests. Concerning the first point, master curve temperature dependence is empirical in nature. To determine the 'original' master curve, Wallin postulated that a unified description of fracture toughness temperature dependence for ferritic steels is possible, and used a large number of data corresponding to nuclear-grade pressure vessel steels and welds. Our working hypothesis is that some ferritic steels may behave in slightly different ways. Therefore we focused exclusively on the basic French reactor vessel metals of types A508 Class 3 and A533 Grade B Class 1, taking the sampling level and direction into account as well as the test specimen type. As for the second point, the emphasis is placed on the uncertainties in applying the master curve approach. For a toughness dataset based on different specimens of a single product, application of the master curve methodology requires the statistical estimation of one parameter: the reference temperature T_0. Because of the limited number of specimens, estimation of this temperature is uncertain. The ASTM standard provides a rough evaluation of this statistical uncertainty through an approximate confidence interval. In this paper, a thorough study is carried out to build more meaningful confidence intervals (for both mono-temperature and multi-temperature tests). These results ensure better control over uncertainty and allow rigorous analysis of the impact of its influencing factors: the number of specimens and the temperatures at which they have been tested. (authors)
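For orientation, the median master curve has the fixed ASTM E1921 form K_med(T) = 30 + 70 exp[0.019 (T - T_0)] (in MPa·sqrt(m) and °C), leaving only T_0 to estimate. The toy fit below uses invented data and simple least squares; the standard's own estimator is a censored maximum-likelihood procedure.

```python
import numpy as np
from scipy.optimize import brentq

T = np.array([-110.0, -90.0, -70.0, -50.0])   # test temperatures, C (made up)
Kjc = np.array([45.0, 60.0, 95.0, 150.0])     # toughness, MPa*sqrt(m) (made up)

def dSSE_dT0(T0):
    # stationarity condition of the least-squares fit of the median curve
    Kmed = 30.0 + 70.0 * np.exp(0.019 * (T - T0))
    return np.sum((Kjc - Kmed) * np.exp(0.019 * (T - T0)))

T0 = brentq(dSSE_dT0, -200.0, 50.0)           # reference temperature estimate
print(f"T0 = {T0:.1f} C")
```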
Wheelset curving guidance using H∞ control
NASA Astrophysics Data System (ADS)
Qazizadeh, Alireza; Stichel, Sebastian; Feyzmahdavian, Hamid Reza
2018-03-01
This study shows how to design an active suspension system for guidance of a rail vehicle wheelset in curves. The main focus of the study is on designing the controller and then studying its effect on wheel wear behaviour. The controller is designed based on the closed-loop transfer function shaping method and the H∞ control strategy. The study discusses the design of the controller for both nominal and uncertain plants and considers both stability and performance. The controllers designed in Simulink are then applied to the vehicle model in Simpack to study the wheel wear behaviour in curves. The vehicle type selected for this study is a two-axle rail vehicle, because this type of vehicle is known to have very poor curving performance and high wheel wear; on the other hand, its relatively simple structure compared to bogie vehicles makes it a more economical choice. Hence, equipping this type of vehicle with active wheelset steering is believed to show a high enough benefit-to-cost ratio to remain attractive to rail vehicle manufacturers and operators.
Intercept Centering and Time Coding in Latent Difference Score Models
ERIC Educational Resources Information Center
Grimm, Kevin J.
2012-01-01
Latent difference score (LDS) models combine benefits derived from autoregressive and latent growth curve models allowing for time-dependent influences and systematic change. The specification and descriptions of LDS models include an initial level of ability or trait plus an accumulation of changes. A limitation of this specification is that the…
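The accumulation referred to above is easy to see in simulation: in a univariate dual-change LDS model, each wave adds a latent difference score made of a constant and a proportional component. Parameter values below are arbitrary illustrations.

```python
import numpy as np

alpha, beta = 1.0, -0.2        # constant change plus proportional feedback
n_people, n_waves = 4, 5
rng = np.random.default_rng(4)

y = np.empty((n_people, n_waves))
y[:, 0] = rng.normal(10.0, 2.0, n_people)   # initial level (intercept)
for t in range(n_waves - 1):
    dy = alpha + beta * y[:, t]             # latent difference score
    y[:, t + 1] = y[:, t] + dy              # accumulate changes
```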
1994-12-01
[Report documentation page and table-of-contents residue; recoverable details: Army Research Laboratory, Aberdeen Proving Ground, MD; a section on distance vs. time calculation; and Figure 9, "Comparison of calculated thrust curves".]
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2008-01-01
An approach for assessing the delamination propagation simulation capabilities in commercial finite element codes is presented and demonstrated. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. The load-displacement relationship and the total strain energy obtained from the propagation analysis results and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front, as was expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging, but further assessment on a structural level is required.
Fatigue Life Analysis of Tapered Hybrid Composite Flexbeams
NASA Technical Reports Server (NTRS)
Murri, Gretchen B.; Schaff, Jeffery R.; Dobyns, Alan L.
2002-01-01
Nonlinear-tapered flexbeam laminates from a full-size composite helicopter rotor hub flexbeam were tested under combined constant axial tension and cyclic bending loads. The two different graphite/glass hybrid configurations tested under cyclic loading failed by delamination in the tapered region. A 2-D finite element model was developed which closely approximated the flexbeam geometry, boundary conditions, and loading. The analysis results from two geometrically nonlinear finite element codes, ANSYS and ABAQUS, are presented and compared. Strain energy release rates (G) obtained from the above codes using the virtual crack closure technique (VCCT) at a resin crack location in the flexbeams are presented for both hybrid material types. These results compare well with each other and suggest that the initial delamination growth from the resin crack toward the thick region of the flexbeam is strongly mode II. The peak calculated G values were used with material characterization data to calculate fatigue life curves and compared with test data. A curve relating maximum surface strain to number of loading cycles at delamination onset compared reasonably well with the test results.
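The VCCT computation named above reduces, for one crack-tip node pair in 2-D, to forces times relative displacements over twice the released area; all numbers below are placeholders.

```python
da, width = 0.5, 25.0     # element length at the tip, specimen width (mm)
Fy, Fx = 120.0, 35.0      # nodal forces at the crack tip (N), assumed
dv, du = 0.004, 0.001     # relative opening/sliding behind the tip (mm)

GI = Fy * dv / (2.0 * da * width)    # mode I energy release rate (N/mm = kJ/m^2)
GII = Fx * du / (2.0 * da * width)   # mode II component
mode_mix = GII / (GI + GII)          # strongly mode II when close to 1
```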
National Combustion Code: Parallel Performance
NASA Technical Reports Server (NTRS)
Babrauckas, Theresa
2001-01-01
This report discusses the National Combustion Code (NCC). The NCC is an integrated system of codes for the design and analysis of combustion systems. The advanced features of the NCC meet designers' requirements for model accuracy and turn-around time. The fundamental features at the inception of the NCC were parallel processing and unstructured mesh. The design and performance of the NCC are discussed.
Planned Missing Designs to Optimize the Efficiency of Latent Growth Parameter Estimates
ERIC Educational Resources Information Center
Rhemtulla, Mijke; Jia, Fan; Wu, Wei; Little, Todd D.
2014-01-01
We examine the performance of planned missing (PM) designs for correlated latent growth curve models. Using simulated data from a model where latent growth curves are fitted to two constructs over five time points, we apply three kinds of planned missingness. The first is item-level planned missingness using a three-form design at each wave such…
Modifications to risk-targeted seismic design maps for subduction and near-fault hazards
Liel, Abbie B.; Luco, Nicolas; Raghunandan, Meera; Champion, C.; Haukaas, Terje
2015-01-01
ASCE 7-10 introduced new seismic design maps that define risk-targeted ground motions such that buildings designed according to these maps will have 1% chance of collapse in 50 years. These maps were developed by iterative risk calculation, wherein a generic building collapse fragility curve is convolved with the U.S. Geological Survey hazard curve until target risk criteria are met. Recent research shows that this current approach may be unconservative at locations where the tectonic environment is much different than that used to develop the generic fragility curve. This study illustrates how risk-targeted ground motions at selected sites would change if generic building fragility curve and hazard assessment were modified to account for seismic risk from subduction earthquakes and near-fault pulses. The paper also explores the difficulties in implementing these changes.
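The iterative risk calculation can be condensed to a few lines: convolve a lognormal fragility, anchored at the trial design value, with a hazard curve, and bisect on the design value until the 50-year collapse risk equals 1%. The hazard curve, dispersion, and 10% anchoring probability below are schematic assumptions.

```python
import numpy as np
from scipy import stats

im = np.logspace(-2, 1, 400)                  # intensity measure grid, g
haz_rate = 4e-4 * (0.5 / im) ** 2.5           # annual exceedance rate (toy)
beta, target = 0.6, 0.01                      # dispersion; 1% in 50 years

def risk_50yr(im_design):
    # fragility anchored so that P(collapse | im_design) = 10%
    median = im_design * np.exp(-beta * stats.norm.ppf(0.10))
    frag = stats.lognorm.cdf(im, beta, scale=median)
    lam = -np.trapz(frag, haz_rate)           # annual collapse rate
    return 1.0 - np.exp(-50.0 * lam)

lo, hi = 0.01, 5.0                            # bisection on the design value
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if risk_50yr(mid) > target else (lo, mid)
print(f"risk-targeted ground motion ~ {mid:.3f} g")
```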
The design of wavefront coded imaging system
NASA Astrophysics Data System (ADS)
Lan, Shun; Cen, Zhaofeng; Li, Xiaotong
2016-10-01
Wavefront coding is a new method to extend the depth of field, which combines optical design and signal processing. Using the optical design software ZEMAX, we designed a practical wavefront coded imaging system based on a conventional Cooke triplet. Unlike a conventional optical system, the wavefront of this new system is modulated by a specially designed phase mask, which makes the point spread function (PSF) of the optical system insensitive to defocus, so that a series of similarly blurred images is obtained at the image plane. In addition, the optical transfer function (OTF) of the wavefront coded imaging system is independent of focus: it is nearly constant with misfocus and has no regions of zeros. All object information can therefore be recovered through digital filtering at different defocus positions. The focus invariance of the MTF is selected as the merit function in this design, and the coefficients of the phase mask are set as optimization goals. Compared to a conventional optical system, the wavefront coded imaging system obtains better quality images at different object distances. Some deficiencies appear in the restored images due to the influence of the digital filtering algorithm, and these are also analyzed in this paper. The depth of field of the designed wavefront coded imaging system is about 28 times larger than that of the initial optical system, while keeping higher optical power and resolution at the image plane.
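A one-dimensional pupil calculation is enough to see the focus invariance exploited above: with a cubic phase term the MTF barely changes as defocus grows. The mask strength alpha and defocus values (in waves) are illustrative, not the designed system's parameters.

```python
import numpy as np

N = 2048
x = np.linspace(-2, 2, N)           # pupil-plane coordinate
inside = np.abs(x) <= 1.0           # unit aperture

def mtf(alpha, w20):
    phase = 2 * np.pi * (alpha * x**3 + w20 * x**2)   # cubic mask + defocus
    P = np.where(inside, np.exp(1j * phase), 0.0)
    psf = np.abs(np.fft.fft(P)) ** 2                  # incoherent PSF
    m = np.abs(np.fft.fft(psf))                       # |OTF| up to scaling
    return m / m[0]

for w20 in (0.0, 1.0, 2.0):         # increasing defocus, in waves
    print(w20, round(mtf(0.0, w20)[6], 3), round(mtf(20.0, w20)[6], 3))
```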
Computational approach to compact Riemann surfaces
NASA Astrophysics Data System (ADS)
Frauendiener, Jörg; Klein, Christian
2017-01-01
A purely numerical approach to compact Riemann surfaces starting from plane algebraic curves is presented. The critical points of the algebraic curve are computed via a two-dimensional Newton iteration. The starting values for this iteration are obtained from the resultants with respect to both coordinates of the algebraic curve and a suitable pairing of their zeros. A set of generators of the fundamental group for the complement of these critical points in the complex plane is constructed from circles around these points and connecting lines obtained from a minimal spanning tree. The monodromies are computed by solving the defining equation of the algebraic curve on collocation points along these contours and by analytically continuing the roots. The collocation points are chosen to correspond to Chebychev collocation points for an ensuing Clenshaw-Curtis integration of the holomorphic differentials which gives the periods of the Riemann surface with spectral accuracy. At the singularities of the algebraic curve, Puiseux expansions computed by contour integration on the circles around the singularities are used to identify the holomorphic differentials. The Abel map is also computed with the Clenshaw-Curtis algorithm and contour integrals. As an application of the code, solutions to the Kadomtsev-Petviashvili equation are computed on non-hyperelliptic Riemann surfaces.
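For readers unfamiliar with the quadrature step, a standard Clenshaw-Curtis implementation (following Trefethen's clencurt routine, not the authors' code) is sketched below; it returns Chebyshev nodes and weights that integrate smooth functions with spectral accuracy.

```python
import numpy as np

def clencurt(N):
    """Clenshaw-Curtis nodes x and weights w on [-1, 1] for N+1
    Chebyshev points, after Trefethen's clencurt."""
    theta = np.pi * np.arange(N + 1) / N
    x = np.cos(theta)
    w = np.zeros(N + 1)
    ii = np.arange(1, N)
    v = np.ones(N - 1)
    if N % 2 == 0:
        w[0] = w[N] = 1.0 / (N**2 - 1)
        for k in range(1, N // 2):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
        v -= np.cos(N * theta[ii]) / (N**2 - 1)
    else:
        w[0] = w[N] = 1.0 / N**2
        for k in range(1, (N - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[ii]) / (4 * k**2 - 1)
    w[ii] = 2.0 * v / N
    return x, w

x, w = clencurt(32)
print(np.dot(w, np.exp(x)))   # ~ e - 1/e, to near machine precision
```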
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukushima, Takuma; Fujita, Yutaka; To, Sho
We numerically simulate the gamma-ray burst (GRB) afterglow emission with a one-zone time-dependent code. The temporal evolutions of the decelerating shocked shell and the energy distributions of electrons and photons are consistently calculated. The photon spectrum and light curves for an observer are obtained taking into account the relativistic propagation of the shocked shell and the curvature of the emission surface. We find that the onset time of the afterglow is significantly earlier than the previous analytical estimate. The analytical formulae of the shock propagation and light curve for the radiative case are also different from our results. Our results show that even if the emission mechanism is switching from synchrotron to synchrotron self-Compton, the gamma-ray light curves can be a smooth power law, which agrees with the observed light curve and the late detection of a 32 GeV photon in GRB 130427A. The uncertainty of the model parameters obtained with the analytical formula is discussed, especially in connection with the closure relation between spectral index and decay index.
Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey.
Uzun, Vassilya; Bilgin, Sami
2016-01-01
For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospital grounds. These QR code bracelets link to the QR Code Identity website, where detailed information is stored; a smartphone or standalone QR code scanner can be used to scan the code. The design of this system allows authorized personnel (e.g., paramedics, firefighters, or police) to access more detailed patient information than the average smartphone user: emergency service professionals are authorized to access patient medical histories to improve the accuracy of medical treatment. In Istanbul, we tested the self-designed system with 174 participants. To analyze the QR Code Identity Tag system's usability, the participants completed the System Usability Scale questionnaire after using the system.
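As an illustration of the tag-generation step only: a QR Code Identity Tag of this kind can be produced with the open-source Python qrcode package. The URL below is hypothetical; the paper's actual website and identifier scheme are not reproduced here.

```python
import qrcode  # pip install qrcode[pil]

# Hypothetical tag: encode a URL that would resolve to a patient's
# record on the (assumed) QR Code Identity website.
tag_url = "https://example-qrid.example/patient/0001"
img = qrcode.make(tag_url)       # build the QR symbol
img.save("identity_tag.png")     # print on a bracelet, necklace, or ID card
```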
PHOTOMETRIC ANALYSIS OF HS Aqr, EG Cep, VW LMi, AND DU Boo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djurasevic, G.; Latkovic, O.; Bastuerk, Oe.
2013-03-15
We analyze new multicolor light curves for four close late-type binaries: HS Aqr, EG Cep, VW LMi, and DU Boo, in order to determine the orbital and physical parameters of the systems and estimate the distances. The analysis is done using the modeling code of G. Djurasevic and is based on up-to-date measurements of spectroscopic elements. All four systems have complex, asymmetric light curves that we model by including bright or dark spots on one or both components. Our findings indicate that HS Aqr and EG Cep are in semi-detached configurations, while VW LMi and DU Boo are in overcontact configurations.
Local-area simulations of rotating compressible convection and associated mean flows
NASA Technical Reports Server (NTRS)
Hurlburt, Neal E.; Brummell, N. H.; Toomre, Juri
1995-01-01
The dynamics of compressible convection within a curved local segment of a rotating spherical shell are considered in relation to the turbulent redistribution of angular momentum within the solar convection zone. Current supercomputers permit fully turbulent flows to be considered within the restricted geometry of local-area models. By considering motions in a curvilinear geometry in which the Coriolis parameter varies with latitude, Rossby waves that couple with the turbulent convection become possible. Simulations of rotating convection are presented in such a curved local segment of a spherical shell using a newly developed, sixth-order-accurate code based on compact finite differences.
Investigation of activation cross-sections of deuteron induced reactions on vanadium up to 40 MeV
NASA Astrophysics Data System (ADS)
Tárkányi, F.; Ditrói, F.; Takács, S.; Hermanne, A.; Baba, M.; Ignatyuk, A. V.
2011-08-01
Experimental excitation functions for deuteron induced reactions up to 40 MeV on natural vanadium were measured with the activation method using a stacked foil irradiation technique. From high resolution gamma spectrometry, cross-section data for the production of 51Cr, 48V, 48,47,46Sc and 47Ca were determined. Comparisons with earlier published data are presented, and predictions from different theoretical codes are included. Thick target yields were calculated from a fit to our experimental excitation curves and compared with earlier experimental data. Depth distribution curves used for thin layer activation (TLA) are also presented.
Flowers, Natalie L
2010-01-01
CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.
UNIPIC code for simulations of high power microwave devices
NASA Astrophysics Data System (ADS)
Wang, Jianguo; Zhang, Dianhui; Liu, Chunliang; Li, Yongdong; Wang, Yue; Wang, Hongguang; Qiao, Hailiang; Li, Xiaoze
2009-03-01
In this paper, UNIPIC code, a new member in the family of fully electromagnetic particle-in-cell (PIC) codes for simulations of high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time-step reduction in the conformal-path FDTD method, the CP weakly conditionally stable FDTD (CP-WCS FDTD) method, which combines the WCS-FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half dimensional, is written in the object-oriented C++ language, and can be run on a variety of platforms including WINDOWS, LINUX, and UNIX. Users can use the graphical user interface to create the geometric structures of the simulated HPM devices or import structures created previously. Numerical experiments on some typical HPM devices using the UNIPIC code are given. The results are compared to those obtained from some well-known PIC codes, and they agree well with each other.
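To make the field-update step concrete, here is a minimal one-dimensional Yee/FDTD leapfrog in normalized units. UNIPIC itself is 2.5-D with CPML boundaries and a relativistic particle push, none of which is reproduced in this sketch.

```python
import numpy as np

# 1-D Yee grid: Ez on integer nodes, Hy on half-steps between them.
nx, nt = 400, 1000
Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)
S = 0.5                                      # Courant number c*dt/dx

for n in range(nt):
    Hy += S * (Ez[1:] - Ez[:-1])             # update H at half time steps
    Ez[1:-1] += S * (Hy[1:] - Hy[:-1])       # update E at integer time steps
    Ez[50] += np.exp(-((n - 60) / 20.0)**2)  # soft Gaussian source
```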
NASA Astrophysics Data System (ADS)
Fisher, L. E.; Lynch, K. A.; Fernandes, P. A.; Bekkeng, T. A.; Moen, J.; Zettergren, M.; Miceli, R. J.; Powell, S.; Lessard, M. R.; Horak, P.
2016-04-01
The interpretation of planar retarding potential analyzers (RPA) during ionospheric sounding rocket missions requires modeling the thick 3D plasma sheath. This paper overviews the theory of RPAs with an emphasis placed on the impact of the sheath on current-voltage (I-V) curves. It then describes the Petite Ion Probe (PIP) which has been designed to function in this difficult regime. The data analysis procedure for this instrument is discussed in detail. Data analysis begins by modeling the sheath with the Spacecraft Plasma Interaction System (SPIS), a particle-in-cell code. Test particles are traced through the sheath and detector to determine the detector's response. A training set is constructed from these simulated curves for a support vector regression analysis which relates the properties of the I-V curve to the properties of the plasma. The first in situ use of the PIPs occurred during the MICA sounding rocket mission which launched from Poker Flat, Alaska in February of 2012. These data are presented as a case study, providing valuable cross-instrument comparisons. A heritage top-hat thermal ion electrostatic analyzer, called the HT, and a multi-needle Langmuir probe have been used to validate both the PIPs and the data analysis method. Compared to the HT, the PIP ion temperature measurements agree with a root-mean-square error of 0.023 eV. These two instruments agree on the parallel-to-B plasma flow velocity with a root-mean-square error of 130 m/s. The PIP with its field of view aligned perpendicular-to-B provided a density measurement with an 11% error compared to the multi-needle Langmuir Probe. Higher error in the other PIP's density measurement is likely due to simplifications in the SPIS model geometry.
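A toy version of the regression step (not the flight analysis pipeline): train a support vector regression on simulated I-V curves and invert a new curve for ion temperature. The stand-in exponential "detector response" replaces the SPIS-traced curves and is an assumption for illustration only.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
V = np.linspace(0, 5, 40)                      # retarding voltages

def iv_curve(T_eV):
    # stand-in for a sheath/detector-traced response: a simple
    # retarded Maxwellian current exp(-V/T) plus measurement noise
    return np.exp(-V / T_eV) + 0.01 * rng.normal(size=V.size)

T_train = rng.uniform(0.05, 0.5, 500)          # ion temperatures (eV)
X_train = np.array([iv_curve(T) for T in T_train])

model = SVR(kernel="rbf", C=10.0, epsilon=0.005)
model.fit(X_train, T_train)
print(model.predict(iv_curve(0.2)[None, :]))   # recovers ~0.2 eV
```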
NASA Technical Reports Server (NTRS)
Nishioka, Owen S.
1997-01-01
Defects that develop in welds during the fabrication process are frequently manifested as embedded flaws from lack of fusion or lack of penetration. Fracture analyses of welded structures must be able to assess the effect of such defects on the structural integrity of weldments; however, the transferability of R-curves measured in laboratory specimens to defective structural welds has not been fully examined. In the current study, the fracture behavior of an overmatched butt weld containing a simulated buried lack-of-penetration defect is studied. A specimen designed to simulate pressure vessel butt welds is considered, namely a center crack panel specimen, of 1.25 inch by 1.25 inch cross section, loaded in tension. The stress-relieved double-V weld has a yield strength 50% higher than that of the plate material and displays upper-shelf fracture behavior at room temperature. Specimens are precracked, loaded monotonically while load-CMOD measurements are made, then stopped and heat tinted to mark the extent of ductile crack growth. These measurements are compared to predictions made using finite element analysis of the specimens with the fracture mechanics code Warp3D, which models void growth using the Gurson-Tvergaard dilatant plasticity formulation within fixed-size computational cells ahead of the crack front. Calibrating data for the finite element analyses, namely cell size and initial material porosities, are obtained by matching computational predictions to experimental results from tests of welded compact tension specimens. The R-curves measured in compact tension specimens are compared to those obtained from multi-specimen weld tests, and conclusions as to the transferability of R-curves are discussed.
Signature-forecasting and early outbreak detection system
Naumova, Elena N.; MacNeill, Ian B.
2008-01-01
Daily disease monitoring via a public health surveillance system provides valuable information on population risks. Efficient statistical tools for early detection of rapid changes in disease incidence are a must for modern surveillance. The need for statistical tools for early detection of outbreaks that are not based on historical information is apparent. A system is discussed for monitoring cases of infections with a view to early detection of outbreaks and to forecasting the extent of detected outbreaks. We propose a set of adaptive algorithms for early outbreak detection that does not rely on extensive historical records. We also incorporate knowledge of infectious disease epidemiology into the forecasts. To demonstrate this system we use data from the largest water-borne outbreak of cryptosporidiosis, which occurred in Milwaukee in 1993. Historical data are smoothed using a loess-type smoother. Upon receipt of a new datum, the smoothing is updated and estimates are made of the first two derivatives of the smoothed curve, and these are used for near-term forecasting. Recent data and the near-term forecasts are used to compute a color-coded warning index, which quantifies the level of concern. The algorithms for computing the warning index have been designed to balance Type I errors (false prediction of an epidemic) and Type II errors (failure to correctly predict an epidemic). If the warning index signals a sufficiently high probability of an epidemic, then a forecast of the possible size of the outbreak is made. This longer-term forecast is made by fitting a 'signature' curve to the available data. The effectiveness of the forecast depends upon the extent to which the signature curve captures the shape of outbreaks of the infection under consideration. PMID:18716671
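A compact sketch of the smoothing-and-forecast step under stated assumptions: synthetic counts, and an off-the-shelf lowess in place of the authors' loess-type smoother.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# synthetic daily case counts with a late exponential outbreak ramp
t = np.arange(60, dtype=float)
counts = 5 + np.exp(0.1 * t) * (t > 40) \
       + np.random.default_rng(1).poisson(2, 60)

smooth = lowess(counts, t, frac=0.3, return_sorted=False)
d1 = np.gradient(smooth, t)                 # first derivative of the smooth
d2 = np.gradient(d1, t)                     # second derivative

h = 3.0                                     # near-term horizon (days)
forecast = smooth[-1] + d1[-1] * h + 0.5 * d2[-1] * h**2
print(forecast)                             # input to a warning-index rule
```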
Chen, Yang; Young, Paul M; Fletcher, David F; Chan, Hak Kim; Long, Edward; Lewis, David; Church, Tanya; Traini, Daniela
2015-04-01
To investigate the influence of different actuator nozzle designs on aerosol electrostatic charges and aerosol performance for pressurised metered dose inhalers (pMDIs), four actuator nozzle designs (flat, curved flat, cone and curved cone) were manufactured using insulating thermoplastics (PET and PTFE) and conducting metal (aluminium) materials. Aerosol electrostatic profiles of solution pMDI formulations containing propellant HFA 134a with different ethanol concentrations and/or the model drug beclomethasone dipropionate (BDP) were studied using a modified electrical low-pressure impactor (ELPI) for all actuator designs and materials. The mass of the deposited drug was analysed using high performance liquid chromatography (HPLC). Both curved nozzle designs for insulating PET and PTFE actuators significantly influenced aerosol electrostatics and aerosol performance compared with the conducting aluminium actuator; reversed charge polarity and higher throat deposition were observed with the pMDI formulation containing BDP. The results are likely due to changes in plume geometry caused by the curved-edge nozzle designs and the bipolar charging nature of insulating materials. This study demonstrated that actuator nozzle design can significantly influence the electrostatic charge profiles and aerosol drug deposition pattern of pMDI aerosols, especially when using insulating thermoplastic materials where bipolar charging is more dominant.
Haloes gone MAD: The Halo-Finder Comparison Project
NASA Astrophysics Data System (ADS)
Knebe, Alexander; Knollmann, Steffen R.; Muldrew, Stuart I.; Pearce, Frazer R.; Aragon-Calvo, Miguel Angel; Ascasibar, Yago; Behroozi, Peter S.; Ceverino, Daniel; Colombi, Stephane; Diemand, Juerg; Dolag, Klaus; Falck, Bridget L.; Fasel, Patricia; Gardner, Jeff; Gottlöber, Stefan; Hsu, Chung-Hsing; Iannuzzi, Francesca; Klypin, Anatoly; Lukić, Zarija; Maciejewski, Michal; McBride, Cameron; Neyrinck, Mark C.; Planelles, Susana; Potter, Doug; Quilis, Vicent; Rasera, Yann; Read, Justin I.; Ricker, Paul M.; Roy, Fabrice; Springel, Volker; Stadel, Joachim; Stinson, Greg; Sutter, P. M.; Turchaninov, Victor; Tweed, Dylan; Yepes, Gustavo; Zemp, Marcel
2011-08-01
We present a detailed comparison of fundamental dark matter halo properties retrieved by a substantial number of different halo finders. These codes span a wide range of techniques including friends-of-friends, spherical-overdensity and phase-space-based algorithms. We further introduce a robust (and publicly available) suite of test scenarios that allow halo finder developers to compare the performance of their codes against those presented here. This set includes mock haloes containing various levels and distributions of substructure at a range of resolutions as well as a cosmological simulation of the large-scale structure of the universe. All the halo-finding codes tested could successfully recover the spatial location of our mock haloes. They further returned lists of particles (potentially) belonging to the object that led to coinciding values for the maximum of the circular velocity profile and the radius where it is reached. All the finders based in configuration space struggled to recover substructure that was located close to the centre of the host halo, and the radial dependence of the mass recovered varies from finder to finder. Those finders based in phase space could resolve central substructure although they found difficulties in accurately recovering its properties. Through a resolution study we found that most of the finders could not reliably recover substructure containing fewer than 30-40 particles. Here too, however, the phase-space finders excelled by resolving substructure down to 10-20 particles. By comparing the halo finders using a high-resolution cosmological volume, we found that they agree remarkably well on fundamental properties of astrophysical significance (e.g. mass, position, velocity and peak of the rotation curve). We further suggest utilizing the peak of the rotation curve, vmax, as a proxy for mass, given the arbitrariness in defining a proper halo edge.
Güiza, Fabian; Depreitere, Bart; Piper, Ian; Citerio, Giuseppe; Chambers, Iain; Jones, Patricia A; Lo, Tsz-Yan Milly; Enblad, Per; Nillson, Pelle; Feyen, Bart; Jorens, Philippe; Maas, Andrew; Schuhmann, Martin U; Donald, Rob; Moss, Laura; Van den Berghe, Greet; Meyfroidt, Geert
2015-06-01
To assess the impact of the duration and intensity of episodes of increased intracranial pressure on 6-month neurological outcome in adult and paediatric traumatic brain injury. Analysis of prospectively collected minute-by-minute intracranial pressure and mean arterial blood pressure data of 261 adult and 99 paediatric traumatic brain injury patients from multiple European centres. The relationship of episodes of elevated intracranial pressure (defined as a pressure above a certain threshold during a certain time) with 6-month Glasgow Outcome Scale was visualized in a colour-coded plot. The colour-coded plot illustrates the intuitive concept that episodes of higher intracranial pressure can only be tolerated for shorter durations: the curve that delineates the duration and intensity of those intracranial pressure episodes associated with worse outcome is an approximately exponential decay curve. In children, the curve resembles that of adults, but the delineation between episodes associated with worse outcome occurs at lower intracranial pressure thresholds. Intracranial pressures above 20 mmHg lasting longer than 37 min in adults, and longer than 8 min in children, are associated with worse outcomes. In a multivariate model, together with known baseline risk factors for outcome in severe traumatic brain injury, the cumulative intracranial pressure-time burden is independently associated with mortality. When cerebrovascular autoregulation, assessed with the low-frequency autoregulation index, is impaired, the ability to tolerate elevated intracranial pressures is reduced. When the cerebral perfusion pressure is below 50 mmHg, all intracranial pressure insults, regardless of duration, are associated with worse outcome. The intracranial pressure-time burden associated with worse outcome is visualised in a colour-coded plot. In children, secondary injury occurs at lower intracranial pressure thresholds as compared to adults. Impaired cerebrovascular autoregulation reduces the ability to tolerate intracranial pressure insults. Thus, 50 mmHg might be the lower acceptable threshold for cerebral perfusion pressure.
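The episode bookkeeping behind such dose-duration analyses can be sketched as follows; the data are synthetic, and only the thresholds quoted in the abstract are taken from the study.

```python
import numpy as np

def pressure_episodes(icp, threshold=20.0):
    """Durations (minutes) of contiguous episodes in which minute-by-minute
    ICP exceeds a threshold; burden summaries follow directly."""
    above = np.asarray(icp) > threshold
    edges = np.flatnonzero(np.diff(np.r_[0, above.astype(int), 0]))
    starts, ends = edges[::2], edges[1::2]
    return ends - starts

icp = 15 + 10 * np.random.default_rng(2).random(600)   # fake 10-hour record
durations = pressure_episodes(icp)
# e.g. flag adult episodes >20 mmHg lasting longer than 37 minutes:
print((durations > 37).sum())
```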
A transonic-small-disturbance wing design methodology
NASA Technical Reports Server (NTRS)
Phillips, Pamela S.; Waggoner, Edgar G.; Campbell, Richard L.
1988-01-01
An automated transonic design code has been developed which modifies an initial airfoil or wing in order to generate a specified pressure distribution. The design method uses an iterative approach that alternates between a potential-flow analysis and a design algorithm that relates changes in surface pressure to changes in geometry. The analysis code solves an extended small-disturbance potential-flow equation and can model a fuselage, pylons, nacelles, and a winglet in addition to the wing. A two-dimensional option is available for airfoil analysis and design. Several two- and three-dimensional test cases illustrate the capabilities of the design code.
Grenier, Christophe; Anbergen, Hauke; Bense, Victor; ...
2018-02-26
In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. In this paper, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs, resulting from differences in the governing equations, discretization issues, or the freezing curve used by some codes.
Creative Tiling: A Story of 1000-and-1 Curves
ERIC Educational Resources Information Center
Al-Darwish, Nasir
2012-01-01
We describe a procedure that utilizes symmetric curves for building artistic tiles. One particular curve was found to mesh nicely with hundreds of other curves, resulting in eye-catching tiling designs. The results of our work serve as a good example of using ideas from 2-D graphics and algorithms in a practical web-based application.
Design of a Low Aspect Ratio Transonic Compressor Stage Using CFD Techniques
NASA Technical Reports Server (NTRS)
Sanger, Nelson L.
1994-01-01
A transonic compressor stage has been designed for the Naval Postgraduate School Turbopropulsion Laboratory. The design relied heavily on CFD techniques while minimizing conventional empirical design methods. The low aspect ratio (1.2) rotor has been designed for a specific head ratio of .25 and a tip relative inlet Mach number of 1.3. Overall stage pressure ratio is 1.56. The rotor was designed using an Euler code augmented by a distributed body force model to account for viscous effects. This provided a relatively quick-running design tool, and was used for both rotor and stator calculations. The initial stator sections were sized using a compressible, cascade panel code. In addition to being used as a case study for teaching purposes, the compressor stage will be used as a research stage. Detailed measurements, including non-intrusive LDV, will be compared with the design computations, and with the results of other CFD codes, as a means of assessing and improving the computational codes as design tools.
Three-dimensional Monte-Carlo simulation of gamma-ray scattering and production in the atmosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, D.J.
1989-05-15
Monte Carlo codes have been developed to simulate gamma-ray scattering and production in the atmosphere. The scattering code simulates interactions of low-energy gamma rays (20 to several hundred keV) from an astronomical point source in the atmosphere; a modified code also simulates scattering in a spacecraft. Four incident spectra, typical of gamma-ray bursts, solar flares, and the Crab pulsar, and 511 keV line radiation have been studied. These simulations are consistent with observations of solar flare radiation scattered from the atmosphere. The production code simulates the interactions of cosmic rays which produce high-energy (above 10 MeV) photons and electrons. It has been used to calculate gamma-ray and electron albedo intensities at Palestine, Texas and at the equator; the results agree with observations in most respects. With minor modifications this code can be used to calculate intensities of other high-energy particles. Both codes are fully three-dimensional, incorporating a curved atmosphere; the production code also incorporates the variation with both zenith and azimuth of the incident cosmic-ray intensity due to geomagnetic effects. These effects are clearly reflected in the calculated albedo by intensity contrasts between the horizon and nadir, and between the east and west horizons.
Development of probabilistic design method for annular fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozawa, Takayuki
2007-07-01
The increase of linear power and burn-up during reactor operation is considered as one measure to ensure the utility of fast reactors in the future; for this, the application of annular oxide fuels is under consideration. The annular fuel design code CEPTAR was developed in the Japan Atomic Energy Agency (JAEA) and verified using many irradiation experiences with oxide fuels. In addition, the probabilistic fuel design code BORNFREE was also developed to provide a safe and reasonable fuel design and to evaluate the design margins quantitatively. This study aimed at the development of a probabilistic design method for annular oxide fuels; this was implemented in the developed BORNFREE-CEPTAR code, and the code was used to make a probabilistic evaluation with regard to the permissible linear power. (author)
1991-04-01
Boiler and Pressure Vessel Code. Other design requirements are developed from standard safe... Boiler and Pressure Vessel Code. The following three conditions constitute the primary design parameters for pressure vessels: (a) Design Working... rules and practices of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Section VIII, Division 1 of the ASME
Reliability-based criteria for load and resistance factor design code for wood bridges
Chris Eamon; Andrzej S. Nowak; Michael A. Ritter; Joe Murphy
2000-01-01
Recently AASHTO adopted a load and resistance factor design code for highway bridges. The new code provides a rational basis for the design of steel and concrete structures. However, the calibration was not done for wood bridges. Therefore, there is a need to fill this gap. The development of statistical models for wood bridge structures is discussed. Recent test...
NASA Technical Reports Server (NTRS)
O'Keefe, Matthew (Editor); Kerr, Christopher L. (Editor)
1998-01-01
This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).
Users manual and modeling improvements for axial turbine design and performance computer code TD2-2
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1992-01-01
Computer code TD2 computes design point velocity diagrams and performance for multistage, multishaft, cooled or uncooled, axial flow turbines. This streamline analysis code was recently modified to upgrade modeling related to turbine cooling and to the internal loss correlation. These modifications are presented in this report along with descriptions of the code's expanded input and output. This report serves as the users manual for the upgraded code, which is named TD2-2.
Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems
2011-01-01
...reliability, e.g., Turbo Codes [2] and Low Density Parity Check (LDPC) codes [3]. The challenge in applying both MIMO and ECC to wireless systems is on... illustrates the performance of coded LR-aided detectors.
Description of Transport Codes for Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.
2011-01-01
This slide presentation describes transport codes and their use for studying and designing space radiation shielding. When combined with risk projection models, radiation transport codes serve as the main tool for studying radiation and designing shielding. There are three criteria for assessing the accuracy of transport codes: (1) ground-based studies with defined beams and material layouts, (2) inter-comparison of transport code results for matched boundary conditions, and (3) comparisons to flight measurements. By these three criteria, NASA's HZETRN/QMSFRG shows a very high degree of accuracy.
An Object-Oriented Approach to Writing Computational Electromagnetics Codes
NASA Technical Reports Server (NTRS)
Zimmerman, Martin; Mallasch, Paul G.
1996-01-01
Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease the design and development of highly complex codes. This paper examines the design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.
Evaluation of three coding schemes designed for improved data communication
NASA Technical Reports Server (NTRS)
Snelsire, R. W.
1974-01-01
Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which depends on both the amount of data rejected and the error rate. The Viterbi maximum-likelihood decoding algorithm is reviewed as a decoding procedure. The evaluation is obtained by simulating the system on a digital computer. Short-constraint-length, rate-1/2 quick-look codes are studied, and their performance is compared to general nonsystematic codes.
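For reference, here is a minimal hard-decision Viterbi decoder for an assumed rate-1/2, constraint-length-3 convolutional code; the report's specific codes are not reproduced.

```python
G = (0b111, 0b101)   # generator polynomials (assumed example code)

def branch(reg):
    # the two coded output bits for a 3-bit register content
    return [bin(reg & g).count("1") & 1 for g in G]

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state       # newest bit in the MSB
        out += branch(reg)
        state = reg >> 1
    return out

def viterbi(rx):
    INF = float("inf")
    metric = [0, INF, INF, INF]      # start in the all-zero state
    path = [[], [], [], []]
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]
        new_m, new_p = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                ns = reg >> 1        # next state
                m = metric[s] + sum(x != y for x, y in zip(r, branch(reg)))
                if m < new_m[ns]:    # keep the survivor path
                    new_m[ns], new_p[ns] = m, path[s] + [b]
        metric, path = new_m, new_p
    return path[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0, 1]
rx = encode(msg)
rx[3] ^= 1                           # inject one channel error
assert viterbi(rx) == msg            # single error is corrected
```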
1993-01-01
upon designation of DoD Activity Address Code (DoDAAC) or other code coordinated with the value-added network (VAN). Mandatory ISA06 106 Interchange... coordinated with the value-added network (VAN). Non-DoD activities use identification code qualified by ISA05 and coordinated with the VAN. Mandatory... designation of DoD Activity Address Code (DoDAAC) or other code coordinated with the value-added network (VAN). Mandatory ISA08 107 Interchange Receiver
The aerodynamics of curved jets and breakaway in Coanda flares, volumes 1 and 2
NASA Astrophysics Data System (ADS)
Senior, Peter
1991-02-01
An investigation was carried out into external-Coanda Effect flares designed by British Petroleum International plc. The phenomenon of interest was breakaway of an underexpanded axisymmetric curved wall jet from the guiding surface due to high blowing pressure. A survey of investigations of similar flows suggested very complex jet fluid dynamics. Strong cell structure including shock waves was present, giving bulk and discrete compression and bulk dilatation. More expansion was imposed by the radial velocity components. Wall curvature and a rear-facing step added further significant influences. The combination of these factors is known to produce highly non-linear turbulence, and this constitutes a major difficulty for the application of computational methods to the flare. In view of the amount of resources required to eliminate the problems of using a Navier-Stokes code, an economical approach was adopted, matching the Method of Characteristics to various simplified models and an integral boundary layer. In the experimental work, a planar model of the flare was constructed and studied using a wide range of methods in order to achieve accuracy and provide comparability with other work. An axisymmetric model was designed and investigated in a similar manner, so that the influence of this geometry could be clearly distinguished. A full-scale flare was subjected to a restricted range of tests to compare the laboratory results with the industrial application. The results from all the experiments demonstrated good correspondence. The main conclusion was that amalgamation of separation bubbles is crucial for breakaway. These are present long before breakaway, and are strongly reduced by decreasing the cell scale, adding a rear-facing step and axisymmetry, which leads to improved breakaway performance. Although the computational methods did not prove robust enough for all design purposes, they did permit significant insights into the mechanisms of breakaway.
Design and implementation of a low-cost multichannel seismic noise recorder for array measurements
NASA Astrophysics Data System (ADS)
Soler-Llorens, Juan Luis; Juan Giner-Caturla, Jose; Molina-Palacios, Sergio; Galiana-Merino, Juan Jose; Rosa-Herranz, Julio; Agea-Medina, Noelia
2017-04-01
Soil characterization is the starting point for seismic hazard studies. Methods based on ambient noise measurements are currently widely used because they are non-invasive and relatively easy to implement in urban areas. Among these methods, the analysis of array measurements provides the dispersion curve and subsequently the shear-wave velocity profile associated with the site under study. In this case, several sensors must record simultaneously, with one data-acquisition channel per sensor, which can make the complete equipment unaffordable for small research groups. In this work, we have designed and implemented a low-cost multichannel ambient noise recorder for array measurements. The complete system is based on Arduino, an open-source electronic development platform, and records 12 differential input channels simultaneously. It is complemented with a conditioning circuit that includes an anti-aliasing filter and a selectable gain between 0 and 40 dB. The data acquisition is set up through a user-friendly graphical user interface. Both the electronic scheme and the programming code are open hardware and software, respectively, so other researchers can adapt the system to their particular requirements. The developed equipment has been tested at several sites around the province of Alicante (southeast of Spain), where the soil characteristics are well known from previous studies. Array measurements were taken, and the recorded data were analysed using the frequency-wavenumber (f-k) and extended spatial autocorrelation (ESAC) methods. The agreement of the obtained dispersion curves with those from previous studies shows the suitability of the implemented low-cost system for array measurements.
NASA Technical Reports Server (NTRS)
Pinkel, Benjamin; Deutsch, George C; Morgan, William C
1955-01-01
Stresses on the root fastenings of turbine blades were appreciably reduced by redesign of the root. The redesign consisted of curving the root to conform approximately to the camber of the airfoil and eliminating the blade platform. Full-scale jet-engine tests at rated speed using cermet blades of this design confirmed the improvement.
Photometric Study of The Solar Type, Total Eclipsing Binary, TYC 2853-18-1
NASA Astrophysics Data System (ADS)
Samec, Ronald G.; Figg, E. R.; Faulkner, D.; Van Hamme, W.
2009-12-01
We present an analysis of the solar-type eclipsing binary TYC 2853-18-1 (Persei), based on observations taken at the National Undergraduate Research Observatory (NURO) and the Southeastern Association for Research in Astronomy (SARA) in Fall 2007 and Spring 2008. Light curves, a period study, and a synthetic light curve solution are presented for this variable, which was recently discovered by TYCHO as an eclipsing binary (2006, IBVS 5700). Our CCD observations of TYC 2853-18-1 [GSC 2853 0018, RA(2000) = 02h 47m 07.996s, DEC(2000) = +41° 22' 32.80"] were taken on 20 and 27 December 2007 at Lowell Observatory with the 0.81-m reflector on NURO time, and on 25 November and 3 December 2007 and 19 February 2008 via remote observing from Kitt Peak with SARA. NURO observations were taken with the thermoelectrically cooled (<-100 C) 2K x 2K CCD NASACAM. Standard BVRcIc Johnson-Cousins filters were used. Our light curve solution was calculated with the 2004 Wilson code. Mean times of eclipse include HJD Min I = 2454516.6131(±0.0005), 2454440.52974(±0.00008), 2454438.7605(±0.0001), 2454462.6464(±0.0003), and HJD Min II = 2454455.71985(±0.00060), 255462.7943(±0.0002). These, together with the epoch by ROTSE (2006, IBVS 5699) and the epoch calculated by the Wilson code, yielded the following ephemeris: HJD Hel Min I = 2451370.8753(±0.0010) d + 0.2949039(±0.0000001) E. Our unspotted Wilson code solution reveals TYC 2853-18-1 to be a W-type W UMa contact binary with unequal eclipse depths (amplitudes are 0.72 and 0.61 mag in V). It has shallow contact (8% fill-out) and a brief but total eclipse. Its curves dictate a mass ratio of 2.62±0.01, a component temperature difference of only 73±5 K, and an inclination of 82.0±0.2°. Spot activity is indicated by night-to-night variations. We wish to thank NURO and SARA for their allocation of observing time, as well as NASA and the AAS for their support in paying for travel and publication expenses.
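The published ephemeris can be applied directly; a small sketch computing a predicted minimum and an O-C residual from the numbers quoted above:

```python
# HJD Min I = 2451370.8753 + 0.2949039 E (values from the abstract)
T0, P = 2451370.8753, 0.2949039

def min_time(E):
    return T0 + P * E                      # predicted time of primary minimum

def cycle(hjd):
    return round((hjd - T0) / P)           # nearest integer epoch number

obs = 2454438.7605                         # one observed minimum from the abstract
E = cycle(obs)
print(E, obs - min_time(E))                # epoch and O - C residual (days)
```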
NASA Astrophysics Data System (ADS)
Hedlund, Anne; Sandquist, Eric L.; Arentoft, Torben; Brogaard, Karsten; Grundahl, Frank; Stello, Dennis; Bedin, Luigi R.; Libralato, Mattia; Malavolta, Luca; Nardiello, Domenico; Molenda-Zakowicz, Joanna; Vanderburg, Andrew
2018-06-01
V1178 Tau is a double-lined spectroscopic eclipsing binary in NGC1817, one of the more massive clusters observed in the K2 mission. We have determined the orbital period (P = 2.20 d) for the first time, and we model radial velocity measurements from the HARPS and ALFOSC spectrographs, light curves collected by Kepler, and ground based light curves using the Eclipsing Light Curve code (ELC, Orosz & Hauschildt 2000). We present masses and radii for the stars in the binary, allowing for a reddening-independent means of determining the cluster age. V1178 Tau is particularly useful for calculating the age of the cluster because the stars are close to the cluster turnoff, providing a more precise age determination. Furthermore, because one of the stars in the binary is a delta Scuti variable, the analysis provides improved insight into their pulsations.
VizieR Online Data Catalog: Photometric analysis of contact binaries (Lapasset+ 1996)
NASA Astrophysics Data System (ADS)
Lapasset, E.; Gomez, M.; Farinas, R.
1996-09-01
We present BV light-curve synthetic analyses of three short-period contact (W UMa) binaries: HY Pavonis (P=~0.35days), AW Virginis (P=~0.35days), and BP Velorum (P=~0.26days). Different possible configurations for a wide range of mass ratios were explored in each case using the Wilson-Devinney code. The photometric parameters of the systems were determined from the synthetic light-curve solutions that best fit the observations. AW Vir has two components of very similar temperatures, so the subtype (A or W) remains undetermined. HY Pav and BP Vel are best modeled by W-type configurations, and the asymmetries in the light curves are reproduced by introducing cool spots on the more massive secondary components. Although BP Vel lies in the region of the open cluster Cr 173, its distance modulus, in principle, rules it out as a cluster member. (6 data files).
Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Everhart, Joel L.
1987-01-01
This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine based on P. R. Bevington's description of the quadratic expansion of the chi-square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of chi-square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived and then solved by matrix algebra. A brief description of the code is presented, along with the limited number of changes required to customize the program for a particular task. To evaluate the performance of the method and the goodness of the nonlinear curve fits, two typical engineering problems are examined, and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
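A minimal sketch of the linearized chi-square minimization the report describes (numerical Jacobian, Marquardt-style damping; not the report's own code):

```python
import numpy as np

def fit(model, x, y, sigma, p0, n_iter=50, lam=1e-3):
    """Iteratively solve the linear system from the quadratic expansion
    of chi-square: (J^T J + lam I) dp = J^T r, with r the weighted residuals."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = (y - model(x, p)) / sigma
        J = np.empty((len(x), len(p)))
        for j in range(len(p)):            # central-difference Jacobian
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (model(x, p + dp) - model(x, p - dp)) / (2 * dp[j]) / sigma
        A = J.T @ J + lam * np.eye(len(p))  # damped curvature matrix
        p = p + np.linalg.solve(A, J.T @ r)
    return p

# usage: fit y = a*exp(-b*x) to noisy synthetic data
model = lambda x, p: p[0] * np.exp(-p[1] * x)
x = np.linspace(0, 4, 50)
y = model(x, (2.0, 1.3)) + 0.02 * np.random.default_rng(3).normal(size=50)
print(fit(model, x, y, 0.02, [1.0, 1.0]))   # recovers ~(2.0, 1.3)
```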
Evaluation of the Structural Performance of CTS Rapid Set Concrete Mix
2016-08-01
In June 2015, research was conducted at the U.S. Army Engineer Research and Development Center (ERDC) in Vicksburg, MS, to develop pavement design curves... using the Department of Defense's (DoD) rigid pavement design method. Results indicate that the DoD's rigid pavement design criteria are conservative... 1.2 Objective and scope: The objective of the research presented in this report was to develop pavement design curves relating CTS Rapid Set
High dynamic range coding imaging system
NASA Astrophysics Data System (ADS)
Wu, Renfan; Huang, Yifan; Hou, Guangqi
2014-10-01
We present a high dynamic range (HDR) imaging system design based on the coded aperture technique. This scheme yields HDR images with extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. We then use the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images is reconstructed. Existing algorithms are used to fuse those LDR images into an HDR image for display. We build an optical simulation model and generate simulation images to verify the novel system.
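The fusion step can be illustrated with a simple weighted-average radiance estimate under a linear-sensor assumption; the paper's actual reconstruction and fusion algorithms are not specified here.

```python
import numpy as np

def fuse_hdr(ldr_stack, exposures):
    """Merge LDR frames (linear sensor assumed) into a radiance map:
    divide each frame by its exposure time and average with a hat
    weight that discounts under- and over-exposed pixels."""
    acc = np.zeros_like(ldr_stack[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(ldr_stack, exposures):
        z = img.astype(float) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)     # peak weight at mid-range
        acc += w * z / t
        wsum += w
    return acc / np.maximum(wsum, 1e-6)

# usage (hypothetical frames and exposure times in seconds):
# radiance = fuse_hdr([im1, im2, im3], [1/125, 1/30, 1/8])
```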
NASA Astrophysics Data System (ADS)
Duc-Toan, Nguyen; Tien-Long, Banh; Young-Suk, Kim; Dong-Won, Jung
2011-08-01
In this study, a modified Johnson-Cook (J-C) model and a new method for determining the J-C material parameters are proposed to predict more accurately the stress-strain curves of tensile tests at elevated temperatures. A MATLAB tool is used to determine the material parameters by fitting a curve following Ludwick's hardening law at various elevated temperatures. Those hardening-law parameters are then used to determine the modified J-C model parameters. The modified J-C model gives better predictions than the conventional one. As a first verification, an FEM tensile test simulation based on the isotropic hardening model for boron sheet steel at elevated temperatures was carried out via a user-material subroutine in an explicit finite element code and compared with the measurements. The temperature decrease of all elements during air cooling was then calculated with the modified J-C model and coded in a VUMAT subroutine for tensile test simulation of the cooling process. The modified J-C model showed good agreement between the simulation results and the corresponding experiments. The second investigation applied the model to V-bending spring-back prediction of magnesium alloy sheets at elevated temperatures. Here, the proposed J-C model was combined with a modified hardening law accounting for the unusual plastic behaviour of magnesium alloy sheet, adopted in FEM simulations of V-bending spring-back prediction, and showed good agreement with the corresponding experiments.
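For orientation, the conventional Johnson-Cook flow stress has the closed form sketched below; the paper's modification concerns how the parameters are determined and is not reproduced. All parameter values in the example are assumed, not the paper's.

```python
import numpy as np

def johnson_cook(strain, strain_rate, T, A, B, n, C, m,
                 T_room=293.0, T_melt=1800.0, rate0=1.0):
    """Conventional Johnson-Cook flow stress:
    sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room) / (T_melt - T_room)."""
    T_star = (T - T_room) / (T_melt - T_room)
    return (A + B * strain**n) \
         * (1.0 + C * np.log(strain_rate / rate0)) \
         * (1.0 - T_star**m)

# assumed parameters, for illustration only
sigma = johnson_cook(np.linspace(1e-3, 0.2, 50), 1e-3, 700.0,
                     A=300.0, B=500.0, n=0.3, C=0.02, m=1.0)
```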
Analysis of Low-Temperature Utilization of Geothermal Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Brian
Full realization of the potential of what might be considered “low-grade” geothermal resources will require examining many more uses for the heat than traditional electricity generation. To demonstrate that geothermal energy truly has the potential to be a national energy source, we designed, assessed, and evaluated innovative uses for geothermal-produced water, such as hybrid biomass-geothermal cogeneration of electricity, district heating, and efficiency improvements to the use of cellulosic biomass, in addition to utilization of geothermal district heating for community redevelopment projects. The objectives of this project were: 1) to perform a techno-economic analysis of the integration and utilization potential of low-temperature geothermal sources. Innovative uses of low-enthalpy geothermal water were designed and examined for their ability to offset fossil fuels and decrease CO2 emissions. 2) To perform process optimizations and economic analyses of processes that can utilize low-temperature geothermal fluids. These processes included electricity generation using biomass and district heating systems. 3) To scale up and generalize the results of three case-study locations to develop a regionalized model of the utilization of low-temperature geothermal resources. A national-level, GIS-based, low-temperature geothermal resource supply model was developed and used to produce a series of national supply curves. We performed an in-depth analysis of the low-temperature geothermal resources that dominate the eastern half of the United States. The final products of this study include 17 publications, an updated version of the cost estimation software GEOPHIRES, and direct-use supply curves for low-temperature utilization of geothermal resources. The supply curves for direct-use geothermal include utilization from known hydrothermal, undiscovered hydrothermal, and near-hydrothermal EGS resources; these results were presented at the Stanford Geothermal Workshop. We also incorporated our wellbore model into TOUGH2-EGS and began coding TOUGH2-EGS with the wellbore model into GEOPHIRES as a reservoir thermal drawdown option. Additionally, case studies for the WVU and Cornell campuses were performed to assess the potential for district heating and cooling at these two eastern U.S. sites.
Semi-empirical master curve concept describing the rate capability of lithium insertion electrodes
NASA Astrophysics Data System (ADS)
Heubner, C.; Seeba, J.; Liebmann, T.; Nickol, A.; Börner, S.; Fritsch, M.; Nikolowski, K.; Wolter, M.; Schneider, M.; Michaelis, A.
2018-03-01
A simple semi-empirical master curve concept, describing the rate capability of porous insertion electrodes for lithium-ion batteries, is proposed. The model is based on evaluating the time constants of lithium diffusion in the liquid electrolyte and in the solid active material. This theoretical approach is verified by comprehensive experimental investigations of the rate capability of a large number of porous insertion electrodes with various active materials and design parameters. It turns out that the rate capability of all investigated electrodes follows a simple master curve governed by the time constant of the rate-limiting process. We demonstrate that the master curve concept can be used to determine optimum design criteria meeting specific requirements in terms of maximum gravimetric capacity for a desired rate capability. The model further reveals practical limits of electrode design, confirming the empirically well-known and inevitable trade-off between energy and power density.
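The two diffusion time constants the model compares can be estimated directly as L²/D; the numbers below are assumed, order-of-magnitude design values, not the paper's data.

```python
# assumed electrode design parameters (order-of-magnitude examples)
L = 70e-6        # electrolyte diffusion length (electrode thickness), m
D_liq = 3e-10    # effective liquid-phase diffusivity, m^2/s
R = 5e-6         # active-material particle radius, m
D_sol = 1e-14    # solid-state diffusivity, m^2/s

tau_liq = L**2 / D_liq
tau_sol = R**2 / D_sol
tau_lim = max(tau_liq, tau_sol)   # rate-limiting time constant

# rough C-rate at which capacity falls off: the (dis)charge time
# becomes comparable to the limiting time constant
print(tau_liq, tau_sol, 3600.0 / tau_lim)
```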
Development of an Object-Oriented Turbomachinery Analysis Code within the NPSS Framework
NASA Technical Reports Server (NTRS)
Jones, Scott M.
2014-01-01
During the preliminary or conceptual design phase of an aircraft engine, the turbomachinery designer has a need to estimate the effects of a large number of design parameters such as flow size, stage count, blade count, radial position, etc. on the weight and efficiency of a turbomachine. Computer codes are invariably used to perform this task however, such codes are often very old, written in outdated languages with arcane input files, and rarely adaptable to new architectures or unconventional layouts. Given the need to perform these kinds of preliminary design trades, a modern 2-D turbomachinery design and analysis code has been written using the Numerical Propulsion System Simulation (NPSS) framework. This paper discusses the development of the governing equations and the structure of the primary objects used in OTAC.
New features in the design code Tlie
NASA Astrophysics Data System (ADS)
van Zeijts, Johannes
1993-12-01
We present features recently installed in the arbitrary-order accelerator design code Tlie. The code uses the MAD input language, and implements programmable extensions modeled after the C language that make it a powerful tool in a wide range of applications: from basic beamline design to high precision-high order design and even control room applications. The basic quantities important in accelerator design are easily accessible from inside the control language. Entities like parameters in elements (strength, current), transfer maps (either in Taylor series or in Lie algebraic form), lines, and beams (either as sets of particles or as distributions) are among the type of variables available. These variables can be set, used as arguments in subroutines, or just typed out. The code is easily extensible with new datatypes.
7 CFR 1724.50 - Compliance with National Electrical Safety Code (NESC).
Code of Federal Regulations, 2013 CFR
2013-01-01
Title 7 (Agriculture), Volume 11, 2013-01-01: Compliance with National Electrical Safety Code (NESC)... RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE; ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES; Electric System Design; § 1724.50 Compliance with National Electrical Safety Code...
7 CFR 1724.50 - Compliance with National Electrical Safety Code (NESC).
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 7 (Agriculture), Volume 11, 2010-01-01: Compliance with National Electrical Safety Code (NESC)... RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE; ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES; Electric System Design; § 1724.50 Compliance with National Electrical Safety Code...
7 CFR 1724.50 - Compliance with National Electrical Safety Code (NESC).
Code of Federal Regulations, 2011 CFR
2011-01-01
Title 7 (Agriculture), Volume 11, 2011-01-01: Compliance with National Electrical Safety Code (NESC)... RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE; ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES; Electric System Design; § 1724.50 Compliance with National Electrical Safety Code...
7 CFR 1724.50 - Compliance with National Electrical Safety Code (NESC).
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 11 2012-01-01 2012-01-01 false Compliance with National Electrical Safety Code (NESC... UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES Electric System Design § 1724.50 Compliance with National Electrical Safety Code...
7 CFR 1724.50 - Compliance with National Electrical Safety Code (NESC).
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 11 2014-01-01 2014-01-01 false Compliance with National Electrical Safety Code (NESC... UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES Electric System Design § 1724.50 Compliance with National Electrical Safety Code...
Liang, Lanju; Wei, Minggui; Yan, Xin; Wei, Dequan; Liang, Dachuan; Han, Jiaguang; Ding, Xin; Zhang, GaoYa; Yao, Jianquan
2016-01-01
A novel broadband and wide-angle 2-bit coding metasurface for radar cross section (RCS) reduction is proposed and characterized at terahertz (THz) frequencies. The ultrathin metasurface is composed of four digital elements based on a metallic double cross line structure. The reflection phase difference of neighboring elements is approximately 90° over a broad THz frequency band. RCS reduction is achieved by optimizing the coding element sequences, which redirects the electromagnetic energy in all directions over a broad frequency range. An RCS reduction of more than 10 dB is achieved over the band from 0.7 THz to 1.3 THz in both experiments and numerical simulations. The simulation results also show that broadband RCS reduction can be achieved at incident angles up to 60° for both TE and TM polarizations, for flat as well as curved coding metasurfaces. These results open a new approach to flexibly controlling THz waves and may offer widespread applications for novel THz devices. PMID:27982089
Super-linear Precision in Simple Neural Population Codes
NASA Astrophysics Data System (ADS)
Schwab, David; Fiete, Ila
2015-03-01
A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arise because it gives, through the Cramér-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly in terms of the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
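For readers unfamiliar with the FI of a population code, the following minimal sketch computes it for independent Poisson neurons with Gaussian tuning curves and reports the Cramér-Rao lower bound on estimator error; all parameter values are illustrative assumptions, and the paper's MSE-optimal construction is not reproduced here.

```python
import numpy as np

def fisher_information(stimulus, centers, width, peak_rate, t_window=1.0):
    """FI for independent Poisson neurons with Gaussian tuning curves:
    FI(s) = T * sum_i f_i'(s)^2 / f_i(s)."""
    f = peak_rate * np.exp(-(stimulus - centers) ** 2 / (2 * width ** 2))
    fprime = f * (centers - stimulus) / width ** 2
    return t_window * np.sum(fprime ** 2 / np.maximum(f, 1e-300))

centers = np.linspace(-10, 10, 50)   # preferred stimuli of 50 neurons (assumption)
for width in (0.5, 1.0, 2.0):        # tuning-curve widths to compare
    fi = fisher_information(0.0, centers, width, peak_rate=20.0)
    print(f"width {width:3.1f}: FI = {fi:8.1f}, Cramer-Rao bound on RMSE = {fi**-0.5:.4f}")
```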
Stress-strain relationship of high-strength steel (HSS) reinforcing bars
NASA Astrophysics Data System (ADS)
Anggraini, Retno; Tavio; Raka, I. Gede Putu; Agustiar
2018-05-01
The introduction of High-Strength Steel (HSS) reinforcing bars in reinforced concrete members has gained much attention in recent years and offers advantages such as construction time savings. It is also more economical, since it reduces the amount of reinforcing steel used in concrete members, which in turn alleviates reinforcement congestion. To date, building codes such as American Concrete Institute (ACI) 318M-14 and the Indonesian National Standard (SNI) 2847:2013 still restrict steel reinforcing bars in concrete design to Grade 420 MPa because of the suspected brittle behavior of concrete members reinforced with higher grades. This paper evaluates whether the stress-strain characteristics of HSS bars are comparable to those of Grade 420 MPa bars. To achieve the objective of the study, a series of steel bars of various grades (420, 550, 650, and 700 MPa) was selected. Tensile tests of these steel samples were conducted under displacement-controlled mode to capture the complete stress-strain curves, and particularly the post-yield response of the bars. The results indicate that all the bars tested had actual yield strengths greater than the corresponding specified values. The stress-strain curves of the HSS reinforcing bars (Grades 550, 650, and 700 MPa) exhibited slightly different characteristics from those of Grade 420 MPa bars.
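The 0.2% offset construction commonly used to define yield for bars without a sharp yield plateau can be sketched as follows; the modulus, the offset, and the synthetic curve are assumptions for illustration, not the paper's test data.

```python
import numpy as np

def offset_yield_strength(strain, stress, e_modulus=200e3, offset=0.002):
    """0.2%-offset yield strength (MPa): intersection of the measured curve
    with a line of slope E shifted along the strain axis by the offset."""
    line = e_modulus * (strain - offset)          # offset elastic line
    diff = stress - line
    idx = np.where(np.diff(np.sign(diff)))[0][0]  # first sign change
    # linear interpolation between the bracketing points
    frac = diff[idx] / (diff[idx] - diff[idx + 1])
    return stress[idx] + frac * (stress[idx + 1] - stress[idx])

# Synthetic saturating stress-strain curve, for illustration only
strain = np.linspace(0.0, 0.03, 1000)
stress = 560.0 * (1.0 - np.exp(-strain / 0.0028))  # not real test data
print(f"0.2% offset yield strength ~ {offset_yield_strength(strain, stress):.0f} MPa")
```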
10 CFR 50.55a - Codes and standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
Excerpt: applies to structures, systems, and components, which must be designed, fabricated, erected, and constructed to applicable codes and standards; covers standard design approval and standard design certification applications under part 52 of this chapter; and references Regulatory Guide 1.84, Revision 34, "Design, Fabrication, and Materials Code Case Acceptability, ASME Section III."
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawls, G.; Newhouse, N.; Rana, M.
2010-04-13
The Boiler and Pressure Vessel Project Team on Hydrogen Tanks was formed in 2004 to develop Code rules to address the various needs that had been identified for the design and construction of hydrogen storage vessels with pressures up to 15,000 psi. One of these needs was the development of Code rules for high-pressure composite vessels with non-load-sharing liners for stationary applications. In 2009, ASME approved the new Appendix 8 for the Section X Code, which contains the rules for these vessels. These vessels are designated as Class III vessels, with design pressures ranging from 20.7 MPa (3,000 psi) to 103.4 MPa (15,000 psi) and a maximum allowable outside liner diameter of 2.54 m (100 inches). The maximum design life of these vessels is limited to 20 years. Design, fabrication, and examination requirements have been specified, including acoustic emission testing at the time of manufacture. The Code rules include design qualification testing of prototype vessels. Qualification includes proof, expansion, burst, cyclic fatigue, creep, flaw, permeability, torque, penetration, and environmental testing.
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in computer-aided design for radiation transport code users in the nuclear field, in particular in core design and radiation analysis. (authors)
NASA Technical Reports Server (NTRS)
Reichel, R. H.; Hague, D. S.; Jones, R. T.; Glatt, C. R.
1973-01-01
This computer program manual describes, in two parts, the automated combustor design optimization code AUTOCOM. The program is written in FORTRAN IV. The input data setup and the program outputs are described, and a sample engine case is discussed. The program structure and programming techniques are also described, along with an analysis of the AUTOCOM program.
Effect of forming stresses on the strength of curved laminated beams of loblolly pine
George E. Woodson; Frederick F. Wangaard
1969-01-01
Curvature-stress factors reflecting the effect of forming stresses in producing curved beams of thin vertical-grain laminations of clear wood have been determined for loblolly pine. Strength retention of curved beams decreases with increasing severity of curvature but not to the degree suggested by the Wilson equation commonly used in design. Curved beams loaded on the...
Upper bounds on sequential decoding performance parameters
NASA Technical Reports Server (NTRS)
Jelinek, F.
1974-01-01
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.
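As a side illustration of the Pareto exponent mentioned above, here is a hedged sketch for a binary symmetric channel (not the paper's Gaussian channel, and ignoring the metric bias G): the exponent is taken as the root of E0(rho) = rho*R, with Gallager's E0 function for uniform inputs.

```python
import numpy as np

def e0_bsc(rho, p):
    """Gallager function E0(rho) for a BSC with crossover p, uniform inputs (bits)."""
    s = p ** (1.0 / (1.0 + rho)) + (1.0 - p) ** (1.0 / (1.0 + rho))
    return rho - (1.0 + rho) * np.log2(s)

def pareto_exponent(rate, p, lo=1e-6, hi=10.0, iters=100):
    """Solve E0(rho) = rho * R by bisection (valid for rates below capacity)."""
    f = lambda r: e0_bsc(r, p) - r * rate
    if f(hi) > 0.0:           # exponent lies beyond the search bracket
        return hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative rate and crossover probability (assumptions)
print(f"Pareto exponent at R=0.4, p=0.02: {pareto_exponent(0.4, 0.02):.3f}")
```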
NASA Astrophysics Data System (ADS)
Yudov, Yu. V.
2018-03-01
A model is presented of the interphase heat and mass transfer in the presence of noncondensable gases for the KORSAR/GP design code. This code was developed by FGUP NITI and the OKB Gidropress special design bureau. It was certified by Rostekhnadzor in 2009 for numerical substantiation of the safety of reactor installations with VVER reactors. The model is based on the assumption that there are three types of interphase heat and mass transfer of the vapor component: vapor condensation or evaporation at the interface under any thermodynamic conditions of the phases, pool boiling of liquid superheated above the saturation temperature at the total pressure, and spontaneous condensation in the volume of a gas phase supercooled below the saturation temperature at the vapor partial pressure. Condensation and evaporation at the interface occur continuously in a two-phase flow and control the time response of the interphase heat and mass transfer. Boiling and spontaneous condensation take place only under metastable conditions of the phases and proceed at high speed. The procedure used for calculating condensation and evaporation at the interface accounts for the combined diffusion and thermal resistance to mass transfer in all regimes of the two-phase flow. The proposed approach naturally accounts for a decrease (or increase) in the rate of steam condensation (or generation) in the presence of noncondensable components in the gas phase, due to a decrease (or increase) in the interface temperature relative to the saturation temperature at the vapor partial pressure. The model of interphase heat transfer also accounts for the dissolution of noncondensable components in the liquid and their release from it. The gas concentration at the interface and on the saturation curve is calculated by Henry's law. The mass transfer coefficient for gas dissolution is based on the heat and mass transfer analogy. Results are presented of the verification of the interphase heat and mass transfer model in the KORSAR/GP code against data on film condensation of steam-air flows in vertical pipes. The proposed model was also tested by solving a problem of nitrogen release from a supersaturated water solution.
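A minimal sketch of the Henry's-law step described above, computing the equilibrium dissolved-gas mole fraction at the interface; the Henry constant used is an order-of-magnitude textbook value for nitrogen in water and is an assumption here, not a KORSAR/GP coefficient.

```python
def henry_dissolved_fraction(partial_pressure_pa, henry_const_pa):
    """Equilibrium mole fraction of dissolved gas at the interface,
    x = p / H (Henry's law in mole-fraction form)."""
    return partial_pressure_pa / henry_const_pa

# Illustrative: nitrogen over water near room temperature;
# H ~ 9e9 Pa is an order-of-magnitude textbook value (assumption).
x_n2 = henry_dissolved_fraction(partial_pressure_pa=1.0e5, henry_const_pa=9.0e9)
print(f"Equilibrium N2 mole fraction in water: {x_n2:.2e}")
```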
Characterizing the UV-to-NIR shape of the dust attenuation curve of IR luminous galaxies up to z ∼ 2
NASA Astrophysics Data System (ADS)
Lo Faro, B.; Buat, V.; Roehlly, Y.; Alvarez-Marquez, J.; Burgarella, D.; Silva, L.; Efstathiou, A.
2017-12-01
In this work, we investigate the far-ultraviolet (UV) to near-infrared (NIR) shape of the dust attenuation curve of a sample of IR-selected dust obscured (ultra)luminous IR galaxies at z ∼ 2. The spectral energy distributions (SEDs) are fitted with Code Investigating GALaxy Emission, a physically motivated spectral-synthesis model based on energy balance. Its flexibility allows us to test a wide range of different analytical prescriptions for the dust attenuation curve, including the well-known Calzetti and Charlot & Fall curves, and modified versions of them. The attenuation curves computed under the assumption of our reference double power-law model are in very good agreement with those derived, in previous works, with radiative transfer (RT) SED fitting. We investigate the position of our galaxies in the IRX-β diagram and find this to be consistent with greyer slopes, on average, in the UV. We also find evidence for a flattening of the attenuation curve in the NIR with respect to more classical Calzetti-like recipes. This larger NIR attenuation yields larger derived stellar masses from SED fitting, by a median factor of ∼1.4 and up to a factor ∼10 for the most extreme cases. The star formation rate appears instead to be more dependent on the total amount of attenuation in the galaxy. Our analysis highlights the need for a flexible attenuation curve when reproducing the physical properties of a large variety of objects.
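As a hedged illustration of a flexible attenuation-curve parameterization of the kind such SED fitting allows (a modified Calzetti law with a power-law tilt delta and a Drude UV bump, in the spirit of Noll et al.; this is not the paper's reference double power law), a minimal sketch:

```python
import numpy as np

def k_calzetti(lam_um):
    """Calzetti (2000) starburst attenuation curve k(lambda), 0.12-2.2 um."""
    lam = np.asarray(lam_um, dtype=float)
    return np.where(
        lam >= 0.63,
        2.659 * (-1.857 + 1.040 / lam) + 4.05,
        2.659 * (-2.156 + 1.509 / lam - 0.198 / lam**2 + 0.011 / lam**3) + 4.05,
    )

def drude_bump(lam_um, e_bump, lam0=0.2175, dlam=0.035):
    """Drude profile for the 2175 A UV bump."""
    return (e_bump * (lam_um * dlam) ** 2 /
            ((lam_um**2 - lam0**2) ** 2 + (lam_um * dlam) ** 2))

def attenuation(lam_um, a_v, delta=0.0, e_bump=0.0):
    """A(lambda): modified Calzetti law with power-law tilt and UV bump."""
    k = k_calzetti(lam_um) + drude_bump(lam_um, e_bump)
    return a_v * (k / 4.05) * (np.asarray(lam_um) / 0.55) ** delta

lam = np.array([0.15, 0.2175, 0.3, 0.55, 1.0, 2.0])  # wavelengths in um
print(attenuation(lam, a_v=1.0, delta=-0.3, e_bump=1.0))
```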
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, X. H.; Fu, J. N.; Zha, Q., E-mail: jnfu@bnu.edu.cn
Time-series photometric observations were made for the SX Phoenicis star XX Cyg between 2007 and 2011 at the Xinglong Station of National Astronomical Observatories of China. With the light curves derived from the new observations, we do not detect any secondary maximum in the descending portion of the light curves of XX Cyg, as reported in some previous work. Frequency analysis of the light curves confirms a fundamental frequency f_0 = 7.4148 cycles day^-1 and up to 19 harmonics, 11 of which are newly detected. However, no secondary mode of pulsation is detected from the light curves. The O-C diagram, produced from 46 newly determined times of maximum light combined with those derived from the literature, reveals a continuous period increase at the rate (1/P)(dP/dt) = 1.19(13) × 10^-8 yr^-1. Theoretical rates of period change due to stellar evolution were calculated with a modeling code. The result shows that the observed rate of period change is fully consistent with period change caused by evolutionary behavior predicted by standard theoretical models.
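The rate of period change quoted above follows from the quadratic term of an O-C fit; a minimal sketch with synthetic timings (the noise level and epoch sampling are assumptions) using the fundamental period P = 1/f_0:

```python
import numpy as np

# Quadratic ephemeris: O-C = dT0 + dP*E + 0.5*(dP/dE)*E^2, so the fitted
# quadratic coefficient c gives dP/dE = 2c and (1/P)(dP/dt) = 2c/P^2 per day.
P = 1.0 / 7.4148                       # fundamental period in days (from f_0)
rate_per_yr = 1.19e-8                  # target (1/P)(dP/dt), as quoted above
c_true = 0.5 * (rate_per_yr / 365.25) * P**2

rng = np.random.default_rng(0)
epochs = np.linspace(0.0, 60000.0, 50)                         # cycle counts
oc = c_true * epochs**2 + rng.normal(0.0, 1e-4, epochs.size)   # days

c_fit = np.polyfit(epochs, oc, 2)[0]   # leading coefficient, days per cycle^2
rate = 2.0 * c_fit / P**2 * 365.25     # back to yr^-1
print(f"recovered (1/P)(dP/dt) ~ {rate:.2e} per year")
```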
Curved Thermopiezoelectric Shell Structures Modeled by Finite Element Analysis
NASA Technical Reports Server (NTRS)
Lee, Ho-Jun
2000-01-01
"Smart" structures composed of piezoelectric materials may significantly improve the performance of aeropropulsion systems through a variety of vibration, noise, and shape-control applications. The development of analytical models for piezoelectric smart structures is an ongoing, in-house activity at the NASA Glenn Research Center at Lewis Field focused toward the experimental characterization of these materials. Research efforts have been directed toward developing analytical models that account for the coupled mechanical, electrical, and thermal response of piezoelectric composite materials. Current work revolves around implementing thermal effects into a curvilinear-shell finite element code. This enhances capabilities to analyze curved structures and to account for coupling effects arising from thermal effects and the curved geometry. The current analytical model implements a unique mixed multi-field laminate theory to improve computational efficiency without sacrificing accuracy. The mechanics can model both the sensory and active behavior of piezoelectric composite shell structures. Finite element equations are being implemented for an eight-node curvilinear shell element, and numerical studies are being conducted to demonstrate capabilities to model the response of curved piezoelectric composite structures (see the figure).
Spherical nanoindentation stress-strain analysis, Version 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weaver, Jordan S.; Turner, David; Miller, Calvin
Nanoindentation is a tool that allows the mechanical response of a variety of materials at the nano to micron length scale to be measured. Recent advances in spherical nanoindentation techniques have allowed for a more reliable and meaningful characterization of the mechanical response from nanoindentation experiments in the form of an indentation stress-strain curve. This code base, Spin, is written in MATLAB (The Mathworks, Inc.) and based on the analysis protocols developed by S.R. Kalidindi and S. Pathak [1, 2]. The inputs include the displacement, load, harmonic contact stiffness, harmonic displacement, and harmonic load from spherical nanoindentation tests in the form of an Excel (Microsoft) spreadsheet. The outputs include indentation stress-strain curves and indentation properties, as well as their variance due to the uncertainty of the zero-point correction, in the form of MATLAB data (.mat) and figures (.png). [1] S. Pathak, S.R. Kalidindi. Spherical nanoindentation stress-strain curves, Mater. Sci. Eng. R-Rep 91 (2015). [2] S.R. Kalidindi, S. Pathak. Determination of the effective zero-point and the extraction of spherical nanoindentation stress-strain curves, Acta Materialia 56 (2008) 3523-3532.
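A hedged sketch of the core stress-strain conversion from the cited protocols, written here in Python rather than the MATLAB of the Spin code base, with the zero-point correction (central to reference [2]) assumed already applied:

```python
import numpy as np

def indentation_stress_strain(load_mn, disp_nm, stiffness_n_per_m, e_eff_gpa):
    """Kalidindi-Pathak conversion: contact radius a = S / (2 E_eff),
    sigma = P / (pi a^2), eps = (4 / (3 pi)) * h / a."""
    P = np.asarray(load_mn) * 1e-3        # mN -> N
    h = np.asarray(disp_nm) * 1e-9        # nm -> m
    S = np.asarray(stiffness_n_per_m)     # N/m
    a = S / (2.0 * e_eff_gpa * 1e9)       # contact radius, m
    sigma = P / (np.pi * a**2) / 1e9      # GPa
    eps = 4.0 * h / (3.0 * np.pi * a)
    return eps, sigma

# Synthetic Hertzian elastic loading as a check (R = 10 um, E_eff = 100 GPa)
R, Eeff = 10e-6, 100.0
h = np.linspace(1, 100, 50) * 1e-9
P = (4.0 / 3.0) * Eeff * 1e9 * np.sqrt(R) * h**1.5
S = 2.0 * Eeff * 1e9 * np.sqrt(R * h)
eps, sig = indentation_stress_strain(P * 1e3, h * 1e9, S, Eeff)
print(sig[-1] / eps[-1])  # elastic slope recovers E_eff, ~100 GPa
```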
NASA Astrophysics Data System (ADS)
Salim, Samir; Boquien, Médéric; Lee, Janice C.
2018-05-01
We study the dust attenuation curves of 230,000 individual galaxies in the local universe, ranging from quiescent to intensely star-forming systems, using GALEX, SDSS, and WISE photometry calibrated on the Herschel ATLAS. We use a new method of constraining SED fits with infrared luminosity (SED+LIR fitting), and parameterized attenuation curves determined with the CIGALE SED-fitting code. Attenuation curve slopes and UV bump strengths are reasonably well constrained independently from one another. We find that A_λ/A_V attenuation curves exhibit a very wide range of slopes that are on average as steep as the curve slope of the Small Magellanic Cloud (SMC). The slope is a strong function of optical opacity. Opaque galaxies have shallower curves—in agreement with recent radiative transfer models. The dependence of slopes on the opacity produces an apparent dependence on stellar mass: more massive galaxies have shallower slopes. Attenuation curves exhibit a wide range of UV bump amplitudes, from none to Milky Way (MW)-like, with an average strength one-third that of the MW bump. Notably, local analogs of high-redshift galaxies have an average curve that is somewhat steeper than the SMC curve, with a modest UV bump that can be, to first order, ignored, as its effect on the near-UV magnitude is 0.1 mag. Neither the slopes nor the strengths of the UV bump depend on gas-phase metallicity. Functional forms for attenuation laws are presented for normal star-forming galaxies, high-z analogs, and quiescent galaxies. We release the catalog of associated star formation rates and stellar masses (GALEX–SDSS–WISE Legacy Catalog 2).
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
Optimum Design of Aerospace Structural Components Using Neural Networks
NASA Technical Reports Server (NTRS)
Berke, L.; Patnaik, S. N.; Murthy, P. L. N.
1993-01-01
The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires a trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network using the code NETS. Optimum designs for new design conditions were predicted using the trained network. Neural net prediction of optimum designs was found to be satisfactory for the majority of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.
On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
Perhaps the most prevalent use of statistics in engineering design is through Taguchi's parameter and robust design -- using orthogonal arrays to compute signal-to-noise ratios in a process of design improvement. In our view, however, there is an equally exciting use of statistics in design that could become just as prevalent: it is the concept of metamodeling whereby statistical models are built to approximate detailed computer analysis codes. Although computers continue to get faster, analysis codes always seem to keep pace so that their computational time remains non-trivial. Through metamodeling, approximations of these codes are built that are orders of magnitude cheaper to run. These metamodels can then be linked to optimization routines for fast analysis, or they can serve as a bridge for integrating analysis codes across different domains. In this paper we first review metamodeling techniques that encompass design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We discuss their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of metamodeling techniques in given situations and how common pitfalls can be avoided.
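As a minimal illustration of metamodeling, the sketch below fits a quadratic response surface by least squares to samples of a stand-in "analysis code"; the sampled function and design of experiments are assumptions chosen so the surrogate is exact, purely to show the machinery.

```python
import numpy as np

def expensive_analysis(x1, x2):
    """Stand-in for a costly analysis code (chosen quadratic on purpose,
    so the quadratic surrogate can reproduce it exactly)."""
    return 4.0 + 2.0 * x1 - 3.0 * x2 + x1 * x2 + 0.5 * x1**2

# Small design of experiments: a 5 x 5 grid of sample points
g1, g2 = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
x1, x2 = g1.ravel(), g2.ravel()
y = expensive_analysis(x1, x2)

# Quadratic response surface fitted by least squares
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def metamodel(p1, p2):
    return beta @ np.array([1.0, p1, p2, p1**2, p2**2, p1 * p2])

print(f"code: {expensive_analysis(0.5, -1.0):.3f}, surrogate: {metamodel(0.5, -1.0):.3f}")
```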
Creation and utilization of a World Wide Web based space radiation effects code: SIREST
NASA Technical Reports Server (NTRS)
Singleterry, R. C. Jr; Wilson, J. W.; Shinn, J. L.; Tripathi, R. K.; Thibeault, S. A.; Noor, A. K.; Cucinotta, F. A.; Badavi, F. F.; Chang, C. K.; Qualls, G. D.;
2001-01-01
In order for humans and electronics to fully and safely operate in the space environment, codes like HZETRN (High Charge and Energy Transport) must be included in any designer's toolbox for design evaluation with respect to radiation damage. Currently, spacecraft designers do not have easy access to accurate radiation codes like HZETRN to evaluate their designs for radiation effects on humans and electronics. Today, the World Wide Web is sophisticated enough to support the entire HZETRN code and all of the associated pre- and post-processing tools. This package is called SIREST (Space Ionizing Radiation Effects and Shielding Tools). There are many advantages to SIREST. The most important is the instant update capability of the web. Another major advantage is the modularity that the web imposes on the code. At present, the major disadvantage of SIREST is its modularity within the designer's system, which mostly stems from the fact that a consistent interface between the designer and the computer system used to evaluate the design remains incomplete. This, however, is to be solved in the Intelligent Synthesis Environment (ISE) program currently being funded by NASA.
Nuclear thermal propulsion engine system design analysis code development
NASA Astrophysics Data System (ADS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.
1992-01-01
A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output; this model was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C. C.
The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities in computer-based design, analysis, and theory tools. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components; photonics and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallay, D.; Wiehagen, J.
2014-07-01
Winchester/Camberley Homes collaborated with the Building America team Partnership for Home Innovation to develop a new set of high-performance home designs that could be applicable on a production scale. The new home designs are to be constructed in the mixed-humid climate zone and could eventually apply to all of the builder's home designs to meet or exceed future energy codes or performance-based programs. However, the builder recognized that the combination of new wall framing designs and materials, higher levels of insulation in the wall cavity, and more detailed air sealing to achieve lower infiltration rates changes the moisture characteristics of the wall system. To ensure long-term durability and repeatable successful implementation with few call-backs, the project team demonstrated through measured data that the wall system functions as a dynamic system, responding to changing interior and outdoor environmental conditions within recognized limits of the materials that make up the wall system. A similar investigation was made with respect to the complete redesign of the HVAC systems to significantly improve efficiency while maintaining indoor comfort. Recognizing the need to demonstrate the benefits of these efficiency features, the builder offered a new house model to serve as a test case to develop framing designs; evaluate material selections, installation requirements, changes to work scopes, and contractor learning curves; and compare theoretical performance characteristics with measured results.
49 CFR 1248.100 - Commodity classification designated.
Code of Federal Regulations, 2010 CFR
2010-10-01
Excerpt: ... STATISTICS, Commodity Code, § 1248.100, Commodity classification designated. Commencing with reports for the ..., reports of commodity statistics required to be made to the Board shall be based on the commodity codes ... Statistics, 1963, issued by the Bureau of the Budget, and on additional codes 411 through 462 shown in § 1248 ...
NASA Technical Reports Server (NTRS)
Shapiro, Wilbur
1991-01-01
The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user-friendly interaction, context-sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. Among the first codes to be completed, and presently being incorporated into the KBS, are the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.
HERCULES: A Pattern Driven Code Transformation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing
2012-01-01
New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist separate the two concerns, which improves code maintenance and facilitates performance optimization. The system combines three technologies (code patterns, transformation scripts, and compiler plugins) to provide the scientist with an environment for quickly implementing code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation, and an initial evaluation of HERCULES.
Interconnect fatigue design for terrestrial photovoltaic modules
NASA Technical Reports Server (NTRS)
Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.
1982-01-01
The results of a comprehensive investigation of interconnect fatigue, which has led to the definition of useful reliability-design and life-prediction algorithms, are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To fill this shortcoming, the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous, i.e., quantitative, interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.
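A hedged sketch of how a mean strain-cycle curve plus lognormal scatter yields failure probabilities of the kind described above; the Coffin-Manson constants, scatter, and cycling schedule are illustrative assumptions, not the paper's fitted values.

```python
from math import erf, log, sqrt

def cycles_to_failure(strain_amplitude, eps_f=0.3, c=-0.5):
    """Mean Coffin-Manson curve: eps_a = eps_f * (2*Nf)**c  ->  Nf."""
    return 0.5 * (strain_amplitude / eps_f) ** (1.0 / c)

def failure_probability(n_cycles, strain_amplitude, sigma_log=0.5):
    """Fraction of interconnects failed by n_cycles, assuming lognormal
    scatter (std dev sigma_log in ln Nf) about the mean curve."""
    nf_mean = cycles_to_failure(strain_amplitude)
    z = (log(n_cycles) - log(nf_mean)) / sigma_log
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# 20-year design life at one thermal cycle per day (illustrative schedule)
n_design = 20 * 365
for eps in (0.001, 0.002, 0.004):
    print(f"strain {eps:.3f}: mean Nf = {cycles_to_failure(eps):9.0f}, "
          f"P(fail by {n_design} cycles) = {failure_probability(n_design, eps):.3f}")
```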
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.
2013-01-01
Nondimensional linear-bifurcation buckling equations for balanced, symmetrically laminated cylinders with negligible shell-wall anisotropies and subjected to uniform axial compression loads are presented. These equations are solved exactly for the practical case of simply supported ends. Nondimensional quantities are used to characterize the buckling behavior that consist of a stiffness-weighted length-to-radius parameter, a stiffness-weighted shell-thinness parameter, a shell-wall nonhomogeneity parameter, two orthotropy parameters, and a nondimensional buckling load. Ranges for the nondimensional parameters are established that encompass a wide range of laminated-wall constructions and numerous generic plots of nondimensional buckling load versus a stiffness-weighted length-to-radius ratio are presented for various combinations of the other parameters. These plots are expected to include many practical cases of interest to designers. Additionally, these plots show how the parameter values affect the distribution and size of the festoons forming each response curve and how they affect the attenuation of each response curve to the corresponding solution for an infinitely long cylinder. To aid in preliminary design studies, approximate formulas for the nondimensional buckling load are derived, and validated against the corresponding exact solution, that give the attenuated buckling response of an infinitely long cylinder in terms of the nondimensional parameters presented herein. A relatively small number of "master curves" are identified that give a nondimensional measure of the buckling load of an infinitely long cylinder as a function of the orthotropy and wall inhomogeneity parameters. These curves reduce greatly the complexity of the design-variable space as compared to representations that use dimensional quantities as design variables. As a result of their inherent simplicity, these master curves are anticipated to be useful in the ongoing development of buckling-design technology.
Turbine blade profile design method based on Bezier curves
NASA Astrophysics Data System (ADS)
Alexeev, R. A.; Tishchenko, V. A.; Gribin, V. G.; Gavrilov, I. Yu.
2017-11-01
In this paper, a technique for two-dimensional parametric blade profile design is presented. Bezier curves are used to create the profile geometry. The main feature of the proposed method is an adaptive approach to fitting curves to given geometric conditions. Calculation of the profile shape is performed by a multi-dimensional minimization method with a number of restrictions imposed on the blade geometry. The proposed method has been used to describe the parametric geometry of a known blade profile. The baseline geometry was then modified by varying some parameters of the blade. Numerical calculation of the obtained designs has been carried out, and the results have demonstrated the efficiency of the chosen approach.
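A minimal sketch of the geometric kernel behind such parameterizations, evaluating a Bezier curve of arbitrary degree with de Casteljau's algorithm (the control points are illustrative, not a real blade profile):

```python
import numpy as np

def bezier(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by de Casteljau's
    algorithm (numerically stable for any curve degree)."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Illustrative control polygon (assumption, not an actual blade section)
ctrl = [(0.0, 0.0), (0.2, 0.15), (0.6, 0.2), (1.0, 0.05)]
curve = np.array([bezier(ctrl, t) for t in np.linspace(0.0, 1.0, 11)])
print(curve)
```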
Bilayer Protograph Codes for Half-Duplex Relay Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hops return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Although direct to Earth return links are limited by the size and power of lander devices, using an additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match to a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses simultaneously two important issues: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization. The main problem of half-duplex relay coding can be reduced to the simultaneous design of two codes at two rates and two SNRs (signal-to-noise ratios), such that one is a subset of the other. This problem can be addressed by forceful optimization, but a clever method of addressing this problem is via the bilayer lengthened (BL) LDPC structure. This method uses a bilayer Tanner graph to make the two codes while using a concept of "parity forwarding" with subsequent successive decoding that removes the need to directly address the issue of uneven SNRs among the symbols of a given codeword. This method is attractive in that it addresses some of the main issues in the design of relay codes, but it does not by itself give rise to highly structured codes with simple encoding, nor does it give rate-compatible codes. The main contribution of this work is to construct a class of codes that simultaneously possess a bilayer parity-forwarding mechanism, while also benefiting from the properties of protograph codes having an easy encoding, a modular design, and being a rate-compatible code.
Variation of the period and light curves of the solar-type contact binary EQ Tauri
NASA Astrophysics Data System (ADS)
Yuan, Jinzhao; Qian, Shengbang
2007-10-01
We present two new sets of complete light curves of EQ Tauri (EQ Tau) observed in 2000 October and 2004 December. These were analysed, together with the light curves obtained by Yang & Liu in 2001 December, with the 2003 version of the Wilson-Devinney code. In the three observing seasons, the light curves show a noticeable variation on a time-scale of years. The more massive component of EQ Tau is a solar-type star (G2) with a very deep convective envelope, which rotates about 80 times as fast as the Sun. Therefore, the change can be explained by dark-spot activity on the common convective envelope. The assumed unperturbed part of the light curve and the radial velocities published by Rucinski et al. were used to determine the basic parameters of the system, which were kept fixed for spot modelling in the three sets of light curves. The results reveal that the total spotted area on the more massive component covers 18, 3 and 20 per cent of the photospheric surface in the three observing seasons, respectively. Polar spots and high-latitude spots are found. The analysis of the orbital period has demonstrated that it undergoes cyclical oscillation, which is due to either a tertiary component or periodic magnetic activity in the more massive component.
NASA Astrophysics Data System (ADS)
Demissie, Y. K.; Mortuza, M. R.; Li, H. Y.
2015-12-01
The observed and anticipated increasing trends in extreme storm magnitude and frequency, as well as the associated flooding risk, in the Pacific Northwest have highlighted the need to revise and update the local intensity-duration-frequency (IDF) curves commonly used for designing critical water infrastructure. In Washington State, much of the drainage system installed in the last several decades uses IDF curves that are outdated by as much as half a century, making the system inadequate and vulnerable to flooding, as seen more frequently in recent years. In this study, we have developed new and forward-looking rainfall and runoff IDF curves for each county in Washington State using recently observed and projected precipitation data. Regional frequency analysis, coupled with Bayesian uncertainty quantification and model averaging methods, was used to develop and update the rainfall IDF curves, which were then used in watershed and snow models to develop runoff IDF curves that explicitly account for the effects of snow and drainage characteristics. The resulting rainfall and runoff IDF curves provide more reliable, forward-looking, and spatially resolved characteristics of storm events that can assist local decision makers and engineers in thoroughly reviewing and updating current design standards for urban and rural storm water management infrastructure, in order to reduce the potential ramifications of increasingly severe storms and resulting floods on existing and planned storm drainage and flood management systems in the state.
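As a hedged illustration of how an IDF point is commonly derived (not necessarily the regional Bayesian procedure of this study), the sketch below fits a Gumbel distribution to synthetic annual-maximum intensities by the method of moments and evaluates design intensities for several return periods:

```python
import numpy as np

def gumbel_fit_moments(annual_maxima):
    """Method-of-moments Gumbel parameters (location mu, scale beta)."""
    mean, std = np.mean(annual_maxima), np.std(annual_maxima, ddof=1)
    beta = std * np.sqrt(6.0) / np.pi
    mu = mean - 0.5772 * beta  # Euler-Mascheroni constant
    return mu, beta

def design_intensity(mu, beta, return_period_yr):
    """Intensity with annual exceedance probability 1/T."""
    p_nonexceed = 1.0 - 1.0 / return_period_yr
    return mu - beta * np.log(-np.log(p_nonexceed))

rng = np.random.default_rng(1)
annual_max_1h = rng.gumbel(loc=20.0, scale=6.0, size=60)  # synthetic mm/h record
mu, beta = gumbel_fit_moments(annual_max_1h)
for T in (2, 10, 25, 100):
    print(f"{T:4d}-yr, 1-h design intensity: {design_intensity(mu, beta, T):6.1f} mm/h")
```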
NASA Astrophysics Data System (ADS)
Ji, Kun; Ren, Yefei; Wen, Ruizhi
2017-10-01
Reliable site classification of the stations of the China National Strong Motion Observation Network System (NSMONS) has not yet been assigned because of a lack of borehole data. This study used an empirical horizontal-to-vertical spectral ratio (HVSR) site classification method to overcome this problem. First, according to their borehole data, stations selected from KiK-net in Japan were individually assigned a site class (CL-I, CL-II, or CL-III) as defined in the Chinese seismic code. Then, the mean HVSR curve for each site class was computed using strong motion recordings captured during the period 1996-2012. These curves were compared with those proposed by Zhao et al. (2006a) for the four site classes (SC-I, SC-II, SC-III, and SC-IV) defined in the Japanese seismic code (JRA, 1980). It was found that an approximate range of the predominant period Tg could be identified from the predominant peak of the HVSR curve for the CL-I and SC-I sites, the CL-II and SC-II sites, and the CL-III and SC-III + SC-IV sites. Second, an empirical site classification method was proposed based on comprehensive consideration of the peak period, amplitude, and shape of the HVSR curve. The selected KiK-net stations were classified using the proposed method; the success rates in identifying CL-I, CL-II, and CL-III sites were 63%, 64%, and 58%, respectively. Finally, the HVSRs of 178 NSMONS stations were computed from recordings made between 2007 and 2015, and the sites were classified using the proposed method. The mean HVSR curves were re-calculated for the three site classes and compared with those from KiK-net data. Both the peak period and the amplitude were similar for the mean HVSR curves derived from the NSMONS classification results and from the KiK-net borehole data, implying the effectiveness of the proposed method in identifying different site classes. The classification results agree well with site classes based on borehole data for 81 stations in China, which indicates that our site classification results are acceptable and the proposed method practicable.
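A minimal sketch of the basic HVSR computation described above, using simple moving-average smoothing in place of the Konno-Ohmachi smoothing common in practice; the synthetic records and the 2 Hz resonance are assumptions for illustration:

```python
import numpy as np

def hvsr(ns, ew, ud, dt, smooth_pts=11):
    """H/V spectral ratio: geometric mean of the two horizontal amplitude
    spectra divided by the vertical, with moving-average smoothing."""
    n = len(ud)
    freqs = np.fft.rfftfreq(n, dt)
    spec = lambda x: np.abs(np.fft.rfft(x * np.hanning(n)))
    kernel = np.ones(smooth_pts) / smooth_pts
    sm = lambda a: np.convolve(a, kernel, mode="same")
    h = np.sqrt(sm(spec(ns)) * sm(spec(ew)))
    v = sm(spec(ud))
    return freqs[1:], h[1:] / np.maximum(v[1:], 1e-20)  # skip the DC bin

# Synthetic example: horizontal components carry a 2 Hz resonance
dt = 0.01
t = np.arange(0, 40.96, dt)
rng = np.random.default_rng(0)
ud = rng.normal(size=t.size)
ns = rng.normal(size=t.size) + 3.0 * np.sin(2 * np.pi * 2.0 * t)
ew = rng.normal(size=t.size) + 3.0 * np.sin(2 * np.pi * 2.0 * t)
f, ratio = hvsr(ns, ew, ud, dt)
print(f"predominant frequency ~ {f[np.argmax(ratio)]:.2f} Hz")
```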
Low-density parity-check codes for volume holographic memory systems.
Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali
2003-02-10
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.
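For readers new to LDPC decoding, a toy sketch follows: a small parity-check matrix with hard-decision bit-flipping decoding. Real codes for VHM systems are far larger, irregular, and decoded with soft information, as the abstract stresses; this only illustrates the parity-check machinery.

```python
import numpy as np

# Toy (7,4) Hamming-style parity-check matrix; real LDPC codes for VHM
# systems are much larger, irregular, and decoded with soft decisions.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iter=20):
    """Hard-decision bit flipping: repeatedly flip the bit involved in the
    most unsatisfied parity checks until the syndrome is zero."""
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r                  # valid codeword reached
        unsat = syndrome @ H          # failed-check count per bit
        r[np.argmax(unsat)] ^= 1
    return r

codeword = np.zeros(7, dtype=int)     # the all-zero word is always a codeword
received = codeword.copy()
received[2] ^= 1                      # inject a single bit error
print(bit_flip_decode(received, H))   # -> all zeros again
```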
Design curves for circular and annular duct silencers
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Ramakrishnan, R.
1989-01-01
Conventional models of sound propagation between porous walls (Scott, 1946) are adapted in order to calculate design curves for the lined circular and annular-duct silencers used in HVAC systems. The derivation of the governing equations is outlined, and results for two typical cases are presented graphically. Good agreement with published experimental data is demonstrated.
Development of a Probabilistic Tsunami Hazard Analysis in Japan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toshiaki Sakai; Tomoyoshi Takeda; Hiroshi Soraoka
2006-07-01
It is meaningful for tsunami assessment, as for seismic design, to evaluate phenomena beyond the design basis. Once a design-basis tsunami height has been set, there remains a possibility that tsunami heights will exceed it, owing to uncertainties in the tsunami phenomena. Probabilistic tsunami risk assessment consists of estimating the tsunami hazard and the fragility of structures and executing a system analysis. In this report, we apply a method for probabilistic tsunami hazard analysis (PTHA). We introduce a logic-tree approach to estimate tsunami hazard curves (relationships between tsunami height and probability of exceedance) and present an example for Japan. Examples of tsunami hazard curves are illustrated, and uncertainty in the tsunami hazard is displayed by 5-, 16-, 50-, 84- and 95-percentile and mean hazard curves. The results of the PTHA will be used for quantitative assessment of the tsunami risk to important facilities located in coastal areas. Tsunami hazard curves are reasonable input data for structure and system analyses. However, the evaluation method for estimating the fragility of structures and the procedure for system analysis are still being developed. (authors)
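A hedged sketch of the logic-tree aggregation described above: weighted branch exceedance curves are combined into a mean hazard curve and weighted fractiles; the branch curves and weights are invented for illustration.

```python
import numpy as np

heights = np.linspace(0.0, 15.0, 151)   # tsunami height grid, m

def branch_curve(scale):
    """Illustrative exceedance curve: annual P(height > h) decays exponentially."""
    return 1e-2 * np.exp(-heights / scale)

# Logic-tree branches (scales and weights are assumptions)
scales = np.array([1.0, 1.5, 2.0, 2.5])
weights = np.array([0.2, 0.3, 0.3, 0.2])
curves = np.array([branch_curve(s) for s in scales])

mean_curve = weights @ curves
order = np.argsort(curves, axis=0)       # branch ordering at each height

def fractile(q):
    """Weighted fractile of the branch curves at each height."""
    out = np.empty_like(heights)
    for j in range(heights.size):
        c = np.cumsum(weights[order[:, j]])
        out[j] = curves[order[:, j][np.searchsorted(c, q)], j]
    return out

for h in (5.0, 10.0):
    i = np.searchsorted(heights, h)
    print(f"h={h:4.1f} m: mean AEP={mean_curve[i]:.2e}, "
          f"16th={fractile(0.16)[i]:.2e}, 84th={fractile(0.84)[i]:.2e}")
```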
78 FR 37885 - Approval of American Society of Mechanical Engineers' Code Cases
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-24
Excerpt: ... standard design certifications, standard design approvals and manufacturing licenses, to use the Code Cases ... approved by the ASME. The three RGs that would be incorporated by reference are RG 1.84, "Design, Fabrication ..." ... nuclear power plant licensees, and applicants for CPs, OLs, COLs, standard design certifications, standard ...
A Case Study of Reverse Engineering Integrated in an Automated Design Process
NASA Astrophysics Data System (ADS)
Pescaru, R.; Kyratsis, P.; Oancea, G.
2016-11-01
This paper presents a design methodology which automates the generation of curves extracted from point clouds obtained by digitizing physical objects. The methodology is demonstrated on a product from the consumer goods industry, namely a footwear-type product with a complex shape containing many curves. The final result is the automated generation of wrapping curves, surfaces, and solids according to the characteristics of the customer's foot and the preferences for the chosen model, which leads to the development of customized products.
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-07-01
In this work a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed with a graphical user interface under the LabVIEW programming environment. The main features of this code are the use of an embedded artificial neural network architecture optimized with the "robust design of artificial neural networks" methodology and the use of two Bonner spheres as the only piece of information. To build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted, and the NSDann2BS code was designed using a graphical framework built on the LabVIEW programming environment. This code is friendly, intuitive, and easy to use for the end user, and it is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the count rates of 252Cf, 241AmBe and 239PuBe neutron sources, measured with a Bonner spheres system, were used as input.
NASA Technical Reports Server (NTRS)
Steinke, Ronald J.
1989-01-01
The Rai ROTOR1 code for two-dimensional, unsteady viscous flow analysis was applied to a supersonic throughflow fan stage design. The axial Mach number for this fan design increases from 2.0 at the inlet to 2.9 at the outlet. The Rai code uses overlapped O- and H-grids that are appropriately packed. The Rai code was run on a Cray XMP computer; data postprocessing and graphics were then performed to obtain detailed insight into the stage flow. The large rotor wakes uniformly traversed the rotor-stator interface and dispersed as they passed through the stator passage. Only weak blade shock losses were computed, which supports the design goals. Strong viscous effects caused large blade wakes and a low fan efficiency. Rai code flow predictions were essentially steady for the rotor, and they compared well with Chima rotor viscous code predictions based on a C-grid of similar density.
NASA Technical Reports Server (NTRS)
Sandlin, Doral R.; Howard, Kipp E.
1991-01-01
A user friendly FORTRAN code that can be used for preliminary design of V/STOL aircraft is described. The program estimates lift increments, due to power induced effects, encountered by aircraft in V/STOL flight. These lift increments are calculated using empirical relations developed from wind tunnel tests and are due to suckdown, fountain, ground vortex, jet wake, and the reaction control system. The code can be used as a preliminary design tool along with NASA Ames' Aircraft Synthesis design code or as a stand-alone program for V/STOL aircraft designers. The Power Induced Effects (PIE) module was validated using experimental data and data computed from lift increment routines. Results are presented for many flat plate models along with the McDonnell Aircraft Company's MFVT (mixed flow vectored thrust) V/STOL preliminary design and a 15 percent scale model of the YAV-8B Harrier V/STOL aircraft. Trends and magnitudes of lift increments versus aircraft height above the ground were predicted well by the PIE module. The code also provided good predictions of the magnitudes of lift increments versus aircraft forward velocity. More experimental results are needed to determine how well the code predicts lift increments as they vary with jet deflection angle and angle of attack. The FORTRAN code is provided in the appendix.
CyberShake: Running Seismic Hazard Workflows on Distributed HPC Resources
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Graves, R. W.; Gill, D.; Olsen, K. B.; Milner, K. R.; Yu, J.; Jordan, T. H.
2013-12-01
As part of its program of earthquake system science research, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by simulating a tensor-valued wavefield of Strain Green Tensors (SGTs), and then using seismic reciprocity to calculate synthetic seismograms for about 415,000 events per site of interest. These seismograms are processed to compute ground motion intensity measures, which are then combined with probabilities from an earthquake rupture forecast to produce a site-specific hazard curve. Seismic hazard curves for hundreds of sites in a region can be used to calculate a seismic hazard map, representing the seismic hazard for a region. We present a recently completed PSHA study in which we calculated four CyberShake seismic hazard maps for the Southern California area to compare how CyberShake hazard results are affected by different SGT computational codes (AWP-ODC and AWP-RWG) and different community velocity models (Community Velocity Model - SCEC (CVM-S4) v11.11 and Community Velocity Model - Harvard (CVM-H) v11.9). We present our approach to running workflow applications on distributed HPC resources, including systems without support for remote job submission. We show how our approach extends the benefits of scientific workflows, such as job and data management, to large-scale applications on Track 1 and Leadership class open-science HPC resources. We used our distributed workflow approach to perform CyberShake Study 13.4 on two new NSF open-science HPC computing resources, Blue Waters and Stampede, executing over 470 million tasks to calculate physics-based hazard curves for 286 locations in the Southern California region. For each location, we calculated seismic hazard curves with two different community velocity models and two different SGT codes, resulting in over 1100 hazard curves. We will report on the performance of this CyberShake study, four times larger than previous studies. Additionally, we will examine the challenges we face applying these workflow techniques to additional open-science HPC systems and discuss whether our workflow solutions continue to provide value to our large-scale PSHA calculations.
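A minimal sketch of the hazard-curve step described above: combine one intensity measure per rupture (derived from the synthetic seismograms) with that rupture's annual rate from the earthquake rupture forecast to obtain the annual probability of exceeding each ground-motion level. All data here are invented; CyberShake itself performs this combination over roughly 415,000 events per site:

```python
# Toy hazard-curve calculation with synthetic rates and intensity measures.
import numpy as np

rng = np.random.default_rng(1)
n_events = 1000
annual_rate = rng.uniform(1e-6, 1e-3, n_events)          # rupture rates (1/yr)
im = rng.lognormal(mean=-2.0, sigma=0.8, size=n_events)  # e.g. spectral accel. in g

levels = np.logspace(-2, 0, 50)                           # ground-motion levels (g)
# Total annual rate of exceedance at each level, then Poisson probability
rate_exceed = np.array([annual_rate[im > x].sum() for x in levels])
prob_exceed = 1.0 - np.exp(-rate_exceed)                  # in one year

for x, p in zip(levels[::10], prob_exceed[::10]):
    print(f"SA > {x:6.3f} g : P(1 yr) = {p:.2e}")
```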
Optimized mid-infrared thermal emitters for applications in aircraft countermeasures
NASA Astrophysics Data System (ADS)
Lorenzo, Simón G.; You, Chenglong; Granier, Christopher H.; Veronis, Georgios; Dowling, Jonathan P.
2017-12-01
We introduce an optimized aperiodic multilayer structure capable of broad angle and high temperature thermal emission over the 3 μm to 5 μm atmospheric transmission band. This aperiodic multilayer structure composed of alternating layers of silicon carbide and graphite on top of a tungsten substrate exhibits near maximal emittance in a 2 μm wavelength range centered in the mid-wavelength infrared band traditionally utilized for atmospheric transmission. We optimize the layer thicknesses using a hybrid optimization algorithm coupled to a transfer matrix code to maximize the power emitted in this mid-infrared range normal to the structure's surface. We investigate possible applications for these structures in mimicking 800-1000 K aircraft engine thermal emission signatures and in improving countermeasure effectiveness against hyperspectral imagers. We find these structures capable of matching the Planck blackbody curve in the selected infrared range with relatively sharp cutoffs on either side, leading to increased overall efficiency of the structures. Appropriately optimized multilayer structures with this design could be tailored to match a variety of mid-infrared thermal emission signatures. For aircraft countermeasure applications, this method could yield a flare design capable of mimicking engine spectra and breaking the lock of hyperspectral imaging systems.
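A sketch of the figure of merit implied above: the power emitted normal to the surface in the 3-5 μm band, i.e. the emittance-weighted Planck integral, which the hybrid optimizer would maximize over layer thicknesses. The emittance function here is a flat placeholder standing in for the transfer-matrix calculation of the SiC/graphite/W stack:

```python
# Band-integrated emitted power as an optimization objective (sketch).
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

def planck(lam, T):
    """Spectral radiance B(lambda, T), W / (m^2 sr m)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def emittance(lam, thicknesses):
    # Placeholder: a real implementation would run a transfer-matrix solve
    # for the multilayer stack at each wavelength.
    return 0.9 * np.ones_like(lam)

def band_power(thicknesses, T=900.0):
    lam = np.linspace(3e-6, 5e-6, 500)      # 3-5 micron band
    integrand = emittance(lam, thicknesses) * planck(lam, T)
    return np.trapz(integrand, lam)         # W / (m^2 sr), normal direction

print(band_power(thicknesses=[0.4e-6, 0.7e-6, 0.3e-6]))  # illustrative stack
```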
A new supernova light curve modeling program
NASA Astrophysics Data System (ADS)
Jäger, Zoltán; Nagy, Andrea P.; Biro, Barna I.; Vinkó, József
2017-12-01
Supernovae are extremely energetic explosions that mark the violent deaths of various types of stars. Studying such cosmic explosions is important for several reasons. Supernovae play a key role in cosmic nucleosynthesis processes, and they are also the anchors of methods for measuring extragalactic distances. Several exotic physical processes take place in the expanding ejecta produced by the explosion. We have developed a fast and simple semi-analytical code to model the light curve of core-collapse supernovae. This allows the determination of their most important basic physical parameters, such as the radius of the progenitor star, the mass of the ejected envelope, and the mass of the radioactive nickel synthesized during the explosion, among others.
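A hedged sketch of one ingredient common to such semi-analytic models: the radioactive heating from the Ni-56 → Co-56 → Fe-56 decay chain, which powers the late-time light curve. The full code also treats photon diffusion through the expanding envelope; the constants below are the standard (approximate) decay parameters, not values specific to this paper's code:

```python
# Radioactive heating term of a semi-analytic supernova light-curve model.
import numpy as np

T_NI, T_CO = 8.8, 111.3          # e-folding decay times of Ni-56, Co-56 (days)
EPS_NI, EPS_CO = 3.9e10, 6.8e9   # specific heating rates (erg/s/g)

def radioactive_luminosity(t_days, m_ni_grams):
    """Instantaneous decay power for a given initial Ni-56 mass (Bateman solution)."""
    ni = np.exp(-t_days / T_NI)
    co = (np.exp(-t_days / T_CO) - np.exp(-t_days / T_NI)) * T_CO / (T_CO - T_NI)
    return m_ni_grams * (EPS_NI * ni + EPS_CO * co)

m_ni = 0.07 * 1.989e33           # 0.07 solar masses of Ni-56, in grams
for t in (10, 50, 100, 200):
    print(f"t = {t:4d} d  L = {radioactive_luminosity(t, m_ni):.3e} erg/s")
```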
NASA Astrophysics Data System (ADS)
Croce, Olivier; Hachem, Sabet; Franchisseur, Eric; Marcié, Serge; Gérard, Jean-Pierre; Bordy, Jean-Marc
2012-06-01
This paper presents a dosimetric study concerning the system named "Papillon 50" used in the department of radiotherapy of the Centre Antoine-Lacassagne, Nice, France. The machine provides a 50 kVp X-ray beam, currently used to treat rectal cancers. The system can be mounted with various applicators of different diameters or shapes. These applicators can be fixed over the main rod tube of the unit in order to deliver the prescribed absorbed dose into the tumor with an optimal distribution. We have analyzed depth dose curves and dose profiles for the naked tube and for a set of three applicators. Dose measurements were made with an ionization chamber (PTW type 23342) and Gafchromic films (EBT2). We have also compared the measurements with simulations performed using the Monte Carlo code PENELOPE. Simulations were performed with a detailed geometrical description of the experimental setup and with sufficient statistics. The simulation results are in accordance with the experimental measurements and provide an accurate evaluation of the dose delivered. The depths of the 50% isodose in water for the various applicators are 4.0, 6.0, 6.6 and 7.1 mm. The Monte Carlo PENELOPE simulations are in accordance with the measurements for a 50 kV X-ray system. Simulations are able to confirm the measurements provided by Gafchromic films or ionization chambers. Results also demonstrate that Monte Carlo simulations could be helpful to validate future applicators designed for other localizations such as breast or skin cancers. Furthermore, Monte Carlo simulations could be a reliable alternative for a rapid evaluation of the dose delivered by such a system that uses multiple designs of applicators.
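A small sketch of the quantity reported above: given a depth-dose curve (whether from Monte Carlo tallies or film measurements), find the depth at which the dose falls to 50% of its surface value. The sample data are invented, roughly mimicking the steep falloff of a 50 kVp beam in water:

```python
# Interpolating the 50% isodose depth from a sampled depth-dose curve.
import numpy as np

depth_mm = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
dose = np.array([100.0, 82.0, 67.0, 45.0, 31.0, 21.0, 14.0])  # % of surface dose

def isodose_depth(level, depth, dose):
    """Linearly interpolate the depth where the dose crosses `level` (%)."""
    # dose decreases with depth, so flip both arrays for np.interp
    return np.interp(level, dose[::-1], depth[::-1])

print(f"50% isodose depth: {isodose_depth(50.0, depth_mm, dose):.1f} mm")
```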
A Comparison of Fatigue Design Methods
2001-04-05
Boiler and Pressure Vessel Code does not... Engineers, "ASME Boiler and Pressure Vessel Code," ASME, 3 Park Ave., New York, NY 10016-5990. [4] Langer, B. F., "Design of Pressure Vessels Involving..." ...and Pressure Vessel Code [3] presents these methods and has expanded the procedures to other pressure vessels besides nuclear pressure vessels.
41 CFR 102-76.10 - What basic design and construction policy governs Federal agencies?
Code of Federal Regulations, 2014 CFR
2014-01-01
... (c) Follow nationally recognized model building codes and other applicable nationally recognized codes that govern Federal construction to the maximum extent feasible and consider local building code requirements. (See 40 U.S.C. 3310 and 3312.) (d) Design Federal buildings to have a long life expectancy and...
Computer Code Aids Design Of Wings
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1993-01-01
AERO2S computer code developed to aid design engineers in selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that include simple hinged-flap systems. Code rapidly estimates longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. Developed in FORTRAN V on CDC 6000 computer system, and ported to MS-DOS environment.
...Duration-Frequency Curves for Infrastructure Design
Skahill, Brian E.; AghaKouchak, Amir; Cheng, Linyin; Byrd, Aaron; Kanney, Joseph
2016-03-01
...design. ERDC/CHL CHETN-X-2. Vicksburg, MS: U.S. Army Engineer Research and Development Center. http://chl.erdc.usace.army.mil/chetn REFERENCES... Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, edited by S. Solomon, D. Qin, M. Manning, Z. Chen, M...
Effects of experimental design on calibration curve precision in routine analysis
Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.
1998-01-01
A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816
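A sketch of the comparison such a program automates: for a straight-line calibration, the confidence-interval half-width of the fitted line at concentration x0 scales with sqrt(1/n + (x0 - xbar)^2 / Sxx), so different placements of the standards give different precision profiles. Below, five equally spaced standards are compared with the D-optimal choice of replicating the two extreme levels (design points invented for illustration):

```python
# Comparing calibration designs via the prediction CI profile (sigma = 1).
import numpy as np

def ci_profile(x_design, x0):
    """Relative CI half-width of a fitted straight line at points x0."""
    x_design = np.asarray(x_design, dtype=float)
    n = len(x_design)
    xbar = x_design.mean()
    sxx = np.sum((x_design - xbar) ** 2)
    return np.sqrt(1.0 / n + (x0 - xbar) ** 2 / sxx)

x0 = np.linspace(0.0, 10.0, 5)
equally_spaced = [0.0, 2.5, 5.0, 7.5, 10.0]
d_optimal = [0.0, 0.0, 0.0, 10.0, 10.0]   # replicated extremes, same 5 runs

for name, d in (("equally spaced", equally_spaced), ("D-optimal", d_optimal)):
    print(name, np.round(ci_profile(d, x0), 3))
```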
Channel coding in the space station data system network
NASA Technical Reports Server (NTRS)
Healy, T.
1982-01-01
A detailed discussion of the use of channel coding for error correction, privacy/secrecy, channel separation, and synchronization is presented. Channel coding, in one form or another, is an established and common element in data systems. No analysis and design of a major new system would fail to consider ways in which channel coding could make the system more effective. The presence of channel coding on TDRS, Shuttle, the Advanced Communication Technology Satellite Program system, the JSC-proposed Space Operations Center, and the proposed 30/20 GHz Satellite Communication System strongly supports the requirement for the utilization of coding for the communications channel. The designers of the space station data system have to consider the use of channel coding.
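As a generic illustration of the error-correction role discussed above, the sketch below implements a textbook Hamming(7,4) block code, which corrects any single bit flip in a seven-bit codeword. This is a standard example, not the specific codes flown on TDRS, Shuttle, or the other systems named:

```python
# Hamming(7,4): encode 4 data bits, correct a single-bit channel error.
import numpy as np

G = np.array([[1,0,0,0,1,1,0],    # generator matrix (systematic form)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],    # parity-check matrix
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(data4):
    return (data4 @ G) % 2

def decode(word7):
    syndrome = (H @ word7) % 2
    if syndrome.any():                         # syndrome matches a column of H
        col = np.where((H.T == syndrome).all(axis=1))[0][0]
        word7 = word7.copy(); word7[col] ^= 1  # flip the erroneous bit
    return word7[:4]                           # systematic: data bits first

msg = np.array([1, 0, 1, 1])
received = encode(msg); received[2] ^= 1       # inject one channel error
assert (decode(received) == msg).all()
print("corrected:", decode(received))
```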
Broadband transmission-type coding metamaterial for wavefront manipulation for airborne sound
NASA Astrophysics Data System (ADS)
Li, Kun; Liang, Bin; Yang, Jing; Yang, Jun; Cheng, Jian-chun
2018-07-01
The recent advent of coding metamaterials, as a new class of acoustic metamaterials, substantially reduces the complexity in the design and fabrication of acoustic functional devices capable of manipulating sound waves in exotic manners by arranging coding elements with discrete phase states in specific sequences. It is therefore intriguing, both physically and practically, to pursue a mechanism for realizing broadband acoustic coding metamaterials that control transmitted waves with a fine resolution of the phase profile. Here, we propose the design of a transmission-type acoustic coding device and demonstrate its metamaterial-based implementation. The mechanism is that, instead of relying on resonant coding elements that are necessarily narrow-band, we build weak-resonant coding elements with a helical-like metamaterial with a continuously varying pitch that effectively expands the working bandwidth while maintaining the sub-wavelength resolution of the phase profile that is vital for the production of complicated wave fields. The effectiveness of our proposed scheme is numerically verified via the demonstration of three distinctive examples of acoustic focusing, anomalous refraction, and vortex beam generation in the prescribed frequency band on the basis of 1- and 2-bit coding sequences. Simulation results agree well with theoretical predictions, showing that the designed coding devices with discrete phase profiles are efficient in engineering the wavefront of outgoing waves to form the desired spatial pattern. We anticipate the realization of coding metamaterials with broadband functionality and design flexibility to open up possibilities for novel acoustic functional devices for the special manipulation of transmitted waves and underpin diverse applications ranging from medical ultrasound imaging to acoustic detection.
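A sketch of how a 1-bit coding sequence for acoustic focusing could be derived: compute the ideal hyperbolic phase profile for a chosen focal point, then quantize each element to the nearer of the two available phase states (0 or π). The element pitch, frequency, and focal length below are illustrative values, not parameters from the paper:

```python
# Deriving a 1-bit coding sequence for a focusing wavefront (sketch).
import numpy as np

c = 343.0                 # speed of sound in air (m/s)
f = 3430.0                # working frequency (Hz) -> wavelength 0.1 m
k = 2 * np.pi * f / c     # wavenumber
pitch = 0.04              # element spacing (m), sub-wavelength
focal = 0.5               # focal distance (m)

n = 16
x = (np.arange(n) - (n - 1) / 2) * pitch          # element positions
ideal = k * (np.sqrt(x**2 + focal**2) - focal)    # phase needed for a focus
# quantize to nearest of the two states: 0 -> phase 0, 1 -> phase pi
one_bit = np.round(np.mod(ideal, 2 * np.pi) / np.pi).astype(int) % 2

print("coding sequence:", one_bit)
```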
Morphometric Analysis of the Clavicles in Chinese Population
Yang, Jesse Chieh-Szu; Lin, Kang-Ping
2017-01-01
The clavicle has a complex geometry that makes plate fixation technically difficult. The current study aims to measure the anatomical parameters of Chinese clavicles as a reference for plate design. One hundred clavicles were analyzed. The clavicle bone model was reconstructed using computed tomography images. The length, diameters, and curvatures of the clavicle were then measured. The female clavicle was shorter, more slender, and less curved in the lateral part than the male clavicle. There was a positive relationship between height and clavicle parameters, except for the lateral curve and depth. The measurements of Chinese clavicles were generally smaller than those of Caucasians. The clavicle curves were correlated with bone length; thus, curve variations should be considered when designing the size distribution of clavicle plates. PMID:28497066
Betz, J.W.; Blanco, M.A.; Cahn, C.R.; Dafesh, P.A.; Hegarty, C.J.; Hudnut, K.W.; Kasemsri, V.; Keegan, R.; Kovach, K.; Lenahan, L.S.; Ma, H.H.; Rushanan, J.J.; Sklar, D.; Stansell, T.A.; Wang, C.C.; Yi, S.K.
2006-01-01
Detailed design of the modernized L1 civil signal (L1C) has been completed, and the resulting draft Interface Specification IS-GPS-800 was released in Spring 2006. The novel characteristics of the optimized L1C signal design provide advanced capabilities while offering receiver designers considerable flexibility in how to use these capabilities. L1C provides a number of advanced features, including: 75% of power in a pilot component for enhanced signal tracking, advanced Weil-based spreading codes, an overlay code on the pilot that provides data message synchronization, support for improved reading of clock and ephemeris by combining message symbols across messages, advanced forward error control coding, and data symbol interleaving to combat fading. The resulting design offers receiver designers the opportunity to obtain unmatched performance in many ways. This paper describes the design of L1C. A summary of L1C's background and history is provided. The signal description then proceeds with the overall signal structure consisting of a pilot component and a data component. The new L1C spreading code family is described, along with the logic used for generating these spreading codes. Overlay codes on the pilot channel are also described, as is the logic used for generating the overlay codes. Spreading modulation characteristics are summarized. The data message structure is also presented, showing the format for providing time, ephemeris, and system data to users, along with features that enable receivers to perform code combining. Encoding of rapidly changing time bits is described, as are the Low Density Parity Check codes used for forward error control of slowly changing time bits, clock, ephemeris, and system data. The structure of the interleaver is also presented. A summary of L1C's unique features and their benefits is provided, along with a discussion of the plan for L1C implementation.
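A sketch of the Weil-code construction underlying the L1C spreading codes: a Legendre sequence of prime length p is XORed with a circularly shifted copy of itself, with the shift (the "Weil index") selecting the code. IS-GPS-800 additionally inserts a fixed 7-bit pad to expand each code to 10230 chips; that expansion step and the per-PRN index assignments are omitted here, and the index used below is illustrative only:

```python
# Weil-code construction from a Legendre sequence (pad step omitted).
def legendre_sequence(p):
    """L(t) = 1 if t is a nonzero quadratic residue mod p, else 0."""
    residues = {(t * t) % p for t in range(1, p)}
    return [1 if t in residues else 0 for t in range(p)]

def weil_code(p, w):
    """Weil code with index w: L(t) XOR L((t + w) mod p)."""
    L = legendre_sequence(p)
    return [L[t] ^ L[(t + w) % p] for t in range(p)]

p = 10223                     # prime length used for L1C Weil codes
code = weil_code(p, w=5111)   # illustrative Weil index, not a real PRN assignment
print(len(code), code[:20])
```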
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoekstra, Robert J.; Hammond, Simon David; Richards, David
2017-09-01
This milestone is a tri-lab deliverable supporting ongoing Co-Design efforts impacting applications in the Integrated Codes (IC) and Advanced Technology Development and Mitigation (ATDM) program elements. In FY14, the tri-labs looked at porting proxy applications to technologies of interest for ATS procurements. In FY15, a milestone was completed evaluating proxy applications in multiple programming models, and in FY16, a milestone was completed focusing on the migration of lessons learned back into production code development. This year, the co-design milestone focuses on migrating the knowledge gained and/or code revisions back into production applications.