Science.gov

Sample records for equipartitioned design method

  1. A Design Study of Co-Splitting as Situated in the Equipartitioning Learning Trajectory

    ERIC Educational Resources Information Center

    Corley, Andrew Kent

    2013-01-01

    The equipartitioning learning trajectory (Confrey, Maloney, Nguyen, Mojica & Myers, 2009) has been hypothesized and the proficiency levels have been validated through much prior work. This study solidifies understanding of the upper level of co-splitting, which has been redefined through further clinical interview work (Corley, Confrey &…

  2. The two Faces of Equipartition

    NASA Astrophysics Data System (ADS)

    Sanchez-Sesma, F. J.; Perton, M.; Rodriguez-Castellanos, A.; Campillo, M.; Weaver, R. L.; Rodriguez, M.; Prieto, G.; Luzon, F.; McGarr, A.

    2008-12-01

    Equipartition is good. Beyond its philosophical implications, in many instances of statistical physics it implies that the available kinetic and potential elastic energy, in phase space, is distributed in the same fixed proportions among the possible "states". There are at least two distinct and complementary descriptions of such states in a diffuse elastic wave field u(r,t). One asserts that u may be represented as an incoherent isotropic superposition of incident plane waves of different polarizations. Each type of wave has an appropriate share of the available energy. This definition, introduced by Weaver, is similar to the room-acoustics notion of a diffuse field, and it suffices to permit prediction of field correlations. The other description assumes that the degrees of freedom of the system, in this case the kinetic energy densities, are all incoherently excited with equal expected amplitude. This definition, introduced by Maxwell, is also familiar from room acoustics, using the normal modes of vibration within an arbitrarily large body. Usually, to establish whether an elastic field is diffuse and equipartitioned, only the first description has been applied, which requires the separation of dilatational and shear waves using carefully designed experiments. When the medium is bounded by an interface, waves of other modes, for example Rayleigh waves, complicate the measurement of these energies. As a consequence, it can be advantageous to use the second description. Moreover, when an elastic field is diffuse and equipartitioned, each spatial component of the energy densities is linked to the corresponding component of the imaginary part of the Green function at the source. Accordingly, one can use the second description to retrieve the Green function and obtain more information about the medium. The equivalence between the two descriptions of equipartition is given for an infinite space and extended to the case of a half-space.

  3. A Design Research Study of a Curriculum and Diagnostic Assessment System for a Learning Trajectory on Equipartitioning

    ERIC Educational Resources Information Center

    Confrey, Jere; Maloney, Alan

    2015-01-01

    Design research studies provide significant opportunities to study new innovations and approaches and how they affect the forms of learning in complex classroom ecologies. This paper reports on a two-week long design research study with twelve 2nd through 4th graders using curricular materials and a tablet-based diagnostic assessment system, both…

  4. Observation of equipartition of seismic waves.

    PubMed

    Hennino, R; Trégourès, N; Shapiro, N M; Margerin, L; Campillo, M; van Tiggelen, B A; Weaver, R L

    2001-04-01

    Equipartition is a first principle in wave transport, based on the tendency of multiple scattering to homogenize phase space. We report observations of this principle for seismic waves created by earthquakes in Mexico. We find qualitative agreement with an equipartition model that accounts for mode conversions at the Earth's surface. PMID:11327992
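For reference, the quantitative prediction being tested in such observations is the classical full-space diffuse-field result that shear waves carry 2(α/β)³ times the energy of compressional waves, where α and β are the P- and S-wave speeds. A minimal sketch (the Poisson-solid value α/β = √3 is an illustrative assumption, not a parameter from the paper):

```python
import math

def s_to_p_energy_ratio(alpha, beta):
    """Equipartition prediction for the ratio of S-wave to P-wave
    energy density in a diffuse field filling a full elastic space:
    R = 2 * (alpha / beta)**3, with alpha, beta the P and S speeds."""
    return 2.0 * (alpha / beta) ** 3

# For a Poisson solid alpha/beta = sqrt(3), so R = 2 * 3**1.5 ~ 10.4,
# i.e. shear waves dominate the energy budget at equipartition.
ratio = s_to_p_energy_ratio(math.sqrt(3.0), 1.0)
print(f"S/P energy ratio for a Poisson solid: {ratio:.2f}")
```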

  5. Green's function calculation from equipartition theorem.

    PubMed

    Perton, Mathieu; Sánchez-Sesma, Francisco José

    2016-08-01

    A method is presented to calculate the elastodynamic Green's functions by using the equipartition principle. The imaginary parts are calculated as the average cross correlations of the displacement fields generated by the incidence of body and surface waves with amplitudes weighted by partition factors. The real part is retrieved using the Hilbert transform. The calculation of the partition factors is discussed for several geometrical configurations in two dimensional space: the full-space, a basin in a half-space and for layered media. For the last case, it results in a fast computation of the full Green's functions. Additionally, if the contribution of only selected states is desired, as for instance the surface wave part, the computation is even faster. Its use for full waveform inversion may then be advantageous. PMID:27586757

  6. The modified equipartition calculation for supernova remnants with the spectral index α = 0.5

    NASA Astrophysics Data System (ADS)

    Urošević, Dejan; Pavlović, Marko Z.; Arbutina, Bojan; Dobardžić, Aleksandra

    2015-03-01

    Recently, a modified equipartition calculation for supernova remnants (SNRs) was derived by Arbutina et al. (2012). Their formulae can be used for SNRs with spectral indices 0.5 < α < 1. Here, by using approximately the same analytical method, we derive equipartition formulae useful for SNRs with spectral index α = 0.5. These formulae represent the next upgrade of the Arbutina et al. (2012) derivation, because among the 30 Galactic SNRs with observational parameters available for the equipartition calculation, 16 have spectral index α = 0.5. For these 16 Galactic SNRs we calculated magnetic field strengths approximately 40 per cent higher than those obtained from the Pacholczyk (1970) equipartition, and similar to those obtained from the Beck & Krause (2005) calculation.

  7. MODIFIED EQUIPARTITION CALCULATION FOR SUPERNOVA REMNANTS

    SciTech Connect

    Arbutina, B.; Urosevic, D.; Andjelic, M. M.; Pavlovic, M. Z.; Vukotic, B.

    2012-02-10

    Determination of the magnetic field strength in the interstellar medium is one of the more complex tasks of contemporary astrophysics. We can only estimate the order of magnitude of the magnetic field strength by using a few very limited methods. Besides the Zeeman effect and Faraday rotation, the equipartition or minimum-energy calculation is a widespread method for estimating magnetic field strength and energy contained in the magnetic field and cosmic-ray particles by using only the radio synchrotron emission. Despite its approximate character, it remains a useful tool, especially when there are no other data about the magnetic field in a source. In this paper, we give a modified calculation that we think is more appropriate for estimating magnetic field strengths and energetics in supernova remnants (SNRs). We present calculated estimates of the magnetic field strengths for all Galactic SNRs for which the necessary observational data are available. The Web application for calculation of the magnetic field strengths of SNRs is available at http://poincare.matf.bg.ac.rs/{approx}arbo/eqp/.
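The scaling at the heart of such estimates can be illustrated schematically. The sketch below assumes only the classical minimum-energy result that the equipartition field scales as (L/V)^(2/7) with radio luminosity L and emitting volume V; the absolute calibration constants (and the modifications introduced by Arbutina et al.) are deliberately omitted:

```python
def b_field_scaling(lum_ratio, vol_ratio=1.0):
    """Relative classical minimum-energy field, B ~ (L/V)**(2/7):
    returns the ratio of equipartition field strengths for two sources
    whose radio luminosities and volumes differ by the given factors.
    Absolute values require calibration constants omitted here."""
    return (lum_ratio / vol_ratio) ** (2.0 / 7.0)

# Ten times the luminosity in the same volume raises B by ~1.9x,
# reflecting how weakly the estimate depends on the observables.
print(b_field_scaling(10.0))
```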

  8. No energy equipartition in globular clusters

    NASA Astrophysics Data System (ADS)

    Trenti, Michele; van der Marel, Roeland

    2013-11-01

    It is widely believed that globular clusters evolve over many two-body relaxation times towards a state of energy equipartition, so that velocity dispersion scales with stellar mass as σ ∝ m^{-η} with η = 0.5. We show here that this is incorrect, using a suite of direct N-body simulations with a variety of realistic initial mass functions and initial conditions. No simulated system ever reaches a state close to equipartition. Near the centre, the luminous main-sequence stars reach a maximum η_max ≈ 0.15 ± 0.03. At large times, all radial bins converge on an asymptotic value η_∞ ≈ 0.08 ± 0.02. The development of this 'partial equipartition' is strikingly similar across our simulations, despite the range of different initial conditions employed. Compact remnants tend to have higher η than main-sequence stars (but still η < 0.5), due to their steeper (evolved) mass function. The presence of an intermediate-mass black hole (IMBH) decreases η, consistent with our previous findings of a quenching of mass segregation under these conditions. All these results can be understood as a consequence of the Spitzer instability for two-component systems, extended by Vishniac to a continuous mass spectrum. Mass segregation (the tendency of heavier stars to sink towards the core) has often been studied observationally, but energy equipartition has not. Due to the advent of high-quality proper motion data sets from the Hubble Space Telescope, it is now possible to measure η for real clusters. Detailed data-model comparisons open up a new observational window on globular cluster dynamics and evolution. A first comparison of our simulations to observations of Omega Cen yields good agreement, supporting the view that globular clusters are not generally in energy equipartition. Modelling techniques that assume equipartition by construction (e.g. multi-mass Michie-King models) are approximate at best.
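To make the reported η values concrete, the snippet below evaluates the velocity-dispersion ratio between stars of two masses under the scaling σ ∝ m^(-η); the 0.2 and 0.8 solar-mass values are illustrative choices, not numbers from the paper:

```python
def dispersion_ratio(m1, m2, eta):
    """Velocity-dispersion ratio sigma(m1)/sigma(m2) under the
    scaling sigma ~ m**(-eta)."""
    return (m1 / m2) ** (-eta)

# Full equipartition (eta = 0.5) vs the asymptotic partial value
# from the simulations (eta ~ 0.08), for 0.2 and 0.8 Msun stars:
# full equipartition predicts a factor-2 contrast, partial
# equipartition only ~12 per cent.
for eta in (0.5, 0.08):
    print(eta, dispersion_ratio(0.2, 0.8, eta))
```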

  9. Lack of Energy Equipartition in Globular Clusters

    NASA Astrophysics Data System (ADS)

    Trenti, Michele

    2013-05-01

    It is widely believed that globular clusters evolve over many two-body relaxation times toward a state of energy equipartition, so that velocity dispersion scales with stellar mass as σ∝m^{-η} with η=0.5. I will show instead that this is incorrect, using a suite of direct N-body simulations with a variety of realistic initial mass functions and initial conditions. No simulated system ever reaches a state close to equipartition. Near the center, the luminous main-sequence stars reach a maximum η_max ≈ 0.15±0.03. At large times, all radial bins converge on an asymptotic value η_∞ ≈ 0.08±0.02. The development of this "partial equipartition" is strikingly similar across simulations, despite the range of different initial conditions employed. Compact remnants tend to have higher η than main-sequence stars (but still η < 0.5), due to their steeper (evolved) mass function. The presence of an intermediate-mass black hole (IMBH) decreases η, consistent with our previous findings of a quenching of mass segregation under these conditions. All these results can be understood as a consequence of the Spitzer instability for two-component systems, extended by Vishniac to a continuous mass spectrum. Mass segregation (the tendency of heavier stars to sink toward the core) has often been studied observationally, but energy equipartition has not. Due to the advent of high-quality proper motion datasets from the Hubble Space Telescope, it is now possible to measure η for real clusters. Detailed data-model comparisons open up a new observational window on globular cluster dynamics, structure, evolution, initial conditions, and possible IMBHs. A first comparison of my simulations to observations of Omega Cen yields good agreement, supporting the view that globular clusters are not generally in energy equipartition. Modeling techniques that assume equipartition by construction (e.g., multi-mass Michie-King models) are thus approximate at best.

  10. The Impact of Non-equipartition on Cosmological Parameter Estimation from Sunyaev-Zel'dovich Surveys

    NASA Astrophysics Data System (ADS)

    Wong, Ka-Wah; Sarazin, Craig L.; Wik, Daniel R.

    2010-08-01

    The collisionless accretion shock at the outer boundary of a galaxy cluster should primarily heat the ions instead of electrons, since ions carry most of the kinetic energy of the infalling gas. Near the accretion shock, the density of the intracluster medium is very low and the Coulomb collisional timescale is longer than the accretion timescale. Electrons and ions may not achieve equipartition in these regions. Numerical simulations have shown that the Sunyaev-Zel'dovich observables (e.g., the integrated Comptonization parameter Y) for relaxed clusters can be biased by a few percent. The Y versus mass relation can be biased if non-equipartition effects are not properly taken into account. Using a set of hydrodynamical simulations we have developed, we have calculated three potential systematic biases in the Y versus mass relations introduced by non-equipartition effects during the cross-calibration or self-calibration when using the galaxy cluster abundance technique to constrain cosmological parameters. We then use a semi-analytic technique to estimate the non-equipartition effects on the distribution functions of Y (Y functions) determined from the extended Press-Schechter theory. Depending on the calibration method, we find that non-equipartition effects can induce systematic biases on the Y functions, and the values of the cosmological parameters Ω_M, σ_8, and the dark energy equation of state parameter w can be biased by a few percent. In particular, non-equipartition effects can introduce an apparent evolution in w of a few percent in all of the systematic cases we considered. Techniques are suggested to take into account the non-equipartition effect empirically when using the cluster abundance technique to study precision cosmology. We conclude that systematic uncertainties in the Y versus mass relation of even a few percent can introduce a comparable level of biases in cosmological parameter measurements.

  11. Turbulent equipartitions in two dimensional drift convection

    SciTech Connect

    Isichenko, M.B.; Yankov, V.V.

    1995-07-25

    Unlike the thermodynamic equipartition of energy in conservative systems, turbulent equipartitions (TEP) describe strongly non-equilibrium systems such as turbulent plasmas. In turbulent systems, energy is no longer a good invariant, but one can utilize the conservation of other quantities, such as adiabatic invariants, frozen-in magnetic flux, entropy, or combinations thereof, in order to derive new, turbulent quasi-equilibria. These TEP equilibria assume various forms, but in general they sustain spatially inhomogeneous distributions of the usual thermodynamic quantities such as density or temperature. This mechanism explains the effects of particle and energy pinch in tokamaks. The analysis of the relaxed states caused by turbulent mixing is based on the existence of Lagrangian invariants (quantities constant along fluid-particle or other orbits). A turbulent equipartition corresponds to the spatially uniform distribution of the relevant Lagrangian invariants. The existence of such turbulent equilibria is demonstrated in a simple model of two-dimensional electrostatically turbulent plasma in an inhomogeneous magnetic field. The turbulence is prescribed, and the turbulent transport is assumed to be much stronger than the classical collisional transport. The simplicity of the model makes it possible to derive the equations describing the relaxation to the TEP state in several limits.

  12. Turbulent Equipartition Theory of Toroidal Momentum Pinch

    SciTech Connect

    T.S. Hahm, P.H. Diamond, O.D. Gurcan, and G. Rewoldt

    2008-01-31

    The mode-independent part of the magnetic curvature driven turbulent convective (TurCo) pinch of the angular momentum density [Hahm et al., Phys. Plasmas 14, 072302 (2007)], which was originally derived from the gyrokinetic equation, can be interpreted in terms of the turbulent equipartition (TEP) theory. It is shown that the previous results can be obtained from the local conservation of "magnetically weighted angular momentum density", nm_iU_∥R/B², and its homogenization due to turbulent flows. It is also demonstrated that the magnetic curvature modification of the parallel acceleration in the nonlinear gyrokinetic equation in the laboratory frame, which was shown to be responsible for the TEP part of the TurCo pinch of angular momentum density in the previous work, is closely related to the Coriolis drift coupling to the perturbed electric field. In addition, the origin of the diffusive flux in the rotating frame is highlighted. Finally, it is illustrated that there should be a difference in scalings between the momentum pinch originating from inherently toroidal effects and that coming from other mechanisms which exist in a simpler geometry.

  13. Equipartition theorem in glasses and liquids

    NASA Astrophysics Data System (ADS)

    Levashov, Valentin A.; Egami, Takeshi; Aga, Rachel S.; Morris, James R.

    2008-03-01

    In glasses and liquids phonons have very short lifetimes, and the total potential energy is not linear in temperature but follows a T^(3/5) law. Thus it may appear that atomic vibrations in liquids cannot be described by the harmonic oscillator model, which obeys the equipartition theorem for kinetic and potential energy. We show that describing the nearest-neighbor oscillation in terms of the atomic-level stresses does provide such a description. The model was tested for various pair-wise potentials, including the Lennard-Jones potential, the Johnson potentials, and the repulsive part of the Johnson potential alone. In all cases the local elastic energy of each of the six independent components of the stress tensor is equal to kT/4, so the total potential energy is equal to (3/2)kT. This model therefore provides a basis for discussing the thermodynamic properties of glasses and liquids in terms of atomistic excitations. An example of this model leading to a description of the glass transition temperature in metallic glasses is discussed [1]. [1] T. Egami, et al., Phys. Rev. B 76, 024203 (2007).
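The kT/2-per-quadratic-degree-of-freedom statement invoked here can be checked with a toy Monte Carlo draw from the canonical distribution of a 1-D harmonic oscillator (a generic illustration of classical equipartition, not the atomic-level-stress model of the abstract):

```python
import random
import statistics

# In units with kT = 1 and m = k_spring = 1, canonical velocities and
# positions are Gaussian with variance kT, and each quadratic degree
# of freedom (kinetic or potential) should average kT/2.
random.seed(42)
kT = 1.0
n = 200_000
ke = statistics.fmean(0.5 * random.gauss(0.0, kT ** 0.5) ** 2 for _ in range(n))
pe = statistics.fmean(0.5 * random.gauss(0.0, kT ** 0.5) ** 2 for _ in range(n))
print(f"<KE> = {ke:.3f}, <PE> = {pe:.3f}  (both should be near {kT / 2})")
```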

  14. Turbulent equipartition theory of toroidal momentum pinch

    SciTech Connect

    Hahm, T. S.; Rewoldt, G.; Diamond, P. H.; Gurcan, O. D.

    2008-05-15

    The mode-independent part of the magnetic curvature driven turbulent convective (TurCo) pinch of the angular momentum density [Hahm et al., Phys. Plasmas 14, 072302 (2007)], which was originally derived from the gyrokinetic equation, can be interpreted in terms of the turbulent equipartition (TEP) theory. It is shown that the previous results can be obtained from the local conservation of 'magnetically weighted angular momentum density', nm_iU_∥R/B², and its homogenization due to turbulent flows. It is also demonstrated that the magnetic curvature modification of the parallel acceleration in the nonlinear gyrokinetic equation in the laboratory frame, which was shown to be responsible for the TEP part of the TurCo pinch of angular momentum density in the previous work, is closely related to the Coriolis drift coupling to the perturbed electric field. In addition, the origin of the diffusive flux in the rotating frame is highlighted. Finally, it is illustrated that there should be a difference in scalings between the momentum pinch originating from inherently toroidal effects and that coming from other mechanisms that exist in a simpler geometry.

  15. Pressure-strain energy redistribution in compressible turbulence: return-to-isotropy versus kinetic-potential energy equipartition

    NASA Astrophysics Data System (ADS)

    Lee, Kurnchul; Venugopal, Vishnu; Girimaji, Sharath S.

    2016-08-01

    Return-to-isotropy and kinetic-potential energy equipartition are two fundamental pressure-moderated energy redistributive processes in anisotropic compressible turbulence. The pressure-strain correlation tensor redistributes energy among the Reynolds stress components, while pressure-dilatation is responsible for energy reallocation between dilatational kinetic and potential energies. The competition and interplay between these pressure-based processes are investigated in this study. Direct numerical simulations (DNS) of dilatational turbulence at low turbulent Mach number are performed employing the hybrid thermal lattice Boltzmann method (HTLBM). It is found that a tendency towards equipartition precedes the proclivity for isotropization. An evolution towards equipartition has a collateral but critical effect on return-to-isotropy. The preferential transfer of energy from strong (rather than weak) Reynolds stress components to potential energy accelerates the isotropization of dilatational fluctuations. Understanding these pressure-based redistributive processes is critical for developing insight into the character of compressible turbulence.

  16. A novel look at energy equipartition in globular clusters

    NASA Astrophysics Data System (ADS)

    Bianchini, P.; van de Ven, G.; Norris, M. A.; Schinnerer, E.; Varri, A. L.

    2016-06-01

    Two-body interactions play a major role in shaping the structural and dynamical properties of globular clusters (GCs) over their long-term evolution. In particular, GCs evolve towards a state of partial energy equipartition that induces a mass dependence in their kinematics. By using a set of Monte Carlo cluster simulations evolved in quasi-isolation, we show that the stellar mass dependence of the velocity dispersion σ(m) can be described by an exponential function σ² ∝ exp(-m/m_eq), with the parameter m_eq quantifying the degree of partial energy equipartition of the systems. This simple parametrization successfully captures the behaviour of the velocity dispersion at lower as well as higher stellar masses, that is, the regime where the system is expected to approach full equipartition. We find a tight correlation between the degree of equipartition reached by a GC and its dynamical state, indicating that clusters more than about 20 core relaxation times old have reached a maximum degree of equipartition. This equipartition-dynamical state relation can be used as a tool to characterize the relaxation condition of a cluster with a kinematic measure of the m_eq parameter. Conversely, the mass dependence of the kinematics can be predicted from the relaxation time solely on the basis of photometric measurements. Moreover, any deviation from this tight relation could be used as a probe of a peculiar dynamical history of a cluster. Finally, our novel approach is important for the interpretation of state-of-the-art Hubble Space Telescope proper motion data, for which the mass dependence of kinematics can now be measured, and for the application of modelling techniques which take into consideration multimass components and mass segregation.
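A quick corollary of this parametrization: writing σ²(m) ∝ exp(-m/m_eq) and taking the logarithmic derivative gives a local equipartition exponent η(m) = -d ln σ / d ln m = m / (2 m_eq). A minimal sketch (the numerical values are illustrative, not taken from the paper):

```python
def local_eta(m, m_eq):
    """Local equipartition exponent eta(m) = -dln(sigma)/dln(m)
    implied by the parametrization sigma**2 ~ exp(-m / m_eq):
    ln(sigma) = const - m/(2*m_eq), so eta(m) = m / (2*m_eq)."""
    return m / (2.0 * m_eq)

# Illustrative numbers: with m_eq = 1.5 Msun, a 0.5 Msun star sits at
# eta ~ 0.17, well below the full-equipartition value eta = 0.5,
# which is only reached locally at m = m_eq.
print(local_eta(0.5, 1.5))
```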

  17. Comment on Turbulent Equipartition Theory of Toroidal Momentum Pinch

    SciTech Connect

    Hahm, T. S.; Diamond, P. H.; Gurcan, O. D.; Rewoldt, G.

    2009-03-12

    This response demonstrates that the comment by Peeters et al. contains an incorrect and misleading interpretation of our paper [Hahm et al., Phys. Plasmas 15, 055902 (2008)] regarding the density gradient dependence of momentum pinch and the turbulent equipartition (TEP) theory.

  18. Designing ROW Methods

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.

    1996-01-01

    There are many aspects to consider when designing a Rosenbrock-Wanner-Wolfbrandt (ROW) method for the numerical integration of ordinary differential equations (ODE's) solving initial value problems (IVP's). The process can be simplified by constructing ROW methods around good Runge-Kutta (RK) methods. The formulation of a new, simple, embedded, third-order, ROW method demonstrates this design approach.

  19. Thermodynamics of black holes from equipartition of energy and holography

    SciTech Connect

    Tian Yu; Wu Xiaoning

    2010-05-15

    A gravitational potential in the relativistic case is introduced as an alternative to Wald's potential used by Verlinde, which reproduces the familiar entropy/area relation S=A/4 (in the natural units) when Verlinde's idea is applied to the black hole case. Upon using the equipartition rule, the correct form of the Komar mass (energy) can also be obtained, which leads to the Einstein equations. It is explicitly shown that our entropy formula agrees with Verlinde's entropy variation formula in spherical cases. The stationary space-times, especially the Kerr-Newman black hole, are then discussed, where it is shown that the equipartition rule involves the reduced mass, instead of the Arnowitt-Deser-Misner mass, on the horizon of the black hole.
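For orientation, the unmodified equipartition-plus-holography argument that this work builds on can be summarized as follows (this is Verlinde's original sketch in outline, not the modified potential introduced in the abstract):

```latex
% Holographic screen of area A carrying N bits, with the equipartition rule:
N = \frac{A c^3}{G \hbar}, \qquad E = \tfrac{1}{2} N k_B T, \qquad E = M c^2 .
% With the Unruh temperature k_B T = \hbar a / (2\pi c) on a spherical screen
% of radius R (A = 4\pi R^2), these combine to give a = GM/R^2,
% i.e. Newtonian gravity emerges from the equipartition rule.
```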

  20. Mass segregation in star clusters is not energy equipartition

    NASA Astrophysics Data System (ADS)

    Parker, Richard J.; Goodwin, Simon P.; Wright, Nicholas J.; Meyer, Michael R.; Quanz, Sascha P.

    2016-04-01

    Mass segregation in star clusters is often thought to indicate the onset of energy equipartition, where the most massive stars impart kinetic energy to the lower-mass stars and brown dwarfs/free floating planets. The predicted net result of this is that the centrally concentrated massive stars should have significantly lower velocities than fast-moving low-mass objects on the periphery of the cluster. We search for energy equipartition in initially spatially and kinematically substructured N-body simulations of star clusters with N = 1500 stars, evolved for 100 Myr. In clusters that show significant mass segregation we find no differences in the proper motions or radial velocities as a function of mass. The kinetic energies of all stars decrease as the clusters relax, but the kinetic energies of the most massive stars do not decrease faster than those of lower-mass stars. These results suggest that dynamical mass segregation - which is observed in many star clusters - is not a signature of energy equipartition from two-body relaxation.

  2. Relativistic Momentum and Manifestly Covariant Equipartition Theorem Revisited

    SciTech Connect

    Chacon-Acosta, Guillermo; Dagdug, Leonardo; Morales-Tecotl, Hugo A.

    2010-07-12

    Recently the discussion about the right relativistic generalization of thermodynamics has been revived. In particular, the case of temperature has been investigated by alluding to a form of relativistic equipartition theorem. From the kinetic theory point of view, a covariant equipartition necessarily involves the relativistic momentum of the system, which is given by an integral of the energy-momentum tensor over a spacelike hypersurface. Some authors have even proposed to trade the spacelike hypersurfaces entering there for lightlike ones to accommodate Lorentz covariance. In this work we argue that a well-defined momentum for a dilute gas can be given by making use of the velocity of the gas as a whole, thereby selecting a hypersurface; this is in direct analogy with the case of an extended classical electron model, where a similar choice resolved the Abraham-Lorentz controversy codified in the wrong non-relativistic limit. We also discuss the effect of such choices on the equipartition theorem calculated through the covariant form of the Juettner distribution function.

  3. NON-EQUIPARTITION OF ENERGY, MASSES OF NOVA EJECTA, AND TYPE Ia SUPERNOVAE

    SciTech Connect

    Shara, Michael M.; Yaron, Ofer; Prialnik, Dina; Kovetz, Attay

    2010-04-01

    The total masses ejected during classical nova (CN) eruptions are needed to answer two questions with broad astrophysical implications: can accreting white dwarfs be 'pushed over' the Chandrasekhar mass limit to yield type Ia supernovae? Are ultra-luminous red variables a new kind of astrophysical phenomenon, or merely extreme classical novae? We review the methods used to determine nova ejecta masses. Except for the unique case of BT Mon (nova 1939), all nova ejecta mass determinations depend on untested assumptions and multi-parameter modeling. The remarkably simple assumption of equipartition between kinetic and radiated energy (E_kin and E_rad, respectively) in nova ejecta has been invoked as a way around this conundrum for the ultra-luminous red variable in M31. The deduced mass is far larger than that produced by any CN model. Our nova eruption simulations show that radiation and kinetic energy in nova ejecta are very far from being in energy equipartition, with variations of 4 orders of magnitude in the ratio E_kin/E_rad being commonplace. The assumption of equipartition must not be used to deduce nova ejecta masses; any such 'determinations' can be overestimates by a factor of up to 10,000. We data-mined our extensive series of nova simulations to search for correlations that could yield nova ejecta masses. Remarkably, the mass ejected during a nova eruption depends only on (and is directly proportional to) E_rad. If we measure the distance to an erupting nova and its bolometric light curve, then E_rad, and hence the mass ejected, can be directly measured.

  4. Turbulent equipartition and homogenization of plasma angular momentum.

    PubMed

    Gürcan, O D; Diamond, P H; Hahm, T S

    2008-04-01

    A physical model of turbulent equipartition (TEP) of plasma angular momentum is developed. We show that, using a simple, model-insensitive ansatz of conservation of total angular momentum, a TEP pinch of angular momentum can be obtained. We note that this term corresponds to a part of the pinch velocity previously calculated using quasilinear gyrokinetic theory. We observe that the nondiffusive TEP flux is inward, and therefore may explain the peakedness of the rotation profiles observed in certain experiments. Similar expressions for linear toroidal momentum and flow are computed, and it is noted that there is an additional effect due to the radial profile of the moment-of-inertia density. PMID:18517961

  5. Treatment of MHD turbulence with non-equipartition and anisotropy

    NASA Astrophysics Data System (ADS)

    Zhou, Ye; Matthaeus, W. H.

    2005-11-01

    Magnetohydrodynamic (MHD) turbulence theory, often employed satisfactorily in astrophysical applications, has typically focused on parameter ranges that imply nearly equal values of kinetic and magnetic energies and length scales. However, an MHD flow may have a disparate magnetic Prandtl number, dissimilar kinetic and magnetic Reynolds numbers, different kinetic and magnetic outer length scales, and strong anisotropy. Here we discuss a phenomenology for such "non-equipartitioned" MHD flow. We suggest two conditions for an MHD flow to transition to strong turbulence, extensions of (i) Taylor's constant flux in an inertial range, and (ii) Kolmogorov's scale separation between the large- and small-scale boundaries of an inertial range. For this analysis, detailed information on the turbulence structure is not needed. These two conditions for MHD transition are expected to provide consistent predictions and should be applicable to anisotropic MHD flows once the length scales are replaced by their corresponding perpendicular components. Second, we point out that the dynamics and anisotropy of MHD fluctuations are controlled by the relative strength between the straining effects of eddies of similar size and the sweeping action of the large eddies, or the propagation effect of the large-scale magnetic fields, on the small scales; analysis of this balance in principle also requires consideration of non-equipartition effects.

  6. Do open star clusters evolve towards energy equipartition?

    NASA Astrophysics Data System (ADS)

    Spera, Mario; Mapelli, Michela; Jeffries, Robin D.

    2016-07-01

    We investigate whether open clusters (OCs) tend to energy equipartition, by means of direct N-body simulations with a broken power-law mass function. We find that the simulated OCs become strongly mass segregated, but the local velocity dispersion does not depend on the stellar mass for most of the mass range: the curve of the velocity dispersion as a function of mass is nearly flat even after several half-mass relaxation times, regardless of the adopted stellar evolution recipes and Galactic tidal field model. This result holds both if we start from virialized King models and if we use clumpy sub-virial initial conditions. The velocity dispersion of the most massive stars and stellar remnants tends to be higher than the velocity dispersion of the lighter stars. This trend is particularly evident in simulations without stellar evolution. We interpret this result as a consequence of the strong mass segregation, which leads to Spitzer's instability. Stellar winds delay the onset of the instability. Our simulations strongly support the result that OCs do not attain equipartition, for a wide range of initial conditions.

  7. Equipartition and the Calculation of Temperature in Biomolecular Simulations.

    PubMed

    Eastwood, Michael P; Stafford, Kate A; Lippert, Ross A; Jensen, Morten Ø; Maragakis, Paul; Predescu, Cristian; Dror, Ron O; Shaw, David E

    2010-07-13

    Since the behavior of biomolecules can be sensitive to temperature, the ability to accurately calculate and control the temperature in molecular dynamics (MD) simulations is important. Standard analysis of equilibrium MD simulations (even constant-energy simulations with negligible long-term energy drift) often yields different calculated temperatures for different motions, however, in apparent violation of the statistical mechanical principle of equipartition of energy. Although such analysis provides a valuable warning that other simulation artifacts may exist, it leaves the actual value of the temperature uncertain. We observe that Tolman's generalized equipartition theorem should hold for long stable simulations performed using velocity-Verlet or other symplectic integrators, because the simulated trajectory is thought to sample almost exactly from a continuous trajectory generated by a shadow Hamiltonian. From this we conclude that all motions should share a single simulation temperature, and we provide a new temperature estimator that we test numerically in simulations of a diatomic fluid and of a solvated protein. Apparent temperature variations between different motions observed using standard estimators do indeed disappear when using the new estimator. We use our estimator to better understand how thermostats and barostats can exacerbate integration errors. In particular, we find that with large (albeit widely used) time steps, the common practice of using two thermostats to remedy so-called hot-solvent/cold-solute problems can have the counterintuitive effect of causing temperature imbalances. Our results, moreover, highlight the utility of multiple-time-step integrators for accurate and efficient simulation. PMID:26615934
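    The estimator details above are specific to the paper, but the underlying equipartition relation is standard: each quadratic degree of freedom carries kB·T/2 of kinetic energy on average, so T = 2⟨KE⟩/(N_f·kB). A minimal sketch (not the paper's shadow-Hamiltonian estimator; reduced units and the target temperature are assumptions of this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
kB = 1.0                 # reduced units (assumption of this sketch)
T_true = 0.8             # target temperature (assumed)
n_atoms, dim, mass = 5000, 3, 1.0

# Velocities drawn from the Maxwell-Boltzmann distribution at T_true.
v = rng.normal(0.0, np.sqrt(kB * T_true / mass), size=(n_atoms, dim))

# Equipartition: each of the N_f = n_atoms*dim quadratic degrees of
# freedom carries kB*T/2 of kinetic energy on average, so
# T = 2 <KE> / (N_f * kB).
kinetic = 0.5 * mass * np.sum(v**2)
T_est = 2.0 * kinetic / (n_atoms * dim * kB)
print(T_est)
```

    With 15,000 sampled degrees of freedom the estimate fluctuates by about 1% around the true value, which is why apparent per-motion temperature differences in real simulations are a useful diagnostic.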

  8. Manifestly covariant Jüttner distribution and equipartition theorem

    NASA Astrophysics Data System (ADS)

    Chacón-Acosta, Guillermo; Dagdug, Leonardo; Morales-Técotl, Hugo A.

    2010-02-01

    The relativistic equilibrium velocity distribution plays a key role in describing several high-energy and astrophysical effects. Recently, computer simulations favored Jüttner's as the relativistic generalization of Maxwell's distribution for d=1,2,3 spatial dimensions and pointed to an invariant temperature. In this work, we argue that an invariant temperature naturally follows from manifest covariance. We present a derivation of the manifestly covariant Jüttner distribution and equipartition theorem. The standard procedure to get the equilibrium distribution as a solution of the relativistic Boltzmann equation, which holds for dilute gases, is here adopted. However, contrary to previous analyses, we use Cartesian coordinates in d+1 momentum space, with d spatial components. The use of the multiplication theorem of Bessel functions proves crucial to regain the known invariant form of Jüttner's distribution. Since equilibrium kinetic-theory results should agree with thermodynamics in the frame comoving with the gas, the covariant pseudonorm of a vector entering the distribution can be identified with the reciprocal of the temperature in that comoving frame. Then, by combining the covariant statistical moments of Jüttner's distribution, a form of the equipartition theorem is advanced which also accommodates the invariant comoving temperature and contains, as a particular case, an earlier, not manifestly covariant, form.
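    As a quick numerical companion, the 3D Maxwell-Jüttner density in the Lorentz factor γ is f(γ) = γ²β exp(−γ/θ) / [θ K₂(1/θ)], with β = √(1 − γ⁻²) and θ = kT/mc². A minimal check that this normalizes to one (the value θ = 0.5 is an arbitrary assumption of this sketch, not from the paper):

```python
import numpy as np
from scipy.special import kn
from scipy.integrate import quad

theta = 0.5  # kT / (m c^2), assumed value for illustration

def juttner(gamma, theta):
    """3D Maxwell-Juttner density in the Lorentz factor gamma."""
    beta = np.sqrt(1.0 - gamma**-2)
    return gamma**2 * beta * np.exp(-gamma / theta) / (theta * kn(2, 1.0 / theta))

norm, _ = quad(juttner, 1.0, np.inf, args=(theta,))
print(norm)  # ~1.0
```

    The modified Bessel function K₂ appearing in the normalization is exactly where the multiplication theorem mentioned in the abstract enters the covariant derivation.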

  9. Do open star clusters evolve toward energy equipartition?

    NASA Astrophysics Data System (ADS)

    Spera, Mario; Mapelli, Michela; Jeffries, Robin D.

    2016-04-01

    We investigate whether open clusters (OCs) tend to energy equipartition, by means of direct N-body simulations with a broken power-law mass function. We find that the simulated OCs become strongly mass segregated, but the local velocity dispersion does not depend on the stellar mass for most of the mass range: the curve of the velocity dispersion as a function of mass is nearly flat even after several half-mass relaxation times, regardless of the adopted stellar evolution recipes and Galactic tidal field model. This result holds both if we start from virialized King models and if we use clumpy sub-virial initial conditions. The velocity dispersion of the most massive stars and stellar remnants tends to be higher than the velocity dispersion of the lighter stars. This trend is particularly evident in simulations without stellar evolution. We interpret this result as a consequence of the strong mass segregation, which leads to Spitzer's instability. Stellar winds delay the onset of the instability. Our simulations strongly support the result that OCs do not attain equipartition, for a wide range of initial conditions.
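    The diagnostic in this abstract is the velocity dispersion σ as a function of stellar mass: full equipartition would give σ(m) ∝ m^(−1/2), while the simulations find a nearly flat curve. A toy sketch of the measurement (log-uniform masses and a mass-independent dispersion are stand-in assumptions, not the paper's IMF or N-body data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60000
# Toy stand-in for a broken power-law mass function: log-uniform, 0.1-10 Msun.
m = 10 ** rng.uniform(-1, 1, n)

sigma0 = 2.0  # km/s (assumed)
# Full equipartition would imply sigma(m) = sigma0 * m**-0.5; the simulations
# described above instead find a nearly flat sigma(m), mimicked here.
v = rng.normal(0.0, sigma0, size=(n, 3))

bins = np.logspace(-1, 1, 9)
idx = np.digitize(m, bins)
disps = []
for k in range(1, len(bins)):
    sel = idx == k
    disps.append(v[sel].std())
    print(f"{bins[k-1]:6.2f}-{bins[k]:6.2f} Msun  sigma = {disps[-1]:.2f} km/s")
```

    Binning the dispersion by mass like this is how the flat σ(m) curve in the abstract would be read off a simulation snapshot.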

  10. Turbulent equipartition pinch of toroidal momentum in spherical torus

    NASA Astrophysics Data System (ADS)

    Hahm, T. S.; Lee, J.; Wang, W. X.; Diamond, P. H.; Choi, G. J.; Na, D. H.; Na, Y. S.; Chung, K. J.; Hwang, Y. S.

    2014-12-01

    We present a new analytic expression for turbulent equipartition (TEP) pinch of toroidal angular momentum originating from magnetic field inhomogeneity of spherical torus (ST) plasmas. Starting from a conservative modern nonlinear gyrokinetic equation (Hahm et al 1988 Phys. Fluids 31 2670), we derive an expression for pinch to momentum diffusivity ratio without using a usual tokamak approximation of B ∝ 1/R which has been previously employed for TEP momentum pinch derivation in tokamaks (Hahm et al 2007 Phys. Plasmas 14 072302). Our new formula is evaluated for model equilibria of National Spherical Torus eXperiment (NSTX) (Ono et al 2001 Nucl. Fusion 41 1435) and Versatile Experiment Spherical Torus (VEST) (Chung et al 2013 Plasma Sci. Technol. 15 244) plasmas. Our result predicts stronger inward pinch for both cases, as compared to the prediction based on the tokamak formula.

  11. Control system design method

    DOEpatents

    Wilson, David G.; Robinett, III, Rush D.

    2012-02-21

    A control system design method and concomitant control system comprising representing a physical apparatus to be controlled as a Hamiltonian system, determining elements of the Hamiltonian system representation which are power generators, power dissipators, and power storage devices, analyzing stability and performance of the Hamiltonian system based on the results of the determining step and determining necessary and sufficient conditions for stability of the Hamiltonian system, creating a stable control system based on the results of the analyzing step, and employing the resulting control system to control the physical apparatus.

  12. The Generalized Asymptotic Equipartition Property: Necessary and Sufficient Conditions

    PubMed Central

    Harrison, Matthew T.

    2011-01-01

    Suppose a string X_1^n = (X_1, X_2, …, X_n) generated by a memoryless source (X_n)_{n≥1} with distribution P is to be compressed with distortion no greater than D ≥ 0, using a memoryless random codebook with distribution Q. The compression performance is determined by the "generalized asymptotic equipartition property" (AEP), which states that the probability of finding a D-close match between X_1^n and any given codeword Y_1^n is approximately 2^{−nR(P,Q,D)}, where the rate function R(P, Q, D) can be expressed as an infimum of relative entropies. The main purpose here is to remove various restrictive assumptions on the validity of this result that have appeared in the recent literature. Necessary and sufficient conditions for the generalized AEP are provided in the general setting of abstract alphabets and unbounded distortion measures. All possible distortion levels D ≥ 0 are considered; the source (X_n)_{n≥1} can be stationary and ergodic; and the codebook distribution can have memory. Moreover, the behavior of the matching probability is precisely characterized, even when the generalized AEP is not valid. Natural characterizations of the rate function R(P, Q, D) are established under equally general conditions. PMID:21614133
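    For the special case of exact matching (D = 0) with discrete memoryless P and Q, the rate function reduces to R(P, Q, 0) = H(P) + D(P||Q) = E_P[−log₂ Q(X)], and the 2^{−nR} behavior can be checked by Monte Carlo. A small sketch (the Bernoulli parameters are arbitrary assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([0.8, 0.2])   # memoryless source distribution (assumed values)
Q = np.array([0.6, 0.4])   # memoryless codebook distribution (assumed values)
n = 200000

x = rng.choice(2, size=n, p=P)

# For exact matching (D = 0), the probability that one random codeword
# Y ~ Q^n equals this particular x is prod_i Q(x_i); its exponent
# -(1/n) log2(.) converges to R(P, Q, 0) = H(P) + D(P||Q) bits/symbol.
match_exponent = -np.log2(Q[x]).mean()
rate = -(P * np.log2(Q)).sum()
print(match_exponent, rate)
```

    The law of large numbers drives the empirical exponent to the rate function, which is the one-line intuition behind the generalized AEP.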

  13. MODIFIED EQUIPARTITION CALCULATION FOR SUPERNOVA REMNANTS. CASES α = 0.5 AND α = 1

    SciTech Connect

    Arbutina, B.; Urošević, D.; Vučetić, M. M.; Pavlović, M. Z.; Vukotić, B.

    2013-11-01

    The equipartition or minimum energy calculation is a well-known procedure for estimating the magnetic field strength and the total energy in the magnetic field and cosmic ray particles by using only the radio synchrotron emission. In one of our previous papers, we have offered a modified equipartition calculation for supernova remnants (SNRs) with spectral indices 0.5 < α < 1. Here we extend the analysis to SNRs with α = 0.5 and α = 1.
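    The classical minimum-energy argument behind such calculations balances the cosmic-ray energy, which scales as E_cr ∝ B^(−3/2) at fixed synchrotron luminosity, against the field energy E_B ∝ B². A sketch of that textbook argument (the coefficients are arbitrary toy numbers, and this is the classical calculation, not the modified one in the paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical coefficients (toy numbers, not from the paper):
# particle energy E_cr = a * B**-1.5, field energy E_B = b * B**2.
a, b = 3.0e-4, 1.0 / (8.0 * np.pi)

total = lambda B: a * B**-1.5 + b * B**2
res = minimize_scalar(total, bounds=(1e-3, 10.0), method="bounded")

# Setting dE/dB = 0 gives the minimum-energy field analytically:
# B_min = (3a / 4b)**(2/7).
B_analytic = (3.0 * a / (4.0 * b)) ** (2.0 / 7.0)
print(res.x, B_analytic)
```

    The B^(2/7) dependence on the particle-energy coefficient is why minimum-energy field estimates are relatively insensitive to the assumed cosmic-ray content.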

  14. The FEM-2 design method

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.; Adams, L. M.; Mehrotra, P.; Vanrosendale, J.; Voigt, R. G.; Patrick, M.

    1983-01-01

    The FEM-2 parallel computer is designed using methods differing from those ordinarily employed in parallel computer design. The major distinguishing aspects are: (1) a top-down rather than bottom-up design process; (2) the design considers the entire system structure in terms of layers of virtual machines; and (3) each layer of virtual machine is defined formally during the design process. The result is a complete hardware/software system design. The basic design method is discussed and the advantages of the method are considered. A status report on the FEM-2 design is included.

  15. Aircraft digital control design methods

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Parsons, E.; Tashker, M. G.

    1976-01-01

    Variations in design methods for aircraft digital flight control are evaluated and compared. The methods fall into two categories: those where the design is done in the continuous domain (or s-plane) and those where the design is done in the discrete domain (or z-plane). Design method fidelity is evaluated by examining closed-loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps, except the uncompensated s-plane design method, which was acceptable only above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
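    The s-plane-versus-z-plane fidelity question above comes down to how well a discretization maps continuous poles to z = exp(s·dt) at a given sample rate. A minimal sketch comparing two standard discretizations of a first-order lag (the plant and sample rates are illustrative assumptions, not from the report):

```python
import numpy as np
from scipy.signal import cont2discrete

# First-order lag 1/(s + 1): continuous pole at s = -1, so the exact
# discrete pole is z = exp(-dt).
num, den = [1.0], [1.0, 1.0]

for rate in (5.0, 20.0):                  # sample rates ("cps" in the abstract)
    dt = 1.0 / rate
    for method in ("zoh", "bilinear"):
        numd, dend, _ = cont2discrete((num, den), dt, method=method)
        pole = np.roots(np.ravel(dend))[0].real
        print(f"{rate:5.1f} cps  {method:8s}  z-pole = {pole:.6f}  "
              f"exact = {np.exp(-dt):.6f}")
```

    The zero-order-hold mapping reproduces exp(−dt) exactly for this plant, while the bilinear (Tustin) mapping accumulates an O(dt³) pole error, which is what makes slow sample rates the discriminating test between design methods.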

  16. Aircraft digital control design methods

    NASA Technical Reports Server (NTRS)

    Tashker, M. G.; Powell, J. D.

    1975-01-01

    Investigations were conducted in two main areas. The first was control system design, with the goals of defining the limits of 'digitized s-plane design techniques' versus sample rate, showing the results of a 'direct digital design technique', and comparing the two methods. The second was to evaluate the roughness of autopilot designs parametrically versus sample rate. The goals of the first area were addressed by (1) an analysis of a second-order example using both design methods, (2) a linear analysis of the complete 737 aircraft with an autoland obtained using the digitized s-plane technique, (3) a linear analysis of a high-frequency 737 approximation with the autoland from a direct digital design technique, and (4) development of a simulation for evaluating the autopilots with disturbances and nonlinearities included. Roughness evaluation was studied by defining an experiment to be carried out on the Langley motion simulator and coordinated with analysis at Stanford.

  17. Screening Enhancement of Energy Equipartition in a Strongly Magnetized Nonneutral Plasma.

    NASA Astrophysics Data System (ADS)

    Bollinger, J.; Dubin, D.

    2004-11-01

    An analogy is uncovered between the nuclear reaction rate in a dense plasma and the energy equipartition rate in a strongly correlated (Γ = e²/aT ≫ 1), strongly magnetized (κ = e²Ω_c/(v̄T) ≫ 1) nonneutral plasma, where v̄ = √(T/m). When κ ≫ 1, cyclotron energy is an adiabatic invariant. This energy is shared with other degrees of freedom only through rare close collisions that break the invariant. If Γ > 1, the probability of such close collisions is greatly enhanced because surrounding charges screen the colliding pair. In the regime Γ < κ^(2/5), we find that the equipartition rate ν, defined by dT_c/dt = ν(T − T_c) (where T_c is the cyclotron temperature), is the rate without screening [M.E. Glinsky et al., Phys. Fluids B 4, 1156 (1992)] multiplied by an enhancement factor f(Γ). Interestingly, f(Γ) is identical to the enhancement factor appearing in the theory of nuclear reaction rates in dense plasmas [E.E. Salpeter and H. Van Horn, Ap. J. 155, 183 (1969)]. We present molecular dynamics simulations of equipartition. Rate enhancements of up to 10^10 are measured. The greatly enhanced rate may help to explain recent experiments that observed rapid equipartition in a Be⁺ plasma [Jensen et al., submitted to PRL].

  18. On the Equipartition of Kinetic Energy in an Ideal Gas Mixture

    ERIC Educational Resources Information Center

    Peliti, L.

    2007-01-01

    A refinement of an argument due to Maxwell for the equipartition of translational kinetic energy in a mixture of ideal gases with different masses is proposed. The argument is elementary, yet it may work as an illustration of the role of symmetry and independence postulates in kinetic theory. (Contains 1 figure.)

  19. Remarks on the Equipartition Rule and Thermodynamics of Reissner-Nordstrom Black Holes

    NASA Astrophysics Data System (ADS)

    Chen, Deyou

    2014-07-01

    In Verlinde's work, gravity is explained as an entropic force caused by changes in the information associated with the positions of material bodies. In this paper, we investigate the thermodynamic property of Reissner-Nordstrom black holes from the equipartition rule and holographic scenario. As a result, the first law of thermodynamics of the black holes is recovered.

  20. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

    The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations required for an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, technology from this academic/industrial project has been successfully transferred: this method is the method of choice for optimization problems at Northrop Grumman.
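    The core of SA is the Metropolis acceptance rule plus a cooling schedule, which is what lets it escape the local minima mentioned above. A minimal sketch on a rough 1D objective (the objective, step size, and schedule are illustrative assumptions, not the aircraft design problems of the paper):

```python
import math
import random

random.seed(42)

def f(x):
    # Rough 1D test objective with many local minima; global minimum f(0) = 0.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

x = 4.0                       # deliberately poor starting design
fx = f(x)
best_x, best_f = x, fx
T = 5.0                       # initial annealing "temperature" (assumed)
for step in range(30000):
    xn = x + random.gauss(0.0, 0.5)          # random design perturbation
    fn = f(xn)
    # Metropolis rule: always accept improvements, occasionally accept
    # uphill moves so the search can escape local minima.
    if fn < fx or random.random() < math.exp((fx - fn) / T):
        x, fx = xn, fn
        if fx < best_f:
            best_x, best_f = x, fx
    T *= 0.9997                               # geometric cooling schedule

print(best_x, best_f)
```

    A purely downhill search started at x = 4 would stall in a local basin; the temperature-dependent uphill acceptance is what gives SA its robustness on rough objective landscapes.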

  1. Design method of supercavitating pumps

    NASA Astrophysics Data System (ADS)

    Kulagin, V.; Likhachev, D.; Li, F. C.

    2016-05-01

    The problem of designing an effective supercavitating (SC) pump is solved, and the optimum load distribution along the radius of the blade is found, taking into account clearance, degree of cavitation development, influence of the finite number of blades, and centrifugal forces. Sufficient accuracy can be obtained using the equivalent flat SC-grid for the design of any SC-mechanisms, applying the "grid effect" coefficient and substituting the skewed flow calculated for grids of flat plates with infinite attached cavitation caverns. This article gives the universal design method and provides an example of SC-pump design.

  2. The relationship between noise correlation and the Green's function in the presence of degeneracy and the absence of equipartition

    USGS Publications Warehouse

    Tsai, V.C.

    2010-01-01

    Recent derivations have shown that when noise in a physical system has its energy equipartitioned into the modes of the system, there is a convenient relationship between the cross correlation of time-series recorded at two points and the Green's function of the system. Here, we show that even when energy is not fully equipartitioned and modes are allowed to be degenerate, a similar (though less general) property holds for equations with wave equation structure. This property can be used to understand why certain seismic noise correlation measurements are successful despite known degeneracy and lack of equipartition on the Earth.

  3. Modelling the structure of molecular clouds - I. A multiscale energy equipartition

    NASA Astrophysics Data System (ADS)

    Veltchev, Todor V.; Donkov, Sava; Klessen, Ralf S.

    2016-07-01

    We present a model for describing the general structure of molecular clouds (MCs) at early evolutionary stages in terms of their mass-size relationship. Sizes are defined through threshold levels at which equipartitions between gravitational, turbulent and thermal energy |W| ∼ f(E_kin + E_th) take place, adopting interdependent scaling relations of velocity dispersion and density and assuming a lognormal density distribution at each scale. Variations of the equipartition coefficient 1 ≤ f ≤ 4 allow for modelling of star-forming regions at scales within the size range of typical MCs (≳4 pc). Best fits are obtained for regions with low or no star formation (Pipe, Polaris), as well as for regions with star-forming activity but a nearly lognormal distribution of column density (Rosette). An additional numerical test of the model suggests its applicability to cloud evolutionary times prior to the formation of the first stars.

  4. DISPLACEMENT BASED SEISMIC DESIGN METHODS.

    SciTech Connect

    Hofmayer, C.; Miller, C.; Wang, Y.; Costello, J.

    2003-07-15

    A research effort was undertaken to determine the need for any changes to USNRC's seismic regulatory practice to reflect the move, in the earthquake engineering community, toward using expected displacement rather than force (or stress) as the basis for assessing design adequacy. The research explored the extent to which displacement-based seismic design methods, such as given in FEMA 273, could be useful for reviewing nuclear power stations. Two structures common to nuclear power plants were chosen to compare the results of the analysis models used. The first structure is a four-story frame structure with shear walls providing the primary lateral load system, referred to herein as the shear wall model. The second structure is the turbine building of the Diablo Canyon nuclear power plant. The models were analyzed using both displacement-based (pushover) analysis and nonlinear dynamic analysis. In addition, for the shear wall model an elastic analysis with ductility factors applied was also performed. The objectives of the work were to compare the results between the analyses, and to develop insights regarding the work that would be needed before the displacement-based analysis methodology could be considered applicable to facilities licensed by the NRC. A summary of the research results, which were published in NUREG/CR-6719 in July 2001, is presented in this paper.

  5. Comment on 'Turbulent equipartition theory of toroidal momentum pinch' [Phys. Plasmas 15, 055902 (2008)]

    SciTech Connect

    Peeters, A. G.; Angioni, C.; Strintzi, D.

    2009-03-15

    The comment addresses questions raised on the derivation of the momentum pinch velocity due to the Coriolis drift effect [A. G. Peeters et al., Phys. Rev. Lett. 98, 265003 (2007)]. These concern the definition of the gradient, and the scaling with the density gradient length. It will be shown that the turbulent equipartition mechanism is included within the derivation using the Coriolis drift, with the density gradient scaling being the consequence of drift terms not considered in [T. S. Hahm et al., Phys. Plasmas 15, 055902 (2008)]. Finally the accuracy of the analytic models is assessed through a comparison with the full gyrokinetic solution.

  6. Design of diffractive optical surfaces within the SMS design method

    NASA Astrophysics Data System (ADS)

    Mendes-Lopes, João.; Benítez, Pablo; Miñano, Juan C.

    2015-08-01

    The Simultaneous Multiple Surface (SMS) method was initially developed as a design method in Nonimaging Optics; later, the method was extended to designing Imaging Optics. We present the extension of the SMS method to the design of diffractive optical surfaces. This method involves the simultaneous calculation of N/2 diffractive surfaces, using the phase-shift properties of diffractive surfaces as an extra degree of freedom, such that N one-parameter wavefronts can be perfectly coupled. Moreover, the SMS method for diffractive surfaces is a direct method, i.e., it is not based on multi-parametric optimization techniques. Representative diffractive systems designed by the SMS method are presented.

  7. Phenomenology treatment of magnetohydrodynamic turbulence with non-equipartition and anisotropy

    SciTech Connect

    Zhou, Y; Matthaeus, W H

    2005-02-07

    Magnetohydrodynamics (MHD) turbulence theory, often employed satisfactorily in astrophysical applications, has typically focused on parameter ranges that imply nearly equal values of kinetic and magnetic energies and length scales. However, an MHD flow may have a disparate magnetic Prandtl number, dissimilar kinetic and magnetic Reynolds numbers, different kinetic and magnetic outer length scales, and strong anisotropy. Here a phenomenology for such "non-equipartitioned" MHD flow is discussed. Two conditions are proposed for an MHD flow to transition to strong turbulence, extensions of (1) Taylor's constant flux in an inertial range, and (2) Kolmogorov's scale separation between the large- and small-scale boundaries of an inertial range. For this analysis, detailed information on the turbulence structure is not needed. These two transition conditions are expected to provide consistent predictions and should be applicable to anisotropic MHD flows, after the length scales are replaced by their corresponding perpendicular components. Second, it is stressed that the dynamics and anisotropy of MHD fluctuations are controlled by the relative strength between the straining effects of eddies of similar size and the sweeping action of the large eddies, or the propagation effect of the large-scale magnetic fields, on the small scales; analysis of this balance in principle also requires consideration of non-equipartition effects.

  8. An airfoil design method for viscous flows

    NASA Technical Reports Server (NTRS)

    Malone, J. B.; Narramore, J. C.; Sankar, L. N.

    1990-01-01

    An airfoil design procedure is described that has been incorporated into an existing two-dimensional Navier-Stokes airfoil analysis method. The resulting design method, an iterative procedure based on a residual-correction algorithm, permits the automated design of airfoil sections with prescribed surface pressure distributions. This paper describes the inverse design method and the technique used to specify target pressure distributions. An example airfoil design problem is described to demonstrate application of the inverse design procedure. It shows that this inverse design method develops useful airfoil configurations with a reasonable expenditure of computer resources.

  9. Review of freeform TIR collimator design methods

    NASA Astrophysics Data System (ADS)

    Talpur, Taimoor; Herkommer, Alois

    2016-04-01

    Total internal reflection (TIR) collimators are essential illumination components providing high efficiency and uniformity in a compact geometry. Various illumination design methods have been developed for designing such collimators, including tailoring methods, design via optimization, the mapping and feedback method, and the simultaneous multiple surface (SMS) method. This paper provides an overview of the different methods and compares their performance, advantages, and limitations.

  10. Computational methods for stealth design

    SciTech Connect

    Cable, V.P.

    1992-08-01

    A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.

  11. Spacesuit Radiation Shield Design Methods

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Anderson, Brooke M.; Cucinotta, Francis A.; Ware, J.; Zeitlin, Cary J.

    2006-01-01

    Meeting radiation protection requirements during EVA is predominantly an operational issue with some potential considerations for temporary shelter. The issue of spacesuit shielding is mainly guided by the potential of accidental exposure when operational and temporary shelter considerations fail to maintain exposures within operational limits. In this case, very high exposure levels are possible which could result in observable health effects and even be life threatening. Under these assumptions, potential spacesuit radiation exposures have been studied using known historical solar particle events to gain insight on the usefulness of modification of spacesuit design in which the control of skin exposure is a critical design issue and reduction of blood forming organ exposure is desirable. Transition to a new spacesuit design including soft upper-torso and reconfigured life support hardware gives an opportunity to optimize the next generation spacesuit for reduced potential health effects during an accidental exposure.

  12. Computational Methods in Nanostructure Design

    NASA Astrophysics Data System (ADS)

    Bellesia, Giovanni; Lampoudi, Sotiria; Shea, Joan-Emma

    Self-assembling peptides can serve as building blocks for novel biomaterials. Replica exchange molecular dynamics simulations are a powerful means to probe the conformational space of these peptides. We discuss the theoretical foundations of this enhanced sampling method and its use in biomolecular simulations. We then apply this method to determine the monomeric conformations of the Alzheimer amyloid-β(12-28) peptide that can serve as initiation sites for aggregation.

  13. Influence of equipartitioning on the emittance of intense charged-particle beams

    SciTech Connect

    Wangler, T.P.; Guy, F.W.; Hofmann, I.

    1986-01-01

    We combine the ideas of kinetic energy equipartitioning and nonlinear field energy to obtain a quantitative description for rms emittance changes induced in intense beams with two degrees of freedom. We derive equations for emittance change in each plane for continuous elliptical beams and axially symmetric bunched beams, with arbitrary initial charge distributions within a constant focusing channel. The complex details of the mechanisms leading to kinetic energy transfer are not necessary to obtain the formulas. The resulting emittance growth equations contain two separate terms: the first describes emittance changes associated with the transfer of energy between the two planes; the second describes emittance growth associated with the transfer of nonlinear field energy into kinetic energy as the charge distribution changes.
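    The quantity whose growth these equations track is the statistical rms emittance, ε_rms = √(⟨x²⟩⟨x′²⟩ − ⟨xx′⟩²). A minimal sketch of that standard definition (the Gaussian beam and its sizes are assumptions of this illustration, not the paper's distributions):

```python
import numpy as np

rng = np.random.default_rng(5)

def rms_emittance(x, xp):
    """Statistical rms emittance: sqrt(<x^2><x'^2> - <x x'>^2)."""
    x = x - x.mean()
    xp = xp - xp.mean()
    return np.sqrt((x * x).mean() * (xp * xp).mean() - (x * xp).mean() ** 2)

# Uncorrelated Gaussian beam: the emittance is just sigma_x * sigma_x'.
sx, sxp = 1.0e-3, 2.0e-4        # 1 mm, 0.2 mrad (assumed beam sizes)
x = rng.normal(0, sx, 100000)
xp = rng.normal(0, sxp, 100000)
print(rms_emittance(x, xp))
```

    Emittance growth in the sense of the abstract means this invariant-looking quantity increases as nonlinear field energy is converted into kinetic energy and the x-x′ phase-space distribution filaments.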

  14. Design for validation, based on formal methods

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1990-01-01

    Validation of ultra-reliable systems decomposes into two subproblems: (1) quantification of probability of system failure due to physical failure; (2) establishing that Design Errors are not present. Methods of design, testing, and analysis of ultra-reliable software are discussed. It is concluded that a design-for-validation based on formal methods is needed for the digital flight control systems problem, and also that formal methods will play a major role in the development of future high reliability digital systems.

  15. HTGR analytical methods and design verification

    SciTech Connect

    Neylan, A.J.; Northup, T.E.

    1982-05-01

    Analytical methods for the high-temperature gas-cooled reactor (HTGR) include development, update, verification, documentation, and maintenance of all computer codes for HTGR design and analysis. This paper presents selected nuclear, structural mechanics, seismic, and systems analytical methods related to the HTGR core. This paper also reviews design verification tests in the reactor core, reactor internals, steam generator, and thermal barrier.

  16. Applications of a transonic wing design method

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Smith, Leigh A.

    1989-01-01

    A method for designing wings and airfoils at transonic speeds using a predictor/corrector approach was developed. The procedure iterates between an aerodynamic code, which predicts the flow about a given geometry, and the design module, which compares the calculated and target pressure distributions and modifies the geometry using an algorithm that relates differences in pressure to a change in surface curvature. The modular nature of the design method makes it relatively simple to couple it to any analysis method. The iterative approach allows the design process and aerodynamic analysis to converge in parallel, significantly reducing the time required to reach a final design. Viscous and static aeroelastic effects can also be accounted for during the design or as a post-design correction. Results from several pilot design codes indicated that the method accurately reproduced pressure distributions as well as the coordinates of a given airfoil or wing by modifying an initial contour. The codes were applied to supercritical as well as conventional airfoils, forward- and aft-swept transport wings, and moderate-to-highly swept fighter wings. The design method was found to be robust and efficient, even for cases having fairly strong shocks.

  17. Impeller blade design method for centrifugal compressors

    NASA Technical Reports Server (NTRS)

    Jansen, W.; Kirschner, A. M.

    1974-01-01

    The design of a centrifugal impeller with blades that are aerodynamically efficient, easy to manufacture, and mechanically sound is discussed. The blade design method described here satisfies the first two criteria and with a judicious choice of certain variables will also satisfy stress considerations. The blade shape is generated by specifying surface velocity distributions and consists of straight-line elements that connect points at hub and shroud. The method may be used to design radially elemented and backward-swept blades. The background, a brief account of the theory, and a sample design are described.

  18. Model reduction methods for control design

    NASA Technical Reports Server (NTRS)

    Dunipace, K. R.

    1988-01-01

    Several different model reduction methods are developed and detailed implementation information is provided for those methods. Command files to implement the model reduction methods in a proprietary control law analysis and design package are presented. A comparison and discussion of the various reduction techniques is included.

  19. Mixed Methods Research Designs in Counseling Psychology

    ERIC Educational Resources Information Center

    Hanson, William E.; Creswell, John W.; Clark, Vicki L. Plano; Petska, Kelly S.; Creswell, David J.

    2005-01-01

    With the increased popularity of qualitative research, researchers in counseling psychology are expanding their methodologies to include mixed methods designs. These designs involve the collection, analysis, and integration of quantitative and qualitative data in a single or multiphase study. This article presents an overview of mixed methods…

  20. Airbreathing hypersonic vehicle design and analysis methods

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.

    1996-01-01

    The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.

  1. Development of a hydraulic turbine design method

    NASA Astrophysics Data System (ADS)

    Kassanos, Ioannis; Anagnostopoulos, John; Papantonis, Dimitris

    2013-10-01

    In this paper a hydraulic turbine parametric design method is presented which is based on the combination of traditional methods and parametric surface modeling techniques. The blade of the turbine runner is described using Bezier surfaces for the definition of the meridional plane as well as the blade angle distribution, and a thickness distribution applied normal to the mean blade surface. In this way, it is possible to define parametrically the whole runner using a relatively small number of design parameters, compared to conventional methods. The above definition is then combined with a commercial CFD software and a stochastic optimization algorithm towards the development of an automated design optimization procedure. The process is demonstrated with the design of a Francis turbine runner.
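    As an illustration of the Bezier-based parameterization, here is a minimal de Casteljau evaluator for one such curve, e.g. a blade-angle distribution over a normalized meridional coordinate. The control-point values are hypothetical, not taken from the paper.

```python
def bezier(ctrl, t):
    """Evaluate a Bezier curve at t in [0, 1] by de Casteljau recursion."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# Four design parameters (degrees) for a blade-angle curve, hypothetical:
beta_ctrl = [60.0, 45.0, 30.0, 20.0]
inlet_angle = bezier(beta_ctrl, 0.0)   # the curve interpolates its endpoints
outlet_angle = bezier(beta_ctrl, 1.0)
mid = bezier(beta_ctrl, 0.5)
```

    A few control points like these stand in for the "relatively small number of design parameters" the abstract mentions; the same evaluator applies to the meridional-plane curves.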

  2. Preliminary aerothermodynamic design method for hypersonic vehicles

    NASA Technical Reports Server (NTRS)

    Harloff, G. J.; Petrie, S. L.

    1987-01-01

    Preliminary design methods are presented for vehicle aerothermodynamics. Predictions are made for Shuttle orbiter, a Mach 6 transport vehicle and a high-speed missile configuration. Rapid and accurate methods are discussed for obtaining aerodynamic coefficients and heat transfer rates for laminar and turbulent flows for vehicles at high angles of attack and hypersonic Mach numbers.

  3. Combinatorial protein design strategies using computational methods.

    PubMed

    Kono, Hidetoshi; Wang, Wei; Saven, Jeffery G

    2007-01-01

    Computational methods continue to facilitate efforts in protein design. Most of this work has focused on searching sequence space to identify one or a few sequences compatible with a given structure and functionality. Probabilistic computational methods provide information regarding the range of amino acid variability permitted by desired functional and structural constraints. Such methods may be used to guide the construction of both individual sequences and combinatorial libraries of proteins. PMID:17041256

  4. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  5. Axisymmetric inlet minimum weight design method

    NASA Technical Reports Server (NTRS)

    Nadell, Shari-Beth

    1995-01-01

    An analytical method for determining the minimum weight design of an axisymmetric supersonic inlet has been developed. The goal of this method development project was to improve the ability to predict the weight of high-speed inlets in conceptual and preliminary design. The initial model was developed using information that was available from inlet conceptual design tools (e.g., the inlet internal and external geometries and pressure distributions). Stiffened shell construction was assumed. Mass properties were computed by analyzing a parametric cubic curve representation of the inlet geometry. Design loads and stresses were developed at analysis stations along the length of the inlet. The equivalent minimum structural thicknesses for both shell and frame structures required to support the maximum loads produced by various load conditions were then determined. Preliminary results indicated that inlet hammershock pressures produced the critical design load condition for a significant portion of the inlet. By improving the accuracy of inlet weight predictions, the method will improve the fidelity of propulsion and vehicle design studies and increase the accuracy of weight versus cost studies.

  6. Analysis Method for Quantifying Vehicle Design Goals

    NASA Technical Reports Server (NTRS)

    Fimognari, Peter; Eskridge, Richard; Martin, Adam; Lee, Michael

    2007-01-01

    A document discusses a method for using Design Structure Matrices (DSM), coupled with high-level tools representing important life-cycle parameters, to comprehensively conceptualize a flight/ground space transportation system design by dealing with such variables as performance, up-front costs, downstream operations costs, and reliability. This approach also weighs operational approaches based on their effect on upstream design variables so that it is possible to readily, yet defensibly, establish linkages between operations and these upstream variables. To avoid the large range of problems that have defeated previous methods of dealing with the complex problems of transportation design, and to cut down the inefficient use of resources, the method described in the document identifies those areas that are of sufficient promise, provides a higher grade of analysis for those issues, and establishes the linkages between operations and other factors. Ultimately, the system is designed to save resources and time, and allows for the evolution of operable space transportation system technology and of design and conceptual system approach targets.

  7. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
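    A minimal sketch of this kind of three-parameter optimization (all cost coefficients, constraint values, and the grid-search strategy are invented for illustration): choose battery weight, heat engine rating, and power split to minimize a smooth life-cycle-cost surrogate while meeting a peak power demand.

```python
def life_cycle_cost(Wb, Pe, s):
    """Toy surrogate: hardware cost plus a smooth petroleum-use term."""
    petroleum = (1 - s) * 100.0 / (1.0 + Pe / 50.0)
    acquisition = 2.0 * Wb + 3.0 * Pe
    return acquisition + 5.0 * petroleum

def feasible(Wb, Pe, s, demand=80.0):
    """Toy power constraint: available battery power scales with weight."""
    battery_power = 0.5 * Wb
    return s * battery_power + (1 - s) * Pe >= demand

# Grid search over battery weight, engine rating, and power split:
best = min(
    ((Wb, Pe, s) for Wb in range(50, 201, 10)
                 for Pe in range(20, 101, 10)
                 for s in [i / 10 for i in range(11)]
     if feasible(Wb, Pe, s)),
    key=lambda p: life_cycle_cost(*p),
)
```

    The smoothness requirement noted in the fourth conclusion is why the toy cost avoids discontinuities; a real study would replace the grid search with a gradient-based or stochastic optimizer.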

  8. A sociotechnical method for designing work systems.

    PubMed

    Waterson, Patrick E; Older Gray, Melanie T; Clegg, Chris W

    2002-01-01

    The paper describes a new method for allocating work between and among humans and machines. The method consists of a series of stages, which cover how the overall work system should be organized and designed; how tasks within the work system should be allocated (human-human allocations); and how tasks involving the use of technology should be allocated (human-machine allocations). The method makes use of a series of decision criteria that allow end users to consider a range of factors relevant to function allocation, including aspects of job, organizational, and technological design. The method is described in detail using an example drawn from a workshop involving the redesign of a naval command and control (C2) subsystem. We also report preliminary details of the evaluation of the method, based on the views of participants at the workshop. A final section outlines the contribution of the work in terms of current theoretical developments within the domain of function allocation. The method has been applied to the domain of naval C2 systems; however, it is also designed for generic use within function allocation and sociotechnical work systems. PMID:12502156

  9. Standardized Radiation Shield Design Methods: 2005 HZETRN

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.

    2006-01-01

    Research conducted by the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented to capture a well-defined algorithm for engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code to be available for these early design processes. In this paper, we review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.

  10. MAST Propellant and Delivery System Design Methods

    NASA Technical Reports Server (NTRS)

    Nadeem, Uzair; Mc Cleskey, Carey M.

    2015-01-01

    A Mars Aerospace Taxi (MAST) concept and propellant storage and delivery case study is undergoing investigation by NASA's Element Design and Architectural Impact (EDAI) design and analysis forum. The MAST lander concept envisions landing with its ascent propellant storage tanks empty and supplying these reusable Mars landers with propellant that is generated and transferred while on the Mars surface. The report provides an overview of the data derived from modeling different methods of propellant line routing (or "lining") and differentiates the resulting design and operations complexity of fluid and gaseous paths based on a given set of fluid sources and destinations. The EDAI team desires a rough-order-of-magnitude algorithm for estimating the lining characteristics (i.e., the plumbing mass and complexity) associated with different numbers of vehicle propellant sources and destinations. This paper explores the feasibility of preparing a mathematically sound algorithm for this purpose, and offers a method for the EDAI team to implement.

  11. Acoustic Treatment Design Scaling Methods. Phase 2

    NASA Technical Reports Server (NTRS)

    Clark, L. (Technical Monitor); Parrott, T. (Technical Monitor); Jones, M. (Technical Monitor); Kraft, R. E.; Yu, J.; Kwan, H. W.; Beer, B.; Seybert, A. F.; Tathavadekar, P.

    2003-01-01

    The ability to design, build and test miniaturized acoustic treatment panels on scale model fan rigs representative of full scale engines provides not only cost-savings, but also an opportunity to optimize the treatment by allowing multiple tests. To use scale model treatment as a design tool, the impedance of the sub-scale liner must be known with confidence. This study was aimed at developing impedance measurement methods for high frequencies. A normal incidence impedance tube method that extends the upper frequency range to 25,000 Hz without grazing flow effects was evaluated. The free field method was investigated as a potential high frequency technique. The potential of the two-microphone in-situ impedance measurement method was evaluated in the presence of grazing flow. Difficulties in achieving the high frequency goals were encountered in all methods. Results of developing a time-domain finite difference resonator impedance model indicated that a re-interpretation of the empirical fluid mechanical models used in the frequency domain model for nonlinear resistance and mass reactance may be required. A scale model treatment design that could be tested on the Universal Propulsion Simulator vehicle was proposed.
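    The two-microphone method mentioned above is, in its no-flow normal-incidence form, the standard transfer-function technique: the complex ratio H12 of the two microphone pressures determines the reflection coefficient, and from it the normalized impedance. The sketch below assumes the usual conventions (microphone positions measured from the sample face, with x1 > x2) and uses a synthetic standing-wave field to check the recovery; it is an illustration, not the study's code.

```python
import cmath

def reflection_coefficient(h12, k, x1, x2):
    """Two-microphone transfer-function estimate of the reflection
    coefficient R; h12 = p2/p1, k = wavenumber, s = x1 - x2 = mic spacing."""
    s = x1 - x2
    return ((h12 - cmath.exp(-1j * k * s)) /
            (cmath.exp(1j * k * s) - h12)) * cmath.exp(2j * k * x1)

def normalized_impedance(r):
    """Normal-incidence surface impedance normalized by rho*c."""
    return (1 + r) / (1 - r)

# Synthetic check: build a standing-wave field with a known R and recover it.
k, x1, x2 = 40.0, 0.10, 0.07
R_true = 0.5 + 0.2j
p = lambda x: cmath.exp(1j * k * x) + R_true * cmath.exp(-1j * k * x)
R_est = reflection_coefficient(p(x2) / p(x1), k, x1, x2)
```

    The high-frequency difficulty the abstract reports shows up here as a constraint on the spacing s, which must stay well below half a wavelength for the transfer function to remain well conditioned.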

  12. 3.6 Simplified methods for design

    SciTech Connect

    Nickell, R.E.; Yahr, G.T.

    1981-01-01

    Simplified design analysis methods for elevated temperature construction are classified and reviewed. Because the major impetus for developing elevated temperature design methodology during the past ten years has been the LMFBR program, considerable emphasis is placed upon results from this source. The operating characteristics of the LMFBR are such that cycles of severe transient thermal stresses can be interspersed with normal elevated temperature operational periods of significant duration, leading to a combination of plastic and creep deformation. The various simplified methods are organized into two general categories, depending upon whether it is the material, or constitutive, model that is reduced, or the geometric modeling that is simplified. Because the elastic representation of material behavior is so prevalent, an entire section is devoted to elastic analysis methods. Finally, the validation of the simplified procedures is discussed.

  13. Geometric methods for the design of mechanisms

    NASA Astrophysics Data System (ADS)

    Stokes, Ann Westagard

    1993-01-01

    Challenges posed by the process of designing robotic mechanisms have provided a new impetus to research in the classical subjects of kinematics, elastic analysis, and multibody dynamics. Historically, mechanism designers have considered these areas of analysis to be generally separate and distinct sciences. However, there are significant classes of problems which require a combination of these methods to arrive at a satisfactory solution. For example, both the compliance and the inertia distribution strongly influence the performance of a robotic manipulator. In this thesis, geometric methods are applied to the analysis of mechanisms where kinematics, elasticity, and dynamics play fundamental and interactive roles. Tools for the mathematical analysis, design, and optimization of a class of holonomic and nonholonomic mechanisms are developed. Specific contributions of this thesis include a network theory for elasto-kinematic systems. The applicability of the network theory is demonstrated by employing it to calculate the optimal distribution of joint compliance in a serial manipulator. In addition, the advantage of applying Lie group theoretic approaches to mechanisms requiring specific dynamic properties is demonstrated by extending Brockett's product of exponentials formula to the domain of dynamics. Conditions for the design of manipulators having inertia matrices which are constant in joint angle coordinates are developed. Finally, analysis and design techniques are developed for a class of mechanisms which rectify oscillations into secular motions. These techniques are applied to the analysis of free-floating chains that can reorient themselves in zero angular momentum processes and to the analysis of rattleback tops.

  14. Reliability Methods for Shield Design Process

    NASA Technical Reports Server (NTRS)

    Tripathi, R. K.; Wilson, J. W.

    2002-01-01

    Providing protection against the hazards of space radiation is a major challenge to the exploration and development of space. The great cost of added radiation shielding is a potential limiting factor in deep space operations. In this enabling technology, we have developed methods for optimized shield design over multi-segmented missions involving multiple work and living areas in the transport and duty phase of space missions. The total shield mass over all pieces of equipment and habitats is optimized subject to career dose and dose rate constraints. An important component of this technology is the estimation of the two most commonly identified uncertainties in radiation shield design: the shielding properties of the materials used and the understanding of the biological response of the astronaut to the radiation leaking through the materials into the living space. The largest uncertainty, of course, is in the biological response to especially high charge and energy (HZE) ions of the galactic cosmic rays. These uncertainties are blended with the optimization design procedure to formulate reliability-based methods for shield design processes. The details of the methods will be discussed.

  15. Near-equipartition Jets with Log-parabola Electron Energy Distribution and the Blazar Spectral-index Diagrams

    NASA Astrophysics Data System (ADS)

    Dermer, Charles D.; Yan, Dahai; Zhang, Li; Finke, Justin D.; Lott, Benoit

    2015-08-01

    Fermi-LAT analyses show that the γ-ray photon spectral indices Γ_γ of a large sample of blazars correlate with the νF_ν peak synchrotron frequency ν_s according to the relation Γ_γ = d - k log ν_s. The same function, with different constants d and k, also describes the relationship between Γ_γ and the peak Compton frequency ν_C. This behavior is derived analytically using an equipartition blazar model with a log-parabola description of the electron energy distribution (EED). In the Thomson regime, k = k_EC = 3b/4 for external Compton (EC) processes and k = k_SSC = 9b/16 for synchrotron self-Compton (SSC) processes, where b is the log-parabola width parameter of the EED. The BL Lac object Mrk 501 is fit with a synchrotron/SSC model given by the log-parabola EED, and is best fit away from equipartition. Corrections are made to the spectral-index diagrams for a low-energy power-law EED and departures from equipartition, as constrained by absolute jet power. Analytic expressions are compared with numerical values derived from self-Compton and EC scattered γ-ray spectra from Lyα broad-line region and IR target photons. The Γ_γ versus ν_s behavior in the model depends strongly on b, with progressively and predictably weaker dependences on γ-ray detection range, variability time, and isotropic γ-ray luminosity. Implications for blazar unification and blazars as ultra-high-energy cosmic-ray sources are discussed. Arguments by Ghisellini et al. that the jet power exceeds the accretion luminosity depend on the doubtful assumption that we are viewing at the Doppler angle.
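    A small numerical companion to the quoted relations (the symbols follow the abstract; the values of b, d, and ν_s below are purely illustrative, not fitted values from the paper):

```python
import math

def k_ec(b):
    """Thomson-regime external-Compton slope, k_EC = 3b/4."""
    return 3.0 * b / 4.0

def k_ssc(b):
    """Synchrotron self-Compton slope, k_SSC = 9b/16."""
    return 9.0 * b / 16.0

def gamma_index(nu_s_hz, d, k):
    """Photon index from Gamma_gamma = d - k * log10(nu_s)."""
    return d - k * math.log10(nu_s_hz)

b = 1.0                                      # illustrative EED width
gamma_ec = gamma_index(1e13, d=12.0, k=k_ec(b))  # illustrative d, nu_s
```

    The point of the relation is visible directly: for a fixed width b, a higher peak synchrotron frequency gives a harder (smaller) γ-ray photon index, with the EC slope steeper than the SSC slope by the factor (3/4)/(9/16) = 4/3.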

  16. Waterflooding injectate design systems and methods

    SciTech Connect

    Brady, Patrick V.; Krumhansl, James L.

    2014-08-19

    A method of designing an injectate to be used in a waterflooding operation is disclosed. One aspect includes specifying data representative of chemical characteristics of a liquid hydrocarbon, a connate, and a reservoir rock, of a subterranean reservoir. Charged species at an interface of the liquid hydrocarbon are determined based on the specified data by evaluating at least one chemical reaction. Charged species at an interface of the reservoir rock are determined based on the specified data by evaluating at least one chemical reaction. An extent of surface complexation between the charged species at the interfaces of the liquid hydrocarbon and the reservoir rock is determined by evaluating at least one surface complexation reaction. The injectate is designed and is operable to decrease the extent of surface complexation between the charged species at interfaces of the liquid hydrocarbon and the reservoir rock. Other methods, apparatus, and systems are disclosed.

  17. An improved design method for EPC middleware

    NASA Astrophysics Data System (ADS)

    Lou, Guohuan; Xu, Ran; Yang, Chunming

    2014-04-01

    To address the problems and difficulties that small and medium-sized enterprises currently encounter when implementing middleware to the EPC (Electronic Product Code) ALE (Application Level Events) specification, an improved design method for EPC middleware is presented, based on an analysis of the principles of EPC middleware. The method exploits the powerful functionality of the MySQL database: the database connects the reader-writer with the upper application system, in place of developing an ALE application program interface, to achieve middleware with general functionality. The structure is simple and easy to implement and maintain. Under this structure, different types of reader-writers can be added and configured conveniently, and the expandability of the system is improved.

  18. Design methods of rhombic tensegrity structures

    NASA Astrophysics Data System (ADS)

    Feng, Xi-Qiao; Li, Yue; Cao, Yan-Ping; Yu, Shou-Wen; Gu, Yuan-Tong

    2010-08-01

    As a special type of novel flexible structures, tensegrity holds promise for many potential applications in such fields as materials science, biomechanics, civil and aerospace engineering. Rhombic systems are an important class of tensegrity structures, in which each bar constitutes the longest diagonal of a rhombus of four strings. In this paper, we address the design methods of rhombic structures based on the idea that many tensegrity structures can be constructed by assembling one-bar elementary cells. By analyzing the properties of rhombic cells, we first develop two novel schemes, namely, direct enumeration scheme and cell-substitution scheme. In addition, a facile and efficient method is presented to integrate several rhombic systems into a larger tensegrity structure. To illustrate the applications of these methods, some novel rhombic tensegrity structures are constructed.

  19. Direct optimization method for reentry trajectory design

    NASA Astrophysics Data System (ADS)

    Jallade, S.; Huber, P.; Potti, J.; Dutruel-Lecohier, G.

    The software package called `Reentry and Atmospheric Transfer Trajectory' (RATT) was developed under ESA contract for the design of atmospheric trajectories. It includes four programs: 6FD and 3FD (6- and 3-degree-of-freedom Flight Dynamics) are devoted to the simulation of the trajectory; SCA (Sensitivity and Covariance Analysis) performs covariance analysis on a given trajectory with respect to different uncertainties and error sources; and TOP (Trajectory OPtimization) provides the optimum guidance law for three-degree-of-freedom reentry or aeroassisted transfer (AAOT) trajectories. Deorbit and reorbit impulses (if necessary) can be taken into account in the optimization. A wide choice of cost functions is available to the user, such as the integrated heat flux, the sum of the velocity impulses, or a linear combination of both, for trajectory and vehicle design. The crossrange and the downrange can be maximized during the reentry trajectory. Path constraints are available on the load factor, the heat flux and the dynamic pressure. Results on these proposed options are presented. TOPPHY is the part of the TOP software corresponding to the definition and computation of the optimization problem physics. TOPPHY can interface with several optimizers with dynamic solvers: TOPOP and TROPIC, using direct collocation methods, and PROMIS, using a direct multiple shooting method. TOPOP was developed in the frame of this contract; it uses Hermite polynomials for the collocation method and the NPSOL optimizer from the NAG library. Both TROPIC and PROMIS were developed by the DLR (Deutsche Forschungsanstalt fuer Luft und Raumfahrt) and use the SLSQP optimizer. For the resolution of the dynamic equations, TROPIC uses a collocation method with splines and PROMIS uses a multiple shooting method with finite differences. The three different optimizers including dynamics were tested on the reentry trajectory of the

  20. Methods for structural design at elevated temperatures

    NASA Technical Reports Server (NTRS)

    Ellison, A. M.; Jones, W. E., Jr.; Leimbach, K. R.

    1973-01-01

    A procedure which can be used to design elevated temperature structures is discussed. The desired goal is to have the same confidence in the structural integrity at elevated temperature as the factor of safety gives on mechanical loads at room temperature. Methods of design and analysis for creep, creep rupture, and creep buckling are presented. Example problems are included to illustrate the analytical methods. Creep data for some common structural materials are presented. Appendix B is description, user's manual, and listing for the creep analysis program. The program predicts time to a given creep or to creep rupture for a material subjected to a specified stress-temperature-time spectrum. Fatigue at elevated temperature is discussed. Methods of analysis for high stress-low cycle fatigue, fatigue below the creep range, and fatigue in the creep range are included. The interaction of thermal fatigue and mechanical loads is considered, and a detailed approach to fatigue analysis is given for structures operating below the creep range.
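    One standard tool for the kind of creep-rupture prediction the program performs is the Larson-Miller parameter, LMP = T(C + log10 t_r), combined with a Robinson life-fraction sum over the stress-temperature-time spectrum. The sketch below is hedged: the report's actual correlations are not reproduced, and the LMP-versus-stress fit and the spectrum are invented for illustration.

```python
import math

def rupture_time_hours(T_rankine, lmp, C=20.0):
    """Invert LMP = T * (C + log10 t_r) for the rupture time t_r."""
    return 10.0 ** (lmp / T_rankine - C)

def damage_fraction(spectrum, lmp_of_stress, C=20.0):
    """Robinson life-fraction sum over (stress, temperature, hours) blocks;
    creep rupture is predicted when the sum reaches 1."""
    return sum(t / rupture_time_hours(T, lmp_of_stress(stress), C)
               for stress, T, t in spectrum)

# Hypothetical linear LMP-vs-stress fit and a two-block spectrum
# (stress in ksi, temperature in degrees Rankine, exposure in hours):
lmp_fit = lambda s_ksi: 46000.0 - 200.0 * s_ksi
spectrum = [(30.0, 1660.0, 100.0), (20.0, 1560.0, 500.0)]
D = damage_fraction(spectrum, lmp_fit)
```

    A spectrum like this mirrors the stress-temperature-time input the appendix program accepts; a damage fraction well below 1 indicates margin against creep rupture under the assumed fit.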

  1. Design analysis, robust methods, and stress classification

    SciTech Connect

    Bees, W.J.

    1993-01-01

    This special edition publication volume is comprised of papers presented at the 1993 ASME Pressure Vessels and Piping Conference, July 25--29, 1993 in Denver, Colorado. The papers were prepared for presentations in technical sessions developed under the auspices of the PVPD Committees on Computer Technology, Design and Analysis, Operations Applications and Components. The topics included are: Analysis of Pressure Vessels and Components; Expansion Joints; Robust Methods; Stress Classification; and Non-Linear Analysis. Individual papers have been processed separately for inclusion in the appropriate data bases.

  2. Design Method and Calibration of Moulinet

    NASA Astrophysics Data System (ADS)

    Itoh, Hirokazu; Yamada, Hirokazu; Udagawa, Sinsuke

    The formula for obtaining the absorption horsepower of a Moulinet was rewritten, and the physical meaning of the constant in the formula was clarified. Based on this study, the design method of the Moulinet and the calibration method of the Moulinet that was performed after manufacture were verified experimentally. Consequently, the following was clarified: (1) If the propeller power coefficient was taken to be the proportionality constant, the absorption horsepower of the Moulinet was proportional to the cube of the revolution speed and the fifth power of the Moulinet diameter. (2) If the Moulinet design was geometrically similar to the standard dimensions of the Aviation Technical Research Center's type-6 Moulinet, the proportionality constant C1 given in the reference could be used, and the absorption horsepower of the Moulinet was proportional to the cube of the revolution speed, the cube of the Moulinet diameter, and the side projection area of the Moulinet. (3) The proportionality constant C1 was proportional to the propeller power coefficient CP.
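    Relation (1) above is the standard propeller power law, P = C_P ρ n³ D⁵, with the propeller power coefficient C_P as the proportionality constant. A minimal sketch (the coefficient, density, and operating-point numbers are illustrative, not from the paper):

```python
def absorbed_power(cp, rho, n_rev_per_s, d_m):
    """Propeller power law: P = C_P * rho * n^3 * D^5 (SI units, watts)."""
    return cp * rho * n_rev_per_s**3 * d_m**5

P1 = absorbed_power(0.04, 1.225, 20.0, 1.0)
P2 = absorbed_power(0.04, 1.225, 40.0, 1.0)   # doubling n multiplies P by 8
P3 = absorbed_power(0.04, 1.225, 20.0, 2.0)   # doubling D multiplies P by 32
```

    The cube-of-speed and fifth-power-of-diameter scalings are exactly the dependences the calibration experiments verified.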

  3. A structural design decomposition method utilizing substructuring

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1994-01-01

    A new method of design decomposition for structural analysis and optimization is described. For this method, the structure is divided into substructures where each substructure has its structural response described by a structural-response subproblem, and its structural sizing determined from a structural-sizing subproblem. The structural responses of substructures that have rigid body modes when separated from the remainder of the structure are further decomposed into displacements that have no rigid body components, and a set of rigid body modes. The structural-response subproblems are linked together through forces determined within a structural-sizing coordination subproblem which also determines the magnitude of any rigid body displacements. Structural-sizing subproblems having constraints local to the substructures are linked together through penalty terms that are determined by a structural-sizing coordination subproblem. All the substructure structural-response subproblems are totally decoupled from each other, as are all the substructure structural-sizing subproblems, thus there is significant potential for use of parallel solution methods for these subproblems.

  4. Method for designing gas tag compositions

    DOEpatents

    Gross, Kenny C.

    1995-01-01

    For use in the manufacture of gas tags such as employed in a nuclear reactor gas tagging failure detection system, a method for designing gas tagging compositions utilizes an analytical approach wherein the final composition of a first canister of tag gas as measured by a mass spectrometer is designated as node #1. Lattice locations of tag nodes in multi-dimensional space are then used in calculating the compositions of a node #2 and each subsequent node so as to maximize the distance of each node from any combination of tag components which might be indistinguishable from another tag composition in a reactor fuel assembly. Alternatively, the measured compositions of tag gas numbers 1 and 2 may be used to fix the locations of nodes 1 and 2, with the locations of nodes 3-N then calculated for optimum tag gas composition. A single sphere defining the lattice locations of the tag nodes may be used to define approximately 20 tag nodes, while concentric spheres can extend the number of tag nodes to several hundred.
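    The node-placement idea can be sketched as a greedy farthest-point search: each new tag composition maximizes its minimum distance to the existing nodes and, as a crude stand-in for "indistinguishable combinations" of tag components, to their pairwise midpoints. The 2-D composition grid, the Euclidean metric, and the midpoint heuristic are all invented for illustration; the patented method works in multi-dimensional space with lattice locations on spheres.

```python
import itertools, math

def min_separation(candidate, nodes):
    """Distance from a candidate composition to the nearest existing node
    or pairwise-midpoint "mixture" of two nodes."""
    pts = list(nodes) + [tuple((a + b) / 2 for a, b in zip(p, q))
                         for p, q in itertools.combinations(nodes, 2)]
    return min(math.dist(candidate, p) for p in pts) if pts else float("inf")

def next_node(nodes, candidates):
    """Greedy choice: the candidate farthest from everything placed so far."""
    return max(candidates, key=lambda c: min_separation(c, nodes))

# 2-D isotope-ratio space with a unit grid of candidate compositions:
grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
nodes = [(0.2, 0.2)]          # node #1: the measured first composition
for _ in range(3):
    nodes.append(next_node(nodes, grid))
```

    The greedy criterion is the essence of the stated design rule: each new node is pushed as far as possible from any combination of existing tags that might be confused with it.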

  5. Method for designing gas tag compositions

    DOEpatents

    Gross, K.C.

    1995-04-11

    For use in the manufacture of gas tags such as employed in a nuclear reactor gas tagging failure detection system, a method for designing gas tagging compositions utilizes an analytical approach wherein the final composition of a first canister of tag gas as measured by a mass spectrometer is designated as node No. 1. Lattice locations of tag nodes in multi-dimensional space are then used in calculating the compositions of a node No. 2 and each subsequent node so as to maximize the distance of each node from any combination of tag components which might be indistinguishable from another tag composition in a reactor fuel assembly. Alternatively, the measured compositions of tag gas numbers 1 and 2 may be used to fix the locations of nodes 1 and 2, with the locations of nodes 3-N then calculated for optimum tag gas composition. A single sphere defining the lattice locations of the tag nodes may be used to define approximately 20 tag nodes, while concentric spheres can extend the number of tag nodes to several hundred. 5 figures.

  6. Research and Design of Rootkit Detection Method

    NASA Astrophysics Data System (ADS)

    Liu, Leian; Yin, Zuanxing; Shen, Yuli; Lin, Haitao; Wang, Hongjiang

    Rootkits are among the most important security issues in network communication systems, affecting the security and privacy of Internet users. Because of back doors in the operating system, a hacker can use a rootkit to attack and invade other people's computers, easily capturing passwords and message traffic to and from those computers. As rootkit technology develops, its applications become more extensive and rootkits become increasingly difficult to detect. In addition, for various reasons such as trade secrets and the difficulty of development, rootkit detection information and effective tools remain relatively scarce. In this paper, based on an in-depth analysis of rootkit detection technology, a new rootkit detection structure is designed and a new method (software), X-Anti, is proposed. Test results show that software based on the proposed structure is much more efficient than other rootkit detection software.

  7. Game Methodology for Design Methods and Tools Selection

    ERIC Educational Resources Information Center

    Ahmad, Rafiq; Lahonde, Nathalie; Omhover, Jean-françois

    2014-01-01

    Design process optimisation and intelligence are the key words of today's scientific community. A proliferation of methods has made design a convoluted area. Designers are usually afraid of selecting one method/tool over another and even expert designers may not necessarily know which method is the best to use in which circumstances. This…

  8. Translating Vision into Design: A Method for Conceptual Design Development

    NASA Technical Reports Server (NTRS)

    Carpenter, Joyce E.

    2003-01-01

    One of the most challenging tasks for engineers is the definition of design solutions that will satisfy high-level strategic visions and objectives. Even more challenging is the need to demonstrate how a particular design solution supports the high-level vision. This paper describes a process and set of system engineering tools that have been used at the Johnson Space Center to analyze and decompose high-level objectives for future human missions into design requirements that can be used to develop alternative concepts for vehicles, habitats, and other systems. Analysis and design studies of alternative concepts and approaches are used to develop recommendations for strategic investments in research and technology that support the NASA Integrated Space Plan. In addition to a description of system engineering tools, this paper includes a discussion of collaborative design practices for human exploration mission architecture studies used at the Johnson Space Center.

  9. An inverse design method for 2D airfoil

    NASA Astrophysics Data System (ADS)

    Liang, Zhi-Yong; Cui, Peng; Zhang, Gen-Bao

    2010-03-01

    Computational methods for the aerodynamic design of aircraft are applied more universally than before, and the design of an airfoil is a central problem. Most papers discuss the forward problem, but the inverse method is more useful in practical design. In this paper, the inverse design of a 2D airfoil was investigated. A finite element method based on the variational principle was used to carry out the design. Simulations showed that the method is well suited to this design task.

  10. Using Software Design Methods in CALL

    ERIC Educational Resources Information Center

    Ward, Monica

    2006-01-01

    The phrase "software design" is not one that arouses the interest of many CALL practitioners, particularly those from a humanities background. However, software design essentials are simply logical ways of going about designing a system. The fundamentals include modularity, anticipation of change, generality and an incremental approach. While CALL…

  11. Global optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality. However, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, since no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy the inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution, which reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods, although more testing is needed and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations. Since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required; such algorithms need to be developed and evaluated.
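The "organized exhaustive search" idea can be sketched as a multistart local search that skips start points unable to improve on the best minimum found so far. This is a loose stand-in for the zooming algorithm, whose details the abstract does not give; the local minimizer, test function, and pruning bound are all made up for illustration:

```python
import random

def local_min(f, x, lo, hi, step=0.1, tol=1e-6):
    """Crude coordinate-descent local minimizer on the box [lo, hi]."""
    while step > tol:
        moved = False
        for dx in (step, -step):
            y = min(hi, max(lo, x + dx))
            if f(y) < f(x):
                x, moved = y, True
        if not moved:
            step /= 2
    return x

def zoomed_search(f, lo, hi, starts=20, seed=1):
    """Multistart search with a shrinking cost bound: later starts are
    pursued only if they might beat the best minimum found so far."""
    rng = random.Random(seed)
    best_x = local_min(f, rng.uniform(lo, hi), lo, hi)
    for _ in range(starts - 1):
        x0 = rng.uniform(lo, hi)
        if f(x0) < f(best_x) + 2.0:  # 'zoom': prune hopeless start points
            x = local_min(f, x0, lo, hi)
            if f(x) < f(best_x):
                best_x = x
    return best_x

# Double-well test function: local minimum near x = +1, global near x = -1.
f = lambda x: (x * x - 1) ** 2 + 0.2 * x
xg = zoomed_search(f, -2.0, 2.0)
```

Each pruned start saves a full local-minimization run, which is where the reduction in computational burden comes from.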

  12. An Efficient Inverse Aerodynamic Design Method For Subsonic Flows

    NASA Technical Reports Server (NTRS)

    Milholen, William E., II

    2000-01-01

    Computational Fluid Dynamics based design methods are maturing to the point that they are beginning to be used in the aircraft design process. Many design methods however have demonstrated deficiencies in the leading edge region of airfoil sections. The objective of the present research is to develop an efficient inverse design method which is valid in the leading edge region. The new design method is a streamline curvature method, and a new technique is presented for modeling the variation of the streamline curvature normal to the surface. The new design method allows the surface coordinates to move normal to the surface, and has been incorporated into the Constrained Direct Iterative Surface Curvature (CDISC) design method. The accuracy and efficiency of the design method is demonstrated using both two-dimensional and three-dimensional design cases.
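The residual-correction idea — move the surface in proportion to the mismatch between the computed and prescribed pressures — can be sketched with a made-up algebraic "analysis" model standing in for the flow solver; the model, relaxation factor, and ordinates below are all illustrative:

```python
def forward(y):
    """Stand-in 'analysis' code: maps surface ordinates to a surface
    pressure distribution (a made-up monotone model, not Navier-Stokes)."""
    return [-yi - 0.1 * yi ** 3 for yi in y]

def inverse_design(p_target, y0, relax=0.5, iters=200):
    """Residual-correction loop: nudge each surface coordinate by the
    local pressure residual until the target distribution is met."""
    y = list(y0)
    for _ in range(iters):
        p = forward(y)
        y = [yi + relax * (pi - ti) for yi, pi, ti in zip(y, p, p_target)]
    return y

# Manufacture a consistent target, then recover the geometry from it.
y_star = [0.2, -0.1, 0.4]
p_target = forward(y_star)
y = inverse_design(p_target, [0.0, 0.0, 0.0])
```

The loop converges because the update is a contraction for this model; a real design code wraps the same structure around a CFD analysis and a geometry constraint step.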

  13. Design optimization method for Francis turbine

    NASA Astrophysics Data System (ADS)

    Kawajiri, H.; Enomoto, Y.; Kurosawa, S.

    2014-03-01

    This paper presents a design optimization system coupled with CFD. The optimization algorithm of the system employs particle swarm optimization (PSO). Blade shape design is carried out with a NURBS curve defined by a series of control points. The system was applied to designing the stationary vanes and the runner of a higher-specific-speed Francis turbine. As the first step, single-objective optimization was performed on the stay vane profile; the second step was multi-objective optimization of the runner over a wide operating range. As a result, it was confirmed that the design system is useful for the development of hydro turbines.
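The optimization loop can be illustrated with a textbook global-best PSO over a handful of control-point variables; the CFD evaluation is replaced here by a made-up algebraic objective, and all coefficients are generic defaults rather than the paper's settings:

```python
import random

def pso(f, dim, n=20, iters=60, lo=-1.0, hi=1.0,
        w=0.7, c1=1.5, c2=1.5, seed=3):
    """Plain global-best particle swarm optimizer (textbook PSO)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [list(x) for x in xs]        # personal best positions
    gb = min(pb, key=f)               # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pb[i][d] - xs[i][d])
                            + c2 * rng.random() * (gb[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            if f(xs[i]) < f(pb[i]):
                pb[i] = list(xs[i])
        gb = min(pb, key=f)
    return gb

# Hypothetical objective: deviation of 'blade control points' from a
# target camber line (stands in for a CFD-evaluated loss).
target = [0.1, 0.3, 0.2, 0.0]
loss = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
best = pso(loss, dim=4)
```

In the paper's setting each objective evaluation is a CFD run, so the particle count and iteration budget dominate the cost.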

  14. Alternative methods for the design of jet engine control systems

    NASA Technical Reports Server (NTRS)

    Sain, M. K.; Leake, R. J.; Basso, R.; Gejji, R.; Maloney, A.; Seshadri, V.

    1976-01-01

    Various alternatives to linear quadratic design methods for jet engine control systems are discussed. The main alternatives are classified into two broad categories: nonlinear global mathematical programming methods and linear local multivariable frequency domain methods. Specific studies within these categories include model reduction, the eigenvalue locus method, the inverse Nyquist method, polynomial design, dynamic programming, and conjugate gradient approaches.

  15. Demystifying Mixed Methods Research Design: A Review of the Literature

    ERIC Educational Resources Information Center

    Caruth, Gail D.

    2013-01-01

    Mixed methods research evolved in response to the observed limitations of both quantitative and qualitative designs and is a more complex method. The purpose of this paper was to examine mixed methods research in an attempt to demystify the design thereby allowing those less familiar with its design an opportunity to utilize it in future research.…

  16. Airfoil design method using the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Malone, J. B.; Narramore, J. C.; Sankar, L. N.

    1991-01-01

    An airfoil design procedure is described that was incorporated into an existing 2-D Navier-Stokes airfoil analysis method. The resulting design method, an iterative procedure based on a residual-correction algorithm, permits the automated design of airfoil sections with prescribed surface pressure distributions. The inverse design method and the technique used to specify target pressure distributions are described, and several example problems demonstrate application of the design procedure. The results show that this inverse design method develops useful airfoil configurations with a reasonable expenditure of computer resources.

  17. A Method of Integrated Description of Design Information for Reusability

    NASA Astrophysics Data System (ADS)

    Tsumaya, Akira; Nagae, Masao; Wakamatsu, Hidefumi; Shirase, Keiichi; Arai, Eiji

    Much of product design is executed concurrently these days. Such concurrent design needs a method by which various kinds of design information can be shared and reused among designers. However, complete understanding of design information among designers has been a difficult issue. In this paper, a design process model that makes use of designers' intention is proposed, together with a method to combine the design process information and the design object information. We introduce how to describe designers' intention by providing some databases. The Keyword Database consists of ontological data related to design objects/activities. Designers select suitable keywords from the Keyword Database and explain the reasons/ideas behind their design activities in descriptions using those keywords. We also developed an integrated design information management system architecture using this method of integrated description with designers' intention. This system realizes connections between the information related to the design process and that related to the design object through designers' intention. Designers can thereby communicate with each other to understand how others make decisions in design. Designers can also reuse both design process information and design object information through a database management subsystem.

  18. JASMINE design and method of data reduction

    NASA Astrophysics Data System (ADS)

    Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Niwa, Yoshito

    2008-07-01

    The Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with 10 μarcsec accuracy. We use z-band CCDs to avoid dust absorption, and observe an area of about 10 × 20 degrees around the Galactic bulge region. Because the stellar density is very high, the individual fields of view can be combined with high accuracy, and with 5 years of observation we will construct a 10 μarcsec accurate map. In this poster, we show the observation strategy, the design of the JASMINE hardware, the reduction scheme, and the error budget. We have also constructed simulation software named the JASMINE Simulator, and we show the simulation results and the design of the software.

  19. Lithography aware overlay metrology target design method

    NASA Astrophysics Data System (ADS)

    Lee, Myungjun; Smith, Mark D.; Lee, Joonseuk; Jung, Mirim; Lee, Honggoo; Kim, Youngsik; Han, Sangjun; Adel, Michael E.; Lee, Kangsan; Lee, Dohwa; Choi, Dongsub; Liu, Zephyr; Itzkovich, Tal; Levinski, Vladimir; Levy, Ady

    2016-03-01

    We present a metrology target design (MTD) framework based on co-optimizing lithography and metrology performance. The overlay metrology performance is strongly related to the target design and optimizing the target under different process variations in a high NA optical lithography tool and measurement conditions in a metrology tool becomes critical for sub-20nm nodes. The lithography performance can be quantified by device matching and printability metrics, while accuracy and precision metrics are used to quantify the metrology performance. Based on using these metrics, we demonstrate how the optimized target can improve target printability while maintaining the good metrology performance for rotated dipole illumination used for printing a sub-100nm diagonal feature in a memory active layer. The remaining challenges and the existing tradeoff between metrology and lithography performance are explored with the metrology target designer's perspective. The proposed target design framework is completely general and can be used to optimize targets for different lithography conditions. The results from our analysis are both physically sensible and in good agreement with experimental results.

  20. Probabilistic Methods for Structural Design and Reliability

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Whitlow, Woodrow, Jr. (Technical Monitor)

    2002-01-01

    This report describes a formal method to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level where primitive variables with their respective scatter ranges are used to describe behavior. Computational simulation is then used to propagate the uncertainties to the structural scale where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from this method demonstrate, that it is mature and that it can be used to probabilistically evaluate turbine engine structural components. It may be inferred from the results that the method is suitable for probabilistically predicting the remaining life in aging or in deteriorating structures, for making strategic projections and plans, and for achieving better, cheaper, faster products that give competitive advantages in world markets.
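Propagating primitive-variable scatter to the structural scale is, in spirit, a sampling computation. A minimal Monte Carlo sketch follows, with a toy stress model and made-up scatter values standing in for the report's turbine-engine components:

```python
import math
import random

def propagate(model, dists, n=20000, seed=7):
    """Monte Carlo propagation: sample primitive variables from their
    scatter (mean, std) pairs and collect the induced response scatter."""
    rng = random.Random(seed)
    out = [model(*(rng.gauss(mu, sd) for mu, sd in dists))
           for _ in range(n)]
    mean = sum(out) / n
    var = sum((o - mean) ** 2 for o in out) / (n - 1)
    return mean, math.sqrt(var), out

# Toy 'component stress' model with uncertain load and area
# (illustrative primitive variables, not the report's models).
stress = lambda load, area: load / area
mean, sd, samples = propagate(stress, [(1000.0, 50.0), (10.0, 0.2)])

# Reliability against an assumed allowable of 120 stress units.
reliability = sum(s < 120.0 for s in samples) / len(samples)
```

The report's computational simulation plays the role of `model` here; the specified damage tolerance or reliability is then read off the propagated distribution.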

  1. A comparison of digital flight control design methods

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Parsons, E.; Tashker, M. G.

    1976-01-01

    Many variations in design methods for aircraft digital flight control have been proposed in the literature. In general, the methods fall into two categories: those where the design is done in the continuous domain (or s-plane), and those where the design is done in the discrete domain (or z-plane). This paper evaluates several variations of each category and compares them for various flight control modes of the Langley TCV Boeing 737 aircraft. Design method fidelity is evaluated by examining closed loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps except the 'uncompensated s-plane design' method which was acceptable above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
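The s-plane-design-then-discretize route can be illustrated with a Tustin (bilinear) discretization of a first-order continuous element; at 20 samples per second, in line with the paper's acceptable-rate findings, the discrete step response tracks the continuous one closely. This is a generic example, not the TCV 737 control laws:

```python
import math

def tustin_lowpass(tau, T):
    """Tustin discretization of H(s) = 1/(tau*s + 1): returns (a, b)
    for the difference equation y[k] = a*y[k-1] + b*(u[k] + u[k-1])."""
    c = 2 * tau / T
    return (c - 1) / (c + 1), 1 / (c + 1)

def step_response(tau, T, steps):
    """Discrete unit-step response of the Tustin-discretized filter."""
    a, b = tustin_lowpass(tau, T)
    y, out = 0.0, []
    for _ in range(steps):
        y = a * y + b * (1 + 1)   # unit step: u[k] = u[k-1] = 1
        out.append(y)
    return out

tau, T = 1.0, 0.05          # 1 s time constant, 20 samples per second
y = step_response(tau, T, 100)       # 5 s of response
exact = 1 - math.exp(-5 / tau)       # continuous response at t = 5 s
```

At slower sample rates the mismatch between `y` and the continuous response grows, which is the degradation the paper measures across design methods.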

  2. Soft Computing Methods in Design of Superalloys

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.
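The pairing described — a model of K(sub a) versus composition, searched by a genetic algorithm — can be sketched with a small real-coded GA over a stand-in analytic surrogate. The real model is a trained neural network; the composition variables, bounds, and surrogate below are made up:

```python
import random

# Stand-in surrogate for the oxidation-attack parameter Ka as a function
# of two composition fractions (the real model is a trained neural net).
ka = lambda cr, al: (cr - 0.15) ** 2 + (al - 0.07) ** 2 + 0.01

def ga_minimize(f, bounds, pop=30, gens=40, seed=5):
    """Minimal real-coded genetic algorithm: tournament selection,
    midpoint (blend) crossover, gaussian mutation, box clamping."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop:
            a = min(rng.sample(P, 3), key=lambda x: f(*x))  # tournament
            b = min(rng.sample(P, 3), key=lambda x: f(*x))
            child = [0.5 * (u + v) + rng.gauss(0, 0.01)     # blend + mutate
                     for u, v in zip(a, b)]
            child = [min(hi, max(lo, c))
                     for c, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        P = nxt
    return min(P, key=lambda x: f(*x))

best = ga_minimize(ka, [(0.0, 0.4), (0.0, 0.1)])
```

The GA never needs gradients of the surrogate, which is why it pairs naturally with a black-box neural-network model.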

  3. A Mechanistic Neural Field Theory of How Anesthesia Suppresses Consciousness: Synaptic Drive Dynamics, Bifurcations, Attractors, and Partial State Equipartitioning.

    PubMed

    Hou, Saing Paul; Haddad, Wassim M; Meskin, Nader; Bailey, James M

    2015-12-01

    With the advances in biochemistry, molecular biology, and neurochemistry there has been impressive progress in understanding the molecular properties of anesthetic agents. However, there has been little focus on how the molecular properties of anesthetic agents lead to the observed macroscopic property that defines the anesthetic state, that is, lack of responsiveness to noxious stimuli. In this paper, we use dynamical system theory to develop a mechanistic mean field model for neural activity to study the abrupt transition from consciousness to unconsciousness as the concentration of the anesthetic agent increases. The proposed synaptic drive firing-rate model predicts the conscious-unconscious transition as the applied anesthetic concentration increases, where excitatory neural activity is characterized by a Poincaré-Andronov-Hopf bifurcation with the awake state transitioning to a stable limit cycle and then subsequently to an asymptotically stable unconscious equilibrium state. Furthermore, we address the more general question of synchronization and partial state equipartitioning of neural activity without mean field assumptions. This is done by focusing on a postulated subset of inhibitory neurons that are not themselves connected to other inhibitory neurons. Finally, several numerical experiments are presented to illustrate the different aspects of the proposed theory. PMID:26438186

  4. An overview of very high level software design methods

    NASA Technical Reports Server (NTRS)

    Asdjodi, Maryam; Hooper, James W.

    1988-01-01

    Very High Level design methods emphasize automatic transfer of requirements to formal design specifications, and/or may concentrate on automatic transformation of formal design specifications that include some semantic information of the system into machine executable form. Very high level design methods range from general domain independent methods to approaches implementable for specific applications or domains. Applying AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming and other methods different approaches for higher level software design are being developed. Though one finds that a given approach does not always fall exactly in any specific class, this paper provides a classification for very high level design methods including examples for each class. These methods are analyzed and compared based on their basic approaches, strengths and feasibility for future expansion toward automatic development of software systems.

  5. The Triton: Design concepts and methods

    NASA Technical Reports Server (NTRS)

    Meholic, Greg; Singer, Michael; Vanryn, Percy; Brown, Rhonda; Tella, Gustavo; Harvey, Bob

    1992-01-01

    During the design of the C & P Aerospace Triton, a few problems were encountered that necessitated changes in the configuration. After the initial concept phase, the aspect ratio was increased from 7 to 7.6 to produce a greater lift to drag ratio (L/D = 13) which satisfied the horsepower requirements (118 hp using the Lycoming O-235 engine). The initial concept had a wing planform area of 134 sq. ft. Detailed wing sizing analysis enlarged the planform area to 150 sq. ft., without changing its layout or location. The most significant changes, however, were made just prior to inboard profile design. The fuselage external diameter was reduced from 54 to 50 inches to reduce drag to meet the desired cruise speed of 120 knots. Also, the nose was extended 6 inches to accommodate landing gear placement. Without the extension, the nosewheel received an unacceptable percentage (25 percent) of the landing weight. The final change in the configuration was made in accordance with the stability and control analysis. In order to reduce the static margin from 20 to 13 percent, the horizontal tail area was reduced from 32.02 to 25.0 sq. ft. The Triton meets all the specifications set forth in the design criteria. If time permitted another iteration of the calculations, two significant changes would be made. The vertical stabilizer area would be reduced to decrease the aircraft lateral stability slope since the current value was too high in relation to the directional stability slope. Also, the aileron size would be decreased to reduce the roll rate below the current 106 deg/second. Doing so would allow greater flap area (increasing CL(sub max)) and thus reduce the overall wing area. C & P would also recalculate the horsepower and drag values to further validate the 120 knot cruising speed.

  6. A survey on methods of design features identification

    NASA Astrophysics Data System (ADS)

    Grabowik, C.; Kalinowski, K.; Paprocka, I.; Kempa, W.

    2015-11-01

    It is widely accepted that design features are one of the most attractive methods of integrating most fields of engineering activity, such as design modelling, process planning, or production scheduling. One of the most important tasks realized in the integration of design and planning functions is design translation, meant as the mapping of design data into data that are important from the process planning point of view, that is, manufacturing data. A design geometrical shape translation process can be realized with one of the following strategies: (i) designing with a previously prepared design features library, also known as the DBF (design by feature) method, (ii) interactive design features recognition (IFR), (iii) automatic design features recognition (AFR). In the DBF method, the design geometrical shape is created with design features. There are two basic approaches to design modelling in the DBF method: classic, in which a part design is modelled from beginning to end with design features previously stored in a design features database, and hybrid, in which a part is partially created with standard predefined CAD system tools and the rest with suitable design features. Automatic feature recognition consists of an autonomous search of a product model, represented with a specific design representation method, for those model features which might potentially be recognized as design features, manufacturing features, etc. This approach needs a searching algorithm, which should allow the whole recognition process to be carried out without user supervision. Currently there are many AFR methods. These methods most often require the product model to be represented with a B-Rep representation, rarely CSG, and very rarely wireframe. In the IFR method, potential features are recognized by a user. This process is most often realized by a user who points out those surfaces which seem to belong to a

  7. A flexible layout design method for passive micromixers.

    PubMed

    Deng, Yongbo; Liu, Zhenyu; Zhang, Ping; Liu, Yongshun; Gao, Qingyong; Wu, Yihui

    2012-10-01

    This paper discusses a flexible layout design method for passive micromixers based on the topology optimization of fluidic flows. Unlike trial-and-error methods, this method obtains the detailed layout of a passive micromixer according to the desired mixing performance by solving a topology optimization problem. The dependence on the experience of the designer is therefore weakened when this method is used to design a passive micromixer with acceptable mixing performance. Several design disciplines for passive micromixers are considered to demonstrate the flexibility of the layout design method. These design disciplines include the approximation of the real 3D micromixer, manufacturing feasibility, spatially periodic design, and the effects of the Péclet number and Reynolds number on the designs obtained by this layout design method. The capability of this design method is validated by several comparisons between the obtained layouts and the optimized designs in recently published literature, where the value of the mixing measurement is improved by up to 40.4% for one cycle of the micromixer. PMID:22736305

  8. Design Methods and Optimization for Morphing Aircraft

    NASA Technical Reports Server (NTRS)

    Crossley, William A.

    2005-01-01

    This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features that uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon on two competing aircraft performance objectives. The second area has been titled morphing as an independent variable and formulates the sizing of a morphing aircraft as an optimization problem in which the amount of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.
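The Pareto-optimal screening in the first accomplishment reduces, for two competing performance objectives, to a dominance filter over candidate designs. A minimal sketch follows, with toy (weight, fuel-burn)-style pairs rather than the study's sizing-code outputs:

```python
def pareto_front(points):
    """Keep designs not dominated by any other design, where both
    objectives are to be minimized: q dominates p if q <= p in both."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

# Hypothetical (objective 1, objective 2) pairs for fixed geometries.
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(designs)
```

Morphing features are attractive exactly where the front is steep: no single fixed geometry does well on both objectives at once.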

  9. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, constraints are imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods, and the variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods, which may be attributed to structural indeterminacy; it is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when the simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph, whose center corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, while weight can be reduced to a small value for a most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.
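The traditional-versus-stochastic contrast, and the growth of weight with reliability, can be illustrated with a one-bar sizing toy. All numbers are made up, and a normal load scatter is assumed; the deterministic design back-calculates the area from the allowable with a safety factor, while the stochastic design picks the smallest area meeting a target reliability:

```python
from statistics import NormalDist

# Toy bar-sizing comparison (illustrative numbers, not the paper's cases).
load_mu, load_sd = 1000.0, 100.0   # load: mean and scatter (N)
allow = 250.0                      # allowable stress (N/mm^2)

# Traditional: back-calculate area from the constraint, safety factor 1.5.
area_trad = 1.5 * load_mu / allow

# Stochastic: smallest area meeting a target reliability against overload.
def area_for_reliability(rel):
    z = NormalDist().inv_cdf(rel)      # load quantile to be covered
    return (load_mu + z * load_sd) / allow

area_99 = area_for_reliability(0.99)    # lighter, more failure-prone
area_999 = area_for_reliability(0.999)  # heavier as reliability rises
```

As the target failure rate approaches zero the quantile, and hence the area (weight), grows without bound, which is the heavy end of the inverted-S graph.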

  10. Analytical techniques for instrument design - matrix methods

    SciTech Connect

    Robinson, R.A.

    1997-09-01

    We take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ, and 2 dummy variables)), integration of one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's Law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. We will argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We will also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
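The "integration of one or more variables" step has a closed form for gaussians: integrating a variable out of exp(-x^T M x / 2) replaces M by the Schur complement of the eliminated diagonal entry, and inverting the reduced matrix gives the variance-covariance matrix. A 3-variable sketch with toy numbers (not a real 6-dimensional spectrometer matrix):

```python
def integrate_out(M, k):
    """Integrate variable k out of a gaussian exp(-x^T M x / 2): the
    remaining quadratic form is the Schur complement of M[k][k]."""
    n = len(M)
    idx = [i for i in range(n) if i != k]
    return [[M[i][j] - M[i][k] * M[k][j] / M[k][k] for j in idx]
            for i in idx]

def inverse_2x2(M):
    """Invert the reduced 2x2 form: the variance-covariance matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Toy symmetric 'resolution' matrix; variable 2 plays a dummy variable.
M = [[2.0, 0.5, 0.3],
     [0.5, 1.0, 0.2],
     [0.3, 0.2, 1.5]]
R = integrate_out(M, 2)   # reduced 2x2 quadratic form
C = inverse_2x2(R)        # variance-covariance of the kept variables
```

Diagonalisation, linear coordinate changes, and constrained integrations are likewise matrix operations on M, which is what makes a general toolbox feasible.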

  11. Design Method for Single-Blade Centrifugal Pump Impeller

    NASA Astrophysics Data System (ADS)

    Nishi, Yasuyuki; Fujiwara, Ryota; Fukutomi, Junichiro

    Sewage pumps are required to have high pump efficiency and good performance in passing foreign bodies; the impellers used for these purposes therefore require a large passed particle size (the minimum particle size that can pass through the pump). However, because the conventional design method for pump impellers results in a small impeller exit width, it is difficult to apply to the design of the single-blade centrifugal pump impellers used in sewage pumps. This paper proposes a design method for single-blade centrifugal pump impellers. The head curve of an impeller designed by the proposed method satisfied the design specifications, and pump efficiency was over 62%, higher than that of a conventional single-blade centrifugal pump impeller. Comparing design values with CFD analysis values, the suction velocity ratio of the design parameters agreed well, but the relative velocity ratio did not, owing to the influence of backflow at the impeller entrance.

  12. Methods for very high temperature design

    SciTech Connect

    Blass, J.J.; Corum, J.M.; Chang, S.J.

    1989-01-01

    Design rules and procedures for high-temperature, gas-cooled reactor components are being formulated as an ASME Boiler and Pressure Vessel Code Case. A draft of the Case, patterned after Code Case N-47 and limited to Inconel 617 and temperatures of 982°C (1800°F) or less, will be completed in 1989 for consideration by relevant Code committees. The purpose of this paper is to provide a synopsis of the significant differences between the draft Case and N-47, and to provide more complete accounts of the development of allowable stress and stress rupture values and of isochronous stress vs strain curves, in both of which Oak Ridge National Laboratory (ORNL) played a principal role. The isochronous curves, which represent average behavior for many heats of Inconel 617, were based in part on a unified constitutive model developed at ORNL. Details of this model of inelastic deformation behavior, which does not distinguish between rate-dependent plasticity and time-dependent creep, are also provided, along with comparisons between calculated and observed results of tests conducted on a typical heat of Inconel 617 by the General Electric Company for the Department of Energy. 4 refs., 15 figs., 1 tab.

  13. Analytical techniques for instrument design -- Matrix methods

    SciTech Connect

    Robinson, R.A.

    1997-12-31

    The authors take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalization to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, they discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix. They show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the Gaussian approximation. They argue that a generalized program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. They also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
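    The toolbox of operations on Gaussian quadratic forms can be illustrated in miniature: each instrument element contributes a transmission proportional to exp(-xᵀMx/2), so multiplying element transmissions simply adds their matrices, and the covariance (resolution widths) of the combined Gaussian is the inverse of the sum. A dependency-free sketch with hypothetical 2x2 matrices (the real Cooper-Nathans matrix is 6-dimensional):

    ```python
    # Sketch: combining Gaussian instrument elements. Each element contributes
    # exp(-x^T M_i x / 2); multiplying transmissions adds the matrices M_i, and
    # the resolution covariance comes from the inverse of the sum.
    def mat_add(A, B):
        return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

    def inv2(M):
        """Inverse of a 2x2 matrix (enough for this sketch)."""
        (a, b), (c, d) = M
        det = a * d - b * c
        return [[d / det, -b / det], [-c / det, a / det]]

    M1 = [[2.0, 0.5], [0.5, 1.0]]   # hypothetical collimator term
    M2 = [[1.0, 0.0], [0.0, 3.0]]   # hypothetical monochromator term
    M = mat_add(M1, M2)             # combined quadratic form
    cov = inv2(M)                   # covariance of the combined Gaussian
    print(cov[0][0], cov[1][1])     # variances of the two coordinates
    ```

    The same additive structure is what makes the matrix formalism cheap compared with sampling: combining elements is matrix addition, not convolution.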

  14. Perspectives toward the stereotype production method for public symbol design: a case study of novice designers.

    PubMed

    Ng, Annie W Y; Siu, Kin Wai Michael; Chan, Chetwyn C H

    2013-01-01

    This study investigated the practices and attitudes of novice designers toward user involvement in public symbol design at the conceptual design stage, i.e. the stereotype production method. Differences between male and female novice designers were examined. Forty-eight novice designers (24 male, 24 female) were asked to design public symbol referents based on suggestions made by a group of users in a previous study and provide feedback with regard to the design process. The novice designers were receptive to the adoption of user suggestions in the conception of the design, but tended to modify the pictorial representations generated by the users to varying extents. It is also significant that the male and female novice designers appeared to emphasize different aspects of user suggestions, and the female novice designers were more positive toward these suggestions than their male counterparts. The findings should aid the optimization of the stereotype production method for user-involved symbol design. PMID:22632980

  15. HEALTHY study rationale, design and methods

    PubMed Central

    2009-01-01

    The HEALTHY primary prevention trial was designed and implemented in response to the growing numbers of children and adolescents being diagnosed with type 2 diabetes. The objective was to moderate risk factors for type 2 diabetes. Modifiable risk factors measured were indicators of adiposity and glycemic dysregulation: body mass index ≥85th percentile, fasting glucose ≥5.55 mmol l-1 (100 mg per 100 ml) and fasting insulin ≥180 pmol l-1 (30 μU ml-1). A series of pilot studies established the feasibility of performing data collection procedures and tested the development of an intervention consisting of four integrated components: (1) changes in the quantity and nutritional quality of food and beverage offerings throughout the total school food environment; (2) physical education class lesson plans and accompanying equipment to increase both participation and number of minutes spent in moderate-to-vigorous physical activity; (3) brief classroom activities and family outreach vehicles to increase knowledge, enhance decision-making skills and support and reinforce youth in accomplishing goals; and (4) communications and social marketing strategies to enhance and promote changes through messages, images, events and activities. Expert study staff provided training, assistance, materials and guidance for school faculty and staff to implement the intervention components. A cohort of students was enrolled in sixth grade and followed to the end of eighth grade. They attended health screening data collections at baseline and end of study that involved measurement of height, weight, blood pressure, waist circumference and a fasting blood draw. Height and weight were also collected at the end of the seventh grade. The study was conducted in 42 middle schools, six at each of seven locations across the country, with 21 schools randomized to receive the intervention and 21 to act as controls (data collection activities only). Middle school was the unit of sample size and

  16. Experimental design for improved ceramic processing, emphasizing the Taguchi Method

    SciTech Connect

    Weiser, M.W. (Mechanical Engineering Dept.); Fong, K.B.

    1993-12-01

    Ceramic processing often requires substantial experimentation to produce acceptable product quality and performance. This is a consequence of ceramic processes depending upon a multitude of factors, some of which can be controlled and others that are beyond the control of the manufacturer. Statistical design of experiments is a procedure that allows quick, economical, and accurate evaluation of processes and products that depend upon several variables. Designed experiments are sets of tests in which the variables are adjusted methodically. A well-designed experiment yields unambiguous results at minimal cost. A poorly designed experiment may reveal little information of value even with complex analysis, wasting valuable time and resources. This article reviews the most common experimental designs, including both nonstatistical designs and the much more powerful statistical experimental designs. The Taguchi Method developed by Genichi Taguchi is discussed in some detail. The Taguchi method, based upon fractional factorial experiments, is a powerful tool for optimizing product and process performance.
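    The fractional-factorial idea behind the Taguchi Method can be sketched on a hypothetical L4(2³) orthogonal array: four runs cover three two-level factors, and per-factor main effects and a signal-to-noise ratio are computed. The factor layout and response values below are invented for illustration, not data from the article:

    ```python
    # Hedged sketch: analyzing a hypothetical Taguchi L4(2^3) experiment.
    import math

    # L4 orthogonal array: 4 runs, 3 two-level factors (levels coded 0/1).
    # Every pair of columns contains each level combination equally often.
    L4 = [
        (0, 0, 0),
        (0, 1, 1),
        (1, 0, 1),
        (1, 1, 0),
    ]

    # Hypothetical measured responses (e.g., fired density) for each run
    responses = [92.1, 94.3, 90.8, 95.0]

    def snr_larger_is_better(ys):
        """Taguchi signal-to-noise ratio for a 'larger is better' response."""
        return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

    def main_effects(array, ys):
        """Average response at each level of each factor."""
        n_factors = len(array[0])
        effects = []
        for f in range(n_factors):
            lvl0 = [y for row, y in zip(array, ys) if row[f] == 0]
            lvl1 = [y for row, y in zip(array, ys) if row[f] == 1]
            effects.append((sum(lvl0) / len(lvl0), sum(lvl1) / len(lvl1)))
        return effects

    for f, (m0, m1) in enumerate(main_effects(L4, responses)):
        print(f"factor {f}: level0 mean={m0:.2f}, level1 mean={m1:.2f}")
    ```

    Four runs resolve the main effects of three factors, versus eight runs for the full factorial; that economy is the appeal of such designs when interactions are assumed small.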

  17. A new interval optimization method considering tolerance design

    NASA Astrophysics Data System (ADS)

    Jiang, C.; Xie, H. C.; Zhang, Z. G.; Han, X.

    2015-12-01

    This study considers the design variable uncertainty in the actual manufacturing process for a product or structure and proposes a new interval optimization method based on tolerance design, which can provide not only an optimal design but also the allowable maximal manufacturing errors that the design can bear. The design variables' manufacturing errors are depicted using the interval method, and an interval optimization model for the structure is constructed. A dimensionless design tolerance index is defined to describe the overall uncertainty of all design variables, and by combining the nominal objective function, a deterministic two-objective optimization model is built. The possibility degree of interval is used to represent the reliability of the constraints under uncertainty, through which the model is transformed to a deterministic optimization problem. Three numerical examples are investigated to verify the effectiveness of the present method.
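    One building block of such a tolerance-aware formulation can be sketched directly: given nominal design variables and interval half-widths (tolerances), the worst- and best-case objective over the tolerance box can be bounded by vertex enumeration, which is exact when the objective is monotonic in each variable over the box. The objective and numbers below are illustrative assumptions, not the paper's model:

    ```python
    # Hedged sketch: interval bounds of an objective over a tolerance box.
    import itertools

    def interval_bounds(f, centers, tol):
        """Best/worst-case objective over the tolerance box by vertex
        enumeration (exact only for functions monotonic in each variable)."""
        vals = [f(v) for v in itertools.product(
            *[(c - t, c + t) for c, t in zip(centers, tol)])]
        return min(vals), max(vals)

    # Hypothetical two-variable objective (monotonic over the box below)
    f = lambda x: (x[0] - 1) ** 2 + 2 * x[1]
    lo, hi = interval_bounds(f, centers=[1.5, 2.0], tol=[0.1, 0.05])
    print(lo, hi)
    ```

    In the paper's two-objective setting, intervals like `[lo, hi]` would feed the possibility-degree treatment of the constraints; this sketch only shows how a manufacturing tolerance turns a point design into an interval response.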

  18. Equipartition Gamma-Ray Blazars and the Location of the Gamma-Ray Emission Site in 3C 279

    NASA Astrophysics Data System (ADS)

    Dermer, Charles D.; Cerruti, Matteo; Lott, Benoit; Boisson, Catherine; Zech, Andreas

    2014-02-01

    Blazar spectral models generally have numerous unconstrained parameters, leading to ambiguous values for physical properties like Doppler factor δ_D or fluid magnetic field B′. To help remedy this problem, a few modifications of the standard leptonic blazar jet scenario are considered. First, a log-parabola function for the electron distribution is used. Second, analytic expressions relating energy loss and kinematics to blazar luminosity and variability, written in terms of equipartition parameters, imply δ_D, B′, and the peak electron Lorentz factor γ′_pk. The external radiation field in a blazar is approximated by Lyα radiation from the broad-line region (BLR) and ≈0.1 eV infrared radiation from a dusty torus. When used to model 3C 279 spectral energy distributions from 2008 and 2009 reported by Hayashida et al., we derive δ_D ~ 20-30, B′ ~ few G, and total (IR + BLR) external radiation field energy densities u ~ 10⁻²-10⁻³ erg cm⁻³, implying an origin of the γ-ray emission site in 3C 279 at the outer edges of the BLR. This is consistent with the γ-ray emission site being located at a distance R ≲ Γ²ct_var ~ 0.1(Γ/30)²(t_var/10⁴ s) pc from the black hole powering 3C 279's jets, where t_var is the variability timescale of the radiation in the source frame, and at farther distances for narrow-jet and magnetic-reconnection models. Excess ≳5 GeV γ-ray emission observed with Fermi LAT from 3C 279 challenges the model, opening the possibility of a second leptonic component or a hadronic origin of the emission. For low hadronic content, absolute jet powers of ≈10% of the Eddington luminosity are calculated.

  19. An analytical method for designing low noise helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.

    1978-01-01

    The development and experimental validation of a method for analytically modeling the noise mechanism in the helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise reducing potential of alternative transmission design details. Examples are discussed.

  20. What Can Mixed Methods Designs Offer Professional Development Program Evaluators?

    ERIC Educational Resources Information Center

    Giordano, Victoria; Nevin, Ann

    2007-01-01

    In this paper, the authors describe the benefits and pitfalls of mixed methods designs. They argue that mixed methods designs may be preferred when evaluating professional development programs for p-K-12 education given the new call for accountability in making data-driven decisions. They summarize and critique the studies in terms of limitations…

  1. Artificial Intelligence Methods: Challenge in Computer Based Polymer Design

    NASA Astrophysics Data System (ADS)

    Rusu, Teodora; Pinteala, Mariana; Cartwright, Hugh

    2009-08-01

    This paper deals with the use of Artificial Intelligence Methods (AI) in the design of new molecules possessing desired physical, chemical and biological properties. This is an important and difficult problem in the chemical, material and pharmaceutical industries. Traditional methods involve a laborious and expensive trial-and-error procedure, but computer-assisted approaches offer many advantages in the automation of molecular design.

  2. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
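    The contrast between the first two propagation methods can be sketched on a toy problem: for a linear weight model, the first-order method of moments is exact, and Monte Carlo sampling should reproduce it. The weight relation, input variables, and uncertainty levels below are invented for illustration, not the program described in the paper:

    ```python
    # Hedged sketch: uncertainty propagation by Monte Carlo vs. method of moments.
    import random
    import statistics

    def weight_model(wing_area, aspect_ratio):
        """Hypothetical conceptual-design weight relation (illustrative only)."""
        return 500.0 + 12.0 * wing_area + 80.0 * aspect_ratio

    random.seed(0)
    # Input uncertainties in two configuration design variables
    samples = [
        weight_model(random.gauss(30.0, 1.0), random.gauss(8.0, 0.2))
        for _ in range(20000)
    ]
    mc_mean = statistics.mean(samples)
    mc_std = statistics.stdev(samples)

    # First-order method of moments: for this linear model the propagation
    # is exact (sensitivities are the model coefficients).
    mom_mean = weight_model(30.0, 8.0)
    mom_std = ((12.0 * 1.0) ** 2 + (80.0 * 0.2) ** 2) ** 0.5
    print(f"MC: {mc_mean:.1f} ± {mc_std:.1f}, moments: {mom_mean:.1f} ± {mom_std:.1f}")
    ```

    On a smooth model the two agree; the discontinuous design spaces mentioned in the abstract are precisely where gradient-based moment methods break down and sampling-based methods earn their cost.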

  3. Expanding color design methods for architecture and allied disciplines

    NASA Astrophysics Data System (ADS)

    Linton, Harold E.

    2002-06-01

    The color design processes of visual artists, architects, designers, and theoreticians included in this presentation reflect the practical role of color in architecture. What the color design professional brings to the architectural design team is an expertise and rich sensibility made up of a broad awareness and a finely tuned visual perception. This includes a knowledge of design and its history, expertise with industrial color materials and their methods of application, an awareness of design context and cultural identity, a background in physiology and psychology as it relates to human welfare, and an ability to problem-solve and respond creatively to design concepts with innovative ideas. The broadening of the definition of the colorist's role in architectural design provides architects, artists and designers with significant opportunities for continued professional and educational development.

  4. Design methods for fault-tolerant finite state machines

    NASA Technical Reports Server (NTRS)

    Niranjan, Shailesh; Frenzel, James F.

    1993-01-01

    VLSI electronic circuits are increasingly being used in space-borne applications where high levels of radiation may induce faults, known as single event upsets. In this paper we review the classical methods of designing fault tolerant digital systems, with an emphasis on those methods which are particularly suitable for VLSI-implementation of finite state machines. Four methods are presented and will be compared in terms of design complexity, circuit size, and estimated circuit delay.

  5. Aerodynamic design optimization by using a continuous adjoint method

    NASA Astrophysics Data System (ADS)

    Luo, JiaQi; Xiong, JunTao; Liu, Feng

    2014-07-01

    This paper presents the fundamentals of a continuous adjoint method and the applications of this method to the aerodynamic design optimization of both external and internal flows. General formulation of the continuous adjoint equations and the corresponding boundary conditions are derived. With the adjoint method, the complete gradient information needed in the design optimization can be obtained by solving the governing flow equations and the corresponding adjoint equations only once for each cost function, regardless of the number of design parameters. An inverse design of airfoil is firstly performed to study the accuracy of the adjoint gradient and the effectiveness of the adjoint method as an inverse design method. Then the method is used to perform a series of single and multiple point design optimization problems involving the drag reduction of airfoil, wing, and wing-body configuration, and the aerodynamic performance improvement of turbine and compressor blade rows. The results demonstrate that the continuous adjoint method can efficiently and significantly improve the aerodynamic performance of the design in a shape optimization problem.
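    The key economy of the adjoint approach (one extra solve per cost function, however many design parameters there are) can be shown on a discrete analogue. Here the "flow solver" is a tiny linear system A u = b(p) with cost J = c·u; a single adjoint solve Aᵀλ = c then yields every gradient component as dJ/dp_i = λ·∂b/∂p_i. The system and the parameter-to-right-hand-side map are illustrative assumptions, not the paper's flow equations:

    ```python
    # Minimal discrete adjoint sketch (illustrative, not a Navier-Stokes solver).
    def solve(M, rhs):
        """Direct solve of a 2x2 system, keeping the sketch dependency-free."""
        (a, b), (c, d) = M
        det = a * d - b * c
        return [(d * rhs[0] - b * rhs[1]) / det,
                (-c * rhs[0] + a * rhs[1]) / det]

    A = [[4.0, 1.0], [1.0, 3.0]]
    At = [[4.0, 1.0], [1.0, 3.0]]  # A is symmetric here, so A^T = A
    cost_vec = [1.0, 2.0]          # J = cost_vec · u

    def b_of_p(p):
        """Hypothetical map from design parameters to the right-hand side."""
        return [p[0], p[0] * p[1]]

    p = [2.0, 5.0]
    lam = solve(At, cost_vec)      # one adjoint solve: A^T λ = c

    # dJ/dp_i = λ · ∂b/∂p_i  -- no additional state solves per parameter
    db_dp0 = [1.0, p[1]]
    db_dp1 = [0.0, p[0]]
    grad = [sum(l * d for l, d in zip(lam, v)) for v in (db_dp0, db_dp1)]
    print(grad)
    ```

    With finite differences the cost of the gradient grows with the number of design parameters; with the adjoint it stays at one state solve plus one adjoint solve, which is what makes shape optimization with many parameters tractable.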

  6. Tabu search method with random moves for globally optimal design

    NASA Astrophysics Data System (ADS)

    Hu, Nanfang

    1992-09-01

    Optimum engineering design problems are usually formulated as non-convex optimization problems of continuous variables. Because of the absence of convexity structure, they can have multiple minima, and global optimization becomes difficult. Traditional methods of optimization, such as penalty methods, can often be trapped at a local optimum. The tabu search method with random moves is introduced to solve these problems approximately. Its reliability and efficiency are examined with the help of standard test functions. Analysis of the implementations shows that this method is easy to use and requires no derivative information. It outperforms the random search method and a composite genetic algorithm. In particular, it is applied to minimum weight design examples of a three-bar truss, coil springs, a Z-section and a channel section. For the channel section, the optimal design using the tabu search method with random moves saved 26.14 percent over the weight of the SUMT method.
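    A minimal sketch of the derivative-free idea described above: candidate points are drawn randomly around the current point, moves into recently visited regions are forbidden via a short tabu list, and occasional non-improving moves provide diversification. The acceptance rule, tabu discretization, and test function here are assumptions for illustration, not the paper's exact algorithm:

    ```python
    # Hedged sketch of tabu search with random moves (no derivatives needed).
    import random

    def tabu_search_random_moves(f, x0, step=0.5, iters=300, tabu_len=15, seed=1):
        rng = random.Random(seed)
        x = list(x0)
        best_x, best_f = list(x), f(x)
        tabu = []                      # recently visited cells are forbidden
        for _ in range(iters):
            cand = [xi + rng.uniform(-step, step) for xi in x]
            key = tuple(round(c, 1) for c in cand)  # coarse cell as tabu key
            if key in tabu:
                continue
            tabu.append(key)
            if len(tabu) > tabu_len:
                tabu.pop(0)            # forget the oldest tabu entry
            # accept improving moves; occasionally accept a worse move
            if f(cand) < f(x) or rng.random() < 0.1:
                x = cand
                if f(x) < best_f:
                    best_x, best_f = list(x), f(x)
        return best_x, best_f

    # Non-convex test function with multiple minima (at x0 = ±2, x1 = 1)
    f = lambda x: (x[0] ** 2 - 4) ** 2 + (x[1] - 1) ** 2
    best_x, best_f = tabu_search_random_moves(f, [3.0, 3.0])
    print(best_x, best_f)
    ```

    The tabu memory is what distinguishes this from pure random search: revisiting recently explored cells is forbidden, pushing the search into new regions even when the current neighborhood looks good.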

  7. An inverse method with regularity condition for transonic airfoil design

    NASA Technical Reports Server (NTRS)

    Zhu, Ziqiang; Xia, Zhixun; Wu, Liyi

    1991-01-01

    It is known from Lighthill's exact solution of the incompressible inverse problem that in the inverse design problem, the surface pressure distribution and the free stream speed cannot both be prescribed independently. This implies the existence of a constraint on the prescribed pressure distribution. The same constraint exists at compressible speeds. Presented here is an inverse design method for transonic airfoils. In this method, the target pressure distribution contains a free parameter that is adjusted during the computation to satisfy the regularity condition. Some design results are presented in order to demonstrate the capabilities of the method.

  8. An artificial viscosity method for the design of supercritical airfoils

    NASA Technical Reports Server (NTRS)

    Mcfadden, G. B.

    1979-01-01

    A numerical technique is presented for the design of two-dimensional supercritical wing sections with low wave drag. The method is a design mode of the analysis code H which gives excellent agreement with experimental results and is widely used in the aircraft industry. Topics covered include the partial differential equations of transonic flow, the computational procedure and results; the design procedure; a convergence theorem; and description of the code.

  9. Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective

    ERIC Educational Resources Information Center

    Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith

    2010-01-01

    The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…

  10. 77 FR 55832 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of a New Equivalent Method

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-11

    ... made under the provisions of 40 CFR part 53, as amended on August 31, 2011 (76 FR 54326-54341). The... AGENCY Ambient Air Monitoring Reference and Equivalent Methods: Designation of a New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method...

  11. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... 53, as amended on August 31, 2011 (76 FR 54326-54341). The new equivalent methods are automated... AGENCY Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new...

  12. Equipartition gamma-ray blazars and the location of the gamma-ray emission site in 3C 279

    SciTech Connect

    Dermer, Charles D.; Cerruti, Matteo; Lott, Benoit

    2014-02-20

    Blazar spectral models generally have numerous unconstrained parameters, leading to ambiguous values for physical properties like Doppler factor δ_D or fluid magnetic field B′. To help remedy this problem, a few modifications of the standard leptonic blazar jet scenario are considered. First, a log-parabola function for the electron distribution is used. Second, analytic expressions relating energy loss and kinematics to blazar luminosity and variability, written in terms of equipartition parameters, imply δ_D, B′, and the peak electron Lorentz factor γ′_pk. The external radiation field in a blazar is approximated by Lyα radiation from the broad-line region (BLR) and ≈0.1 eV infrared radiation from a dusty torus. When used to model 3C 279 spectral energy distributions from 2008 and 2009 reported by Hayashida et al., we derive δ_D ∼ 20-30, B′ ∼ few G, and total (IR + BLR) external radiation field energy densities u ∼ 10⁻²-10⁻³ erg cm⁻³, implying an origin of the γ-ray emission site in 3C 279 at the outer edges of the BLR. This is consistent with the γ-ray emission site being located at a distance R ≲ Γ²ct_var ∼ 0.1(Γ/30)²(t_var/10⁴ s) pc from the black hole powering 3C 279's jets, where t_var is the variability timescale of the radiation in the source frame, and at farther distances for narrow-jet and magnetic-reconnection models. Excess ≳5 GeV γ-ray emission observed with Fermi LAT from 3C 279 challenges the model, opening the possibility of a second leptonic component or a hadronic origin of the emission. For low hadronic content, absolute jet powers of ≈10% of the Eddington luminosity are calculated.

  13. Investigating the Use of Design Methods by Capstone Design Students at Clemson University

    ERIC Educational Resources Information Center

    Miller, W. Stuart; Summers, Joshua D.

    2013-01-01

    The authors describe a preliminary study to understand the attitude of engineering students regarding the use of design methods in projects to identify the factors either affecting or influencing the use of these methods by novice engineers. A senior undergraduate capstone design course at Clemson University, consisting of approximately fifty…

  14. Two-Method Planned Missing Designs for Longitudinal Research

    ERIC Educational Resources Information Center

    Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D.

    2014-01-01

    We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…

  15. New directions for Artificial Intelligence (AI) methods in optimum design

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1989-01-01

    Developments and applications of artificial intelligence (AI) methods in the design of structural systems is reviewed. Principal shortcomings in the current approach are emphasized, and the need for some degree of formalism in the development environment for such design tools is underscored. Emphasis is placed on efforts to integrate algorithmic computations in expert systems.

  16. Approximate method of designing a two-element airfoil

    NASA Astrophysics Data System (ADS)

    Abzalilov, D. F.; Mardanov, R. F.

    2011-09-01

    An approximate method is proposed for designing a two-element airfoil. The method is based on reducing an inverse boundary-value problem in a doubly connected domain to a problem in a singly connected domain located on a multisheet Riemann surface. The essence of the method is replacement of channels between the airfoil elements by channels of flow suction and blowing. The shape of these channels asymptotically tends to the annular shape of channels passing to infinity on the second sheet of the Riemann surface. The proposed method can be extended to designing multielement airfoils.

  17. New knowledge network evaluation method for design rationale management

    NASA Astrophysics Data System (ADS)

    Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao

    2015-01-01

    Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives: design rationale structure scale, association knowledge and reasoning ability, degree of design justification support, and degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge network and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that provides an objective metric and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to give more effective guidance and support for the application and management of DR knowledge.

  18. Design method for four-reflector type beam waveguide systems

    NASA Technical Reports Server (NTRS)

    Betsudan, S.; Katagi, T.; Urasaki, S.

    1986-01-01

    Discussed is a method for the design of four-reflector type beam waveguide feed systems, comprised of a conical horn and four focused reflectors, which are used widely as the primary reflector systems for communications satellite Earth station antennas. The design parameters for these systems are clarified, the relations between the parameters are brought out based on the beam mode development, and the independent design parameters are specified. The characteristics of these systems, namely spillover loss, crosspolarization components, and frequency characteristics, and their relation to the design parameters, are also shown. It is also indicated that the design parameters which determine the dimensions of the conical horn or the shape of the focused reflectors can be uniquely established once the design standard for the system has been selected as either: (1) minimizing the crosspolarization component while keeping the spillover loss within acceptable limits, or (2) minimizing the spillover loss while maintaining the crosspolarization components below an acceptable level, and the independent design parameters, such as the respective sizes of the focused reflectors and the distances between them, have been established according to mechanical restrictions. A sample design is also shown. In addition to clarifying the effects of each design parameter on the system and improving insight into these systems, this design method also increases the efficiency of designing them.

  19. A multidisciplinary optimization method for designing boundary layer ingesting inlets

    NASA Astrophysics Data System (ADS)

    Rodriguez, David Leonard

    2001-07-01

    The Blended-Wing-Body is a conceptual aircraft design with rear-mounted, over-wing engines. Two types of engine installations have been considered for this aircraft. One installation is quite conventional with podded engines mounted on pylons. The other installation has partially buried engines with boundary layer ingesting inlets. Although ingesting the low-momentum flow in a boundary layer can improve propulsive efficiency, poor inlet performance can offset and even overwhelm this potential advantage. For both designs, the tight coupling between the aircraft aerodynamics and the propulsion system poses a difficult design integration problem. This dissertation presents a design method that solves the problem using multidisciplinary optimization. A Navier-Stokes flow solver, an engine analysis method, and a nonlinear optimizer are combined into a design tool that correctly addresses the tight coupling of the problem. The method is first applied to a model 2D problem to expedite development and thoroughly test the scheme. The low computational cost of the 2D method allows for several inlet installations to be optimized and analyzed. The method is then upgraded by using a validated 3D Navier-Stokes solver. The two candidate engine installations are analyzed and optimized using this inlet design method. The method is shown to be quite effective at integrating the propulsion and aerodynamic systems of the Blended-Wing-Body for both engine installations by improving overall performance and satisfying any specified design constraints. By comparing the two optimized designs, the potential advantages of ingesting boundary layer flow for this aircraft are demonstrated.

  20. Epidemiological designs for vaccine safety assessment: methods and pitfalls.

    PubMed

    Andrews, Nick

    2012-09-01

    Three commonly used designs for vaccine safety assessment post licensure are cohort, case-control and self-controlled case series. These methods are often used with routine health databases and immunisation registries. This paper considers the issues that may arise when designing an epidemiological study, such as understanding the vaccine safety question, case definition and finding, limitations of data sources, uncontrolled confounding, and pitfalls that apply to the individual designs. The example of MMR and autism, where all three designs have been used, is presented to help consider these issues. PMID:21985898

  1. Design of diffractive optical surfaces within the nonimaging SMS design method

    NASA Astrophysics Data System (ADS)

    Mendes-Lopes, João.; Benítez, Pablo; Miñano, Juan C.

    2015-09-01

    The Simultaneous Multiple Surface (SMS) method was initially developed as a design method in Nonimaging Optics; later, the method was extended to the design of Imaging Optics. We show an extension of the SMS method to diffractive surfaces. Using this method, diffractive kinoform surfaces are calculated simultaneously and through a direct method, i.e. one not based on multi-parametric optimization techniques. Using the phase-shift properties of diffractive surfaces as an extra degree of freedom, only N/2 surfaces are needed to perfectly couple N one-parameter wavefronts. Wavefronts of different wavelengths can also be coupled, hence chromatic aberration can be corrected in SMS-based systems. The method can combine and calculate simultaneously reflective, refractive, and diffractive surfaces, through direct calculation of the phase and refractive/reflective profiles. Representative diffractive systems designed by the SMS method are presented.

  2. A Bright Future for Evolutionary Methods in Drug Design.

    PubMed

    Le, Tu C; Winkler, David A

    2015-08-01

    Most medicinal chemists understand that chemical space is extremely large, essentially infinite. Although high-throughput experimental methods allow exploration of drug-like space more rapidly, they are still insufficient to fully exploit the opportunities that such large chemical space offers. Evolutionary methods can synergistically blend automated synthesis and characterization methods with computational design to identify promising regions of chemical space more efficiently. We describe how evolutionary methods are implemented, and provide examples of published drug development research in which these methods have generated molecules with increased efficacy. We anticipate that evolutionary methods will play an important role in future drug discovery. PMID:26059362
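    The evolutionary loop described above (generate, score, select, recombine, mutate) can be sketched with a toy genetic algorithm in which candidate "molecules" are bit strings scored by a surrogate fitness. Everything here, the encoding, the operators, and the target pattern, is an illustrative assumption rather than a real drug-design pipeline:

    ```python
    # Hedged sketch of an evolutionary search over a toy molecular encoding.
    import random

    def evolve(fitness, n_bits=20, pop_size=30, generations=60, seed=42):
        """Toy genetic algorithm: selection, one-point crossover, point mutation.
        The top half of each generation survives unchanged (elitism)."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]          # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, n_bits)      # one-point crossover
                child = a[:cut] + b[cut:]
                i = rng.randrange(n_bits)           # point mutation
                child[i] ^= 1
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    # Surrogate fitness: reward matching a hypothetical target substructure
    target = [1, 0] * 10
    fitness = lambda mol: sum(m == t for m, t in zip(mol, target))
    best = evolve(fitness)
    print(fitness(best))
    ```

    In the setting the review describes, the surrogate fitness would be replaced by automated synthesis and assay results, closing the loop between computation and experiment.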

  3. A comparison of methods for DPLL loop filter design

    NASA Technical Reports Server (NTRS)

    Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.

    1986-01-01

    Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second approach designs a filter that minimizes, in discrete time, a weighted combination of the variance of the phase error due to noise and the sum square of the deterministic phase error component; the third method uses Kalman filter estimation theory to design a filter composed of a least squares fading memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare the different designs, and includes stability, steady state performance and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to the loop bandwidth, as the performance approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.

  4. Novel parameter-based flexure bearing design method

    NASA Astrophysics Data System (ADS)

    Amoedo, Simon; Thebaud, Edouard; Gschwendtner, Michael; White, David

    2016-06-01

    A parameter study was carried out on the design variables of a flexure bearing to be used in a Stirling engine with a fixed axial displacement and a fixed outer diameter. A design method was developed in order to assist identification of the optimum bearing configuration. This was achieved through a parameter study of the bearing carried out with ANSYS®. The parameters varied were the number and the width of the arms, the thickness of the bearing, the eccentricity, the size of the starting and ending holes, and the turn angle of the spiral. Comparison was made between the different designs in terms of axial and radial stiffness, the natural frequency, and the maximum induced stresses. Moreover, the Finite Element Analysis (FEA) was compared to theoretical results for a given design. The results led to a graphical design method which assists the selection of flexure bearing geometrical parameters based on pre-determined geometric and material constraints.

  5. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2012-01-01

    The design knowledge of modern mechatronic products centers on information processing in knowledge-intensive engineering, so product design innovation is essentially innovation in knowledge and information processing. Based on an analysis of the role of mechatronic product design knowledge and of information management features, a unified XML-based product information processing model is proposed. The information processing model of product design comprises functional knowledge, structural knowledge, and their relationships. XML-based representations are proposed for product function elements, product structure elements, and the mapping relationship between function and structure. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is distinctly helpful for knowledge-based design systems and product innovation.
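
    A minimal sketch of such a unified XML model is shown below; the element and attribute names are hypothetical, since the abstract does not give the paper's actual schema.

```python
# Hypothetical sketch of the unified XML product model: function
# elements, structure elements, and function-structure mappings.
# All element/attribute names are illustrative, not the paper's schema.
import xml.etree.ElementTree as ET

product = ET.Element("product", name="parallel_friction_roller")

functions = ET.SubElement(product, "functions")
f1 = ET.SubElement(functions, "function", id="F1")
f1.text = "transmit torque"

structures = ET.SubElement(product, "structures")
s1 = ET.SubElement(structures, "structure", id="S1")
s1.text = "roller shaft"

# The function-structure mapping ties design knowledge together.
mappings = ET.SubElement(product, "mappings")
ET.SubElement(mappings, "map", function="F1", structure="S1")

xml_text = ET.tostring(product, encoding="unicode")
```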

  7. A computational design method for transonic turbomachinery cascades

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Dulikravich, D. S.

    1982-01-01

    This paper describes a systematic computational procedure for finding the configuration changes necessary to make the resulting flow past turbomachinery cascades, channels, and nozzles shock-free at prescribed transonic operating conditions. The method is based on a finite-area transonic analysis technique and the fictitious gas approach. This design scheme has two major areas of application. First, it can be used for the design of supercritical cascades, with applications mainly in compressor blade design. Second, it provides subsonic inlet shapes, including sonic surfaces with suitable initial data, for the design of supersonic (accelerated) exits, such as nozzles and turbine cascade shapes. This fast, accurate, and economical method, with a proven potential for application to three-dimensional flows, is illustrated by some design examples.

  8. Risk-based methods applicable to ranking conceptual designs

    SciTech Connect

    Breeding, R.J.; Ortiz, K.; Ringland, J.T.; Lim, J.J.

    1993-11-01

    In Genichi Taguchi's latest book on quality engineering, an emphasis is placed on robust design processes in which quality engineering techniques are brought "upstream," that is, they are utilized as early as possible, preferably in the conceptual design stage. This approach was used in a study of possible future safety system designs for weapons. As an experiment, a method was developed for using probabilistic risk analysis (PRA) techniques to rank conceptual designs for performance against a safety metric for ultimate incorporation into a Pugh matrix evaluation. This represents a high-level UW application of PRA methods to weapons. As with most conceptual designs, details of the implementation were not yet developed; many of the components had never been built, let alone tested. Therefore, our application of risk assessment methods was forced to be at such a high level that the entire evaluation could be performed on a spreadsheet. Nonetheless, the method produced numerical estimates of safety in a manner that was consistent, reproducible, and scrutable. The results enabled us to rank designs to identify areas where returns on research efforts would be the greatest. The numerical estimates were calibrated against what is achievable by current weapon safety systems. The use of expert judgement is inescapable, but these judgements are explicit and the method is easily implemented in a spreadsheet computer program.
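
    A spreadsheet-level calculation of the kind described can be sketched as follows; the concept names, subsystem failure probabilities, and the assumption of independent safeguards are all illustrative, not from the study.

```python
# Hypothetical sketch of a spreadsheet-level PRA ranking: each concept
# lists independent safety-subsystem failure probabilities (expert
# estimates); an unsafe outcome requires all subsystems to fail.
# Numbers are illustrative, not from the study.

concepts = {
    "concept_A": [1e-2, 5e-3, 1e-3],
    "concept_B": [1e-2, 1e-2],
    "concept_C": [5e-3, 5e-3, 5e-3],
}

def unsafe_probability(failure_probs):
    """All independent safeguards must fail for an unsafe outcome."""
    p = 1.0
    for q in failure_probs:
        p *= q
    return p

# Rank concepts from safest (lowest unsafe probability) to least safe.
ranked = sorted(concepts, key=lambda c: unsafe_probability(concepts[c]))
```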

  9. INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN

    EPA Science Inventory

    The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...

  10. Designing, Teaching, and Evaluating Two Complementary Mixed Methods Research Courses

    ERIC Educational Resources Information Center

    Christ, Thomas W.

    2009-01-01

    Teaching mixed methods research is difficult. This longitudinal explanatory study examined how two classes were designed, taught, and evaluated. Curriculum, Research, and Teaching (EDCS-606) and Mixed Methods Research (EDCS-780) used a research proposal generation process to highlight the importance of the purpose, research question and…

  11. FRP bolted flanged connections -- Modern design and fabrication methods

    SciTech Connect

    Blach, A.E.; Sun, L.

    1995-11-01

    Bolted flanged connections for fiber reinforced plastic (FRP) pipes and pressure vessels are of great importance for any user of FRP material in fluid containment applications. At present, no dimensional standards or design rules exist for FRP flanges. Most often, flanges are fabricated to dimensional standards for metallic flanges without questioning their applicability to FRP materials. This paper discusses simplified and exact design methods for composite flanges, based on isotropic material design and on laminate theory design. Both, exact and simplified methods are included. Results of various design methods are then compared with experimental results from strain gage measurements on test pressure vessels. Methods of flange fabrication such as hand lay-up, injection molding, filament winding, and others, are discussed for their relative merits in pressure vessel and piping applications. Both, integral and bonded flanges are covered as applicable to the various methods of fabrication, also the economic implications of these methods. Also treated are the problems of gasket selection, bolting and overbolting, gasket stresses, and leakage of flanged connections.

  12. Optimal Input Signal Design for Data-Centric Estimation Methods

    PubMed Central

    Deshpande, Sunil; Rivera, Daniel E.

    2013-01-01

    Data-centric estimation methods such as Model-on-Demand and Direct Weight Optimization form attractive techniques for estimating unknown functions from noisy data. These methods rely on generating a local function approximation from a database of regressors at the current operating point with the process repeated at each new operating point. This paper examines the design of optimal input signals formulated to produce informative data to be used by local modeling procedures. The proposed method specifically addresses the distribution of the regressor vectors. The design is examined for a linear time-invariant system under amplitude constraints on the input. The resulting optimization problem is solved using semidefinite relaxation methods. Numerical examples show the benefits in comparison to a classical PRBS input design. PMID:24317042
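
    The classical PRBS baseline mentioned above can be generated with a maximal-length linear-feedback shift register; this sketch uses a standard PRBS-7 tap configuration and a ±1 amplitude to reflect an input amplitude constraint (all parameters illustrative).

```python
# Sketch of a classical PRBS input of the kind the paper uses as a
# baseline: a 7-bit Fibonacci LFSR with taps 7 and 6, a maximal-length
# configuration with period 2**7 - 1 = 127. Amplitude +/-1 mimics an
# input amplitude constraint.

def prbs7(n_samples, seed=0b1111111):
    state = seed  # any nonzero 7-bit seed
    out = []
    for _ in range(n_samples):
        bit = ((state >> 6) ^ (state >> 5)) & 1  # feedback from taps 7, 6
        state = ((state << 1) | bit) & 0x7F      # shift in the feedback bit
        out.append(1.0 if bit else -1.0)
    return out

signal = prbs7(254)  # two full periods
```

    Over one full period a maximal-length sequence is nearly balanced (64 ones vs. 63 zeros), which gives the PRBS its flat, white-noise-like spectrum.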

  14. Test methods and design allowables for fibrous composites. Volume 2

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Editor)

    1989-01-01

    Topics discussed include extreme/hostile environment testing, establishing design allowables, and property/behavior specific testing. Papers are presented on environmental effects on the high strain rate properties of graphite/epoxy composite, the low-temperature performance of short-fiber reinforced thermoplastics, the abrasive wear behavior of unidirectional and woven graphite fiber/PEEK, test methods for determining design allowables for fiber reinforced composites, and statistical methods for calculating material allowables for MIL-HDBK-17. Attention is also given to a test method to measure the response of composite materials under reversed cyclic loads, a through-the-thickness strength specimen for composites, the use of torsion tubes to measure in-plane shear properties of filament-wound composites, the influence of test fixture design on the Iosipescu shear test for fiber composite materials, and a method for monitoring in-plane shear modulus in fatigue testing of composites.

  15. Tradeoff methods in multiobjective insensitive design of airplane control systems

    NASA Technical Reports Server (NTRS)

    Schy, A. A.; Giesy, D. P.

    1984-01-01

    The latest results of an ongoing study of computer-aided design of airplane control systems are given. Constrained minimization algorithms are used, with the design objectives in the constraint vector. The concept of Pareto optimality is briefly reviewed, and it is shown how an experienced designer can use it to find designs which are well balanced in all objectives. The problem of finding designs which are insensitive to uncertainty in system parameters is then discussed, introducing a probabilistic vector definition of sensitivity which is consistent with the deterministic Pareto optimal problem. Insensitivity is important in any practical design, but it is particularly important in the design of feedback control systems, since it is considered to be the most important distinctive property of feedback control. Methods of tradeoff between deterministic and stochastic-insensitive (SI) design are described, and tradeoff design results are presented for the example of a Shuttle lateral stability augmentation system. This example is used because careful studies have been made of the uncertainty in Shuttle aerodynamics. Finally, since accurate statistics of uncertain parameters are usually not available, the effects of crude statistical models on SI designs are examined.
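
    Pareto optimality, as reviewed in the abstract, can be illustrated with a small two-objective example (both objectives minimized); the candidate designs below are made up, not Shuttle data.

```python
# Minimal sketch of Pareto optimality for a two-objective design
# tradeoff, both objectives minimized. Candidate points are
# illustrative, not from the study.

def is_dominated(p, points):
    """p is dominated if some other point is <= in every objective."""
    return any(all(q[i] <= p[i] for i in range(len(p))) and q != p
               for q in points)

designs = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]

# The Pareto front: designs no other design improves in all objectives.
pareto_front = [p for p in designs if not is_dominated(p, designs)]
```

    A designer then picks from the front according to the desired balance between the objectives, which is exactly the tradeoff process the abstract describes.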

  16. A new method of VLSI conform design for MOS cells

    NASA Astrophysics Data System (ADS)

    Schmidt, K. H.; Wach, W.; Mueller-Glaser, K. D.

    An automated method for the design of specialized SSI/LSI-level MOS cells suitable for incorporation in VLSI chips is described. The method uses the symbolic-layout features of the CABBAGE computer program (Hsueh, 1979; De Man et al., 1982), but restricted by a fixed grid system to facilitate compaction procedures. The techniques used are shown to significantly speed the processes of electrical design, layout, design verification, and description for subsequent CAD/CAM application. In the example presented, a 211-transistor, parallel-load, synchronous 4-bit up/down binary counter cell was designed in 9 days, as compared to 30 days for a manually-optimized-layout version and 3 days for a larger, less efficient cell designed by a programmable logic array; the cell areas were 0.36, 0.21, and 0.79 sq mm, respectively. The primary advantage of the method is seen in the extreme ease with which the cell design can be adapted to new parameters or design rules imposed by improvements in technology.

  17. Computer method for design of acoustic liners for turbofan engines

    NASA Technical Reports Server (NTRS)

    Minner, G. L.; Rice, E. J.

    1976-01-01

    A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables. The method does not predict multiple pure tones. A target attenuation spectrum was calculated as the difference between the estimated generation spectrum and a flat annoyance-weighted goal spectrum. The target spectrum was combined with a knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method at present is limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (a result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation is carried through to demonstrate the design procedure, and experimental results presented show good agreement with the calculated results of the method.
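
    The target-spectrum step described above is simple arithmetic: subtract the flat annoyance-weighted goal from the estimated generation spectrum and floor at zero. The frequency bands and levels below are illustrative, not from the report.

```python
# Hypothetical sketch of the target-attenuation step: the target is
# the generated noise spectrum minus a flat goal level, floored at
# zero. All dB levels and frequency bands are illustrative.

generation_db = {500: 95.0, 1000: 102.0, 2000: 108.0, 4000: 104.0}  # Hz: dB
goal_db = 90.0  # flat annoyance-weighted goal

target_attenuation = {freq: max(0.0, level - goal_db)
                      for freq, level in generation_db.items()}
```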

  18. Method for Enzyme Design with Genetically Encoded Unnatural Amino Acids.

    PubMed

    Hu, C; Wang, J

    2016-01-01

    We describe methodologies for the design of artificial enzymes with genetically encoded unnatural amino acids. Genetically encoded unnatural amino acids offer great promise for constructing artificial enzymes with novel activities. In our studies, the design of an artificial enzyme is divided into two steps. First, we consider the unnatural amino acids and the protein scaffold separately. The scaffold is designed by traditional protein design methods. The unnatural amino acids are inspired by natural structures and organic chemistry, and are synthesized by either organic chemistry methods or enzymatic conversion. Given the increasing number of published unnatural amino acids with various functions, we describe an unnatural amino acid toolkit containing metal chelators, redox mediators, and click chemistry reagents. These efforts enable a researcher to search the toolkit for unnatural amino acids appropriate to the study, rather than design and synthesize them from the beginning. In the second step, the model enzyme is optimized by computational methods and directed evolution. Lastly, we describe a general method for evolving an aminoacyl-tRNA synthetase and expressing a protein with the unnatural amino acid incorporated. PMID:27586330

  19. Developing Conceptual Hypersonic Airbreathing Engines Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Ferlemann, Shelly M.; Robinson, Jeffrey S.; Martin, John G.; Leonard, Charles P.; Taylor, Lawrence W.; Kamhawi, Hilmi

    2000-01-01

    Designing a hypersonic vehicle is a complicated process due to the multidisciplinary synergy that is required. The greatest challenge involves propulsion-airframe integration. In the past, a two-dimensional flowpath was generated based on the engine performance required for a proposed mission. A three-dimensional CAD geometry was produced from the two-dimensional flowpath for aerodynamic analysis, structural design, and packaging. The aerodynamics, engine performance, and mass properties are inputs to the vehicle performance tool used to determine whether the mission goals are met. If the mission goals were not met, then a flowpath and vehicle redesign would begin. This design process might have to be performed several times to produce a "closed" vehicle. This paper describes an attempt to design a hypersonic cruise vehicle propulsion flowpath using a Design of Experiments method to reduce the resources necessary to produce a conceptual design with fewer iterations of the design cycle. These methods also allow for more flexible mission analysis and incorporation of additional design constraints at any point. A design system was developed using an object-based software package that would quickly generate each flowpath in the study given the values of the geometric independent variables. These flowpath geometries were put into a hypersonic propulsion code and the engine performance was generated. The propulsion results were loaded into statistical software to produce regression equations that were combined with an aerodynamic database to optimize the flowpath at the vehicle performance level. For this example, the design process was executed twice. The first pass was a cursory look at the independent variables selected, to determine which variables are the most important and to test all of the inputs to the optimization process. The second cycle is a more in-depth study with more cases and higher-order equations representing the design space.

  20. A decentralized linear quadratic control design method for flexible structures

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Craig, Roy R., Jr.

    1990-01-01

    A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the substructural controller synthesis method is that it relieves the computational burden associated with dimensionality. Besides that, the SCS design scheme is also a highly adaptable controller synthesis method for structures with varying configuration, or varying mass

  1. The conditional risk probability-based seawall height design method

    NASA Astrophysics Data System (ADS)

    Yang, Xing; Hu, Xiaodong; Li, Zhiqing

    2015-11-01

    The determination of the required seawall height is usually based on the combination of wind speed (or wave height) and still water level according to a specified return period, e.g., 50-year return period wind speed and 50-year return period still water level. In reality, the two variables are partially correlated, which may lead to over-design (excess cost) of seawall structures. The return period used for the design of a seawall depends on the economy, society, and natural environment of the region, meaning that a specified risk level of overtopping or damage of a seawall structure is usually allowed. The aim of this paper is to present a conditional risk probability-based seawall height design method which incorporates the correlation of the two variables. For purposes of demonstration, wind speeds and water levels collected from Jiangsu, China are analyzed. The results show this method can improve seawall height design accuracy.
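
    The effect of the correlation can be illustrated with a Gaussian-copula Monte Carlo sketch: with positively correlated variables, the probability that two marginal 50-year events occur together exceeds the independence value of (1/50)². The correlation value is illustrative, not fitted to the Jiangsu data.

```python
# Illustrative Monte Carlo sketch (Gaussian copula) of correlated wind
# speed and still water level. rho is an assumed correlation, not an
# estimate from the paper's data.
import math
import random

random.seed(1)
rho = 0.6
n = 200_000
z98 = 2.0537  # approx. standard normal 98th percentile (annual 1/50 event)

joint = 0
for _ in range(n):
    u = random.gauss(0.0, 1.0)
    # v is correlated with u through the Gaussian copula construction.
    v = rho * u + math.sqrt(1.0 - rho**2) * random.gauss(0.0, 1.0)
    if u > z98 and v > z98:
        joint += 1

p_joint = joint / n            # correlated joint exceedance probability
p_independent = 0.02 * 0.02    # what independence would predict
```

    The simulated joint probability comes out several times larger than the independence value, which is why a design method that models the correlation explicitly gives a more accurate required height.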

  2. Mixed methods research design for pragmatic psychoanalytic studies.

    PubMed

    Tillman, Jane G; Clemence, A Jill; Stevens, Jennifer L

    2011-10-01

    Calls for more rigorous psychoanalytic studies have increased over the past decade. The field has been divided by those who assert that psychoanalysis is properly a hermeneutic endeavor and those who see it as a science. A comparable debate is found in research methodology, where qualitative and quantitative methods have often been seen as occupying orthogonal positions. Recently, Mixed Methods Research (MMR) has emerged as a viable "third community" of research, pursuing a pragmatic approach to research endeavors through integrating qualitative and quantitative procedures in a single study design. Mixed Methods Research designs and the terminology associated with this emerging approach are explained, after which the methodology is explored as a potential integrative approach to a psychoanalytic human science. Both qualitative and quantitative research methods are reviewed, as well as how they may be used in Mixed Methods Research to study complex human phenomena. PMID:21880844

  3. Rotordynamics and Design Methods of an Oil-Free Turbocharger

    NASA Technical Reports Server (NTRS)

    Howard, Samuel A.

    1999-01-01

    The feasibility of supporting a turbocharger rotor on air foil bearings is investigated based upon predicted rotordynamic stability, load accommodations, and stress considerations. It is demonstrated that foil bearings offer a plausible replacement for oil-lubricated bearings in diesel truck turbochargers. Also, two different rotor configurations are analyzed and the design is chosen which best optimizes the desired performance characteristics. The method of designing machinery for foil bearing use and the assumptions made are discussed.

  4. Design of large Francis turbine using optimal methods

    NASA Astrophysics Data System (ADS)

    Flores, E.; Bornard, L.; Tomas, L.; Liu, J.; Couston, M.

    2012-11-01

    Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has been especially involved in the development and equipment of the largest power plants in the world: Three Gorges (China - 32×767 MW - 61 to 113 m), Itaipu (Brazil - 20×750 MW - 98.7 to 127 m) and Xiangjiaba (China - 8×812 MW - 82.5 to 113.6 m - in erection). Many new projects are under study to equip new power plants with Francis turbines in order to meet an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using the state of the art in computation methods and the latest technologies in model testing, as well as the maximum feedback from jumbo plants already in operation. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, an automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used thanks to the growing computation capacity of HPC clusters: the intensive use of such optimization methods at the turbine design stage allows very high levels of performance to be reached, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.
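
    The optimization loop described (geometry generator, solver, genetic algorithm) can be sketched in miniature; here a stand-in analytic objective replaces the meshing and Navier-Stokes steps, and all parameters are illustrative.

```python
# Minimal sketch of a genetic-algorithm design loop. The objective is
# a stand-in for the solver-evaluated turbine performance (minimized);
# population size, mutation scale, etc. are illustrative.
import random

random.seed(0)

def objective(x):
    """Stand-in for a CFD-evaluated performance metric (minimize)."""
    return sum((xi - 0.5) ** 2 for xi in x)

def evolve(pop_size=30, n_params=4, generations=60):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]        # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)
            child = a[:cut] + b[cut:]          # one-point crossover
            i = random.randrange(n_params)
            child[i] += random.gauss(0.0, 0.05)  # Gaussian mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

best = evolve()
```

    In the real design system each `objective` call would trigger automatic meshing and a Navier-Stokes run, which is what makes HPC capacity the enabling factor.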

  5. A finite-difference method for transonic airfoil design.

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Klineberg, J. M.

    1972-01-01

    This paper describes an inverse method for designing transonic airfoil sections or for modifying existing profiles. Mixed finite-difference procedures are applied to the equations of transonic small disturbance theory to determine the airfoil shape corresponding to a given surface pressure distribution. The equations are solved for the velocity components in the physical domain and flows with embedded shock waves can be calculated. To facilitate airfoil design, the method allows alternating between inverse and direct calculations to obtain a profile shape that satisfies given geometric constraints. Examples are shown of the application of the technique to improve the performance of several lifting airfoil sections. The extension of the method to three dimensions for designing supercritical wings is also indicated.

  6. Improved method for transonic airfoil design-by-optimization

    NASA Technical Reports Server (NTRS)

    Kennelly, R. A., Jr.

    1983-01-01

    An improved method for use of optimization techniques in transonic airfoil design is demonstrated. FLO6QNM incorporates a modified quasi-Newton optimization package, and is shown to be more reliable and efficient than the method developed previously at NASA-Ames, which used the COPES/CONMIN optimization program. The design codes are compared on a series of test cases with known solutions, and the effects of problem scaling, proximity of initial point to solution, and objective function precision are studied. In contrast to the older method, well-converged solutions are shown to be attainable in the context of engineering design using computational fluid dynamics tools, a new result. The improvements are due to better performance by the optimization routine and to the use of problem-adaptive finite difference step sizes for gradient evaluation.

  8. Computational methods of robust controller design for aerodynamic flutter suppression

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1981-01-01

    The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth-order random examples. A literature review of robust controller design methods follows, which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
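
    The idea of iteratively solving the Riccati equation can be illustrated on the scalar continuous-time LQR problem, where each step solves a Lyapunov equation for the current gain (this is Kleinman's iteration, one standard form of Riccati iteration; the system values are illustrative).

```python
# Sketch of Riccati iteration (Kleinman's method) for the scalar
# continuous-time algebraic Riccati equation
#     2*a*p - (b**2/r)*p**2 + q = 0.
# System values a, b, q, r are illustrative.
import math

a, b, q, r = 1.0, 1.0, 1.0, 1.0

def riccati_iteration(k0, tol=1e-12, max_iter=50):
    """Each step: scalar Lyapunov solve for p under gain k, then k = b*p/r."""
    k = k0  # initial gain; must be stabilizing, i.e. a - b*k < 0
    for _ in range(max_iter):
        p = (q + r * k * k) / (2.0 * (b * k - a))  # scalar Lyapunov solve
        k_new = b * p / r
        if abs(k_new - k) < tol:
            return p
        k = k_new
    return p

p_iter = riccati_iteration(k0=2.0)
# Closed-form stabilizing solution of the scalar quadratic, for comparison.
p_exact = (r / b**2) * (a + math.sqrt(a**2 + q * b**2 / r))
```

    The iteration converges quadratically from any stabilizing initial gain, which is what makes it attractive for the higher-order examples the abstract mentions.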

  9. A method for the probabilistic design assessment of composite structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1994-01-01

    A formal procedure for the probabilistic design assessment of a composite structure is described. The uncertainties in all aspects of a composite structure (constituent material properties, fabrication variables, structural geometry, service environments, etc.), which result in the uncertain behavior in the composite structural responses, are included in the assessment. The probabilistic assessment consists of design criteria, modeling of composite structures and uncertainties, simulation methods, and the decision making process. A sample case is presented to illustrate the formal procedure and to demonstrate that composite structural designs can be probabilistically assessed with accuracy and efficiency.

  10. Structure design: an artificial intelligence-based method for the design of molecules under geometrical constraints.

    PubMed

    Cohen, A A; Shatzmiller, S E

    1993-09-01

    This study presents an algorithm that implements artificial-intelligence techniques for automated, site-directed drug design. The aim of the method is to link two or more predetermined functional groups into a sensible molecular structure. The proposed design process mimics the classical manual method, in which the drug designer sits in front of the computer screen and, with the aid of computer graphics, attempts to design the new drug. The key principle of the algorithm is therefore the parameterization of criteria that affect the decision-making process carried out by the drug designer. This parameterization is based on the generation of weighting factors that reflect the knowledge and knowledge-based intuition of the drug designer, and thus add further rationalization to the drug design process. The proposed algorithm has been shown to yield a large variety of different structures, of which the drug designer may choose the most sensible. Performance tests indicate that, with the proper set of parameters, the method generates a new structure within a short time. PMID:8110662

  11. Inverse design of airfoils using a flexible membrane method

    NASA Astrophysics Data System (ADS)

    Thinsurat, Kamon

The Modified Garabedian-McFadden (MGM) method is used to inversely design airfoils. A Finite Difference Method (FDM) for non-uniform grids was developed to discretize the MGM equation for numerical solution; its advantage is that it can be applied flexibly to unstructured airfoil grids. The commercial software FLUENT is used as the flow solver. Several conditions are set in FLUENT, namely subsonic inviscid flow, subsonic viscous flow, transonic inviscid flow, and transonic viscous flow, to test the inverse design code under each condition. A moving-grid program creates a mesh for each new airfoil before the mesh is imported into FLUENT for flow analysis. For validation, an iterative process is used so that the Cp distribution of the initial airfoil, the NACA0011, converges to the Cp distribution of the target airfoil, the NACA2315, for the subsonic inviscid case at M=0.2. Three other cases were carried out to validate the code. After the code validations, the inverse design method was used to design a shock-free airfoil in the transonic condition and a separation-free airfoil at a high angle of attack in the subsonic condition.
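The non-uniform-grid finite differences highlighted in this abstract can be illustrated with the standard three-point stencils used to discretize an equation such as the MGM membrane equation on unevenly spaced surface points. A minimal Python sketch; the function names and sample spacings are ours, not from the thesis:

```python
def d1_nonuniform(f_m, f_0, f_p, h1, h2):
    """First derivative at the middle of three samples with uneven
    spacings h1 = x0 - x_minus and h2 = x_plus - x0."""
    return (h1 ** 2 * f_p - h2 ** 2 * f_m + (h2 ** 2 - h1 ** 2) * f_0) / (
        h1 * h2 * (h1 + h2))

def d2_nonuniform(f_m, f_0, f_p, h1, h2):
    """Second derivative at the middle point; exact for quadratics."""
    return 2.0 * (h2 * f_m - (h1 + h2) * f_0 + h1 * f_p) / (
        h1 * h2 * (h1 + h2))

# Sample f(x) = x^2 on the non-uniform points x = 0.0, 0.3, 1.0:
df = d1_nonuniform(0.0, 0.09, 1.0, 0.3, 0.7)   # exact value 0.6
d2f = d2_nonuniform(0.0, 0.09, 1.0, 0.3, 0.7)  # exact value 2.0
```

Both stencils recover the exact derivative of a quadratic, which is what makes them usable on the stretched, unstructured airfoil grids the abstract mentions.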

  12. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Phase 1

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas

    1998-01-01

    The NASA Langley Multidisciplinary Design Optimization (MDO) method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. This report documents all computational experiments conducted in Phase I of the study. This report is a companion to the paper titled Initial Results of an MDO Method Evaluation Study by N. M. Alexandrov and S. Kodiyalam (AIAA-98-4884).

  13. Exploration of Advanced Probabilistic and Stochastic Design Methods

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.

    2003-01-01

The primary objective of the three-year research effort was to explore advanced, non-deterministic aerospace system design methods that may have relevance to designers and analysts. The research pursued emerging areas in design methodology and leveraged current fundamental research in design decision-making, probabilistic modeling, and optimization. The specific focus of the three-year investigation was on methods to identify and analyze emerging aircraft technologies in a consistent and complete manner, and on means to make optimal decisions based on this knowledge in a probabilistic environment. The research efforts were classified into two main areas. First, Task A of the grant had the objective of conducting research into the relative merits of possible approaches that account for both multiple criteria and uncertainty in design decision-making. In particular, in the final year of research, the focus was on comparing and contrasting three methods: the Joint Probabilistic Decision-Making (JPDM) technique, Physical Programming, and Dempster-Shafer (D-S) theory. The next element of the research, contained in Task B, focused on exploration of the Technology Identification, Evaluation, and Selection (TIES) methodology developed at ASDL, especially with regard to identifying research needs in the baseline method through implementation exercises. The end result of Task B was documentation of the evolution of the method over time and a technology transfer to the sponsor, such that an initial capability for execution could be obtained by the sponsor. Specifically, the result of year 3 efforts was the creation of a detailed tutorial for implementing the TIES method. Within the tutorial package, templates and detailed examples were created for learning and understanding the details of each step. For both research tasks, sample files and

  14. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
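The contrast between differentiating the simulation scheme and solving a continuous sensitivity equation can be seen on a one-line model problem. The sketch below is our own toy example, not the paper's CFD setting: it forward-integrates du/dt = -a*u together with its sensitivity s = du/da, which satisfies the sensitivity equation ds/dt = -a*s - u:

```python
import math

def simulate_with_sensitivity(a, t_end=1.0, n=1000):
    """Forward-Euler integration of du/dt = -a*u, u(0) = 1, coupled with
    its sensitivity equation ds/dt = -a*s - u, s(0) = 0 (s = du/da)."""
    dt = t_end / n
    u, s = 1.0, 0.0
    for _ in range(n):
        # Both state and sensitivity advance with the same scheme.
        u, s = u + dt * (-a * u), s + dt * (-a * s - u)
    return u, s

u, s = simulate_with_sensitivity(2.0)
# Analytic solution at t = 1: u = exp(-2), s = du/da = -exp(-2)
```

Reusing one scheme for both the state and the sensitivity mirrors the computational advantage noted in the abstract; the discrete gradient is only asymptotically consistent, i.e. it approaches the true derivative as the step size shrinks.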

  15. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
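As a concrete illustration of the fractional-factorial bookkeeping described above, the sketch below screens three 2-level factors with a Taguchi L4 orthogonal array; the responses are invented classroom numbers, not data from the course:

```python
import math

# L4 orthogonal array: 4 trials screen three 2-level factors (A, B, C).
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
# Hypothetical measured responses for each trial (larger is better).
y = [20.0, 24.0, 30.0, 26.0]

def sn_larger_is_better(value):
    """Taguchi signal-to-noise ratio for a larger-is-better response;
    for a single observation it reduces to 20*log10(y)."""
    return -10.0 * math.log10(1.0 / value ** 2)

sn = [sn_larger_is_better(v) for v in y]

def main_effect(factor):
    """Mean S/N at level 2 minus mean S/N at level 1 for one factor."""
    levels = {1: [], 2: []}
    for row, s in zip(L4, sn):
        levels[row[factor]].append(s)
    return sum(levels[2]) / 2 - sum(levels[1]) / 2

effects = [main_effect(f) for f in range(3)]  # factor A dominates here
```

Note the trade-off stated in the abstract: with three factors in four runs there are zero degrees of freedom left over, so main effects are aliased with any interactions.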

  16. Function combined method for design innovation of children's bike

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoli; Qiu, Tingting; Chen, Huijuan

    2013-03-01

As children mature, children's bike products on the market develop at the same time, and usage conditions are frequently updated. Certain problems arise in use, such as overlapping product cycles, repeated functions, and short life cycles, which go against the principles of energy conservation and the intensive design concept of environmental protection. In this paper, a rational multi-function design method based on functional superposition, transformation, and technical implementation is proposed. An organic combination of a frog-style scooter and a children's tricycle is developed using this multi-function method. From the ergonomic perspective, the paper elaborates on the body dimensions of children aged 5 to 12 and extracts the data needed for a multi-function children's bike that can be used for both gliding and riding. By inverting the body, parts can be interchanged between the handles and the pedals of the bike. Finally, the paper provides a detailed analysis of the components and structural design, body material, and processing technology of the bike. The study provides an effective industrial product innovation design method that solves the identified bicycle problems, extends product function, improves the product's market situation, and enhances energy saving while implementing intensive product development effectively.

  17. Novel kind of DSP design method based on IP core

    NASA Astrophysics Data System (ADS)

    Yu, Qiaoyan; Liu, Peng; Wang, Weidong; Hong, Xiang; Chen, Jicheng; Yuan, Jianzhong; Chen, Keming

    2004-04-01

With pressure from design-productivity demands and various special applications, the original DSP design method can no longer keep up with the required speed, and a novel design method is needed urgently. Intellectual Property (IP) reuse is a trend in DSP design, but simple plug-and-play IP-core approaches almost never work. Therefore, appropriate control strategies are needed to connect all the IP cores used and coordinate the whole DSP. This paper presents a new DSP design procedure, which draws on System-on-a-Chip practice, and introduces a novel control strategy named DWC to implement a DSP based on IP cores. The most important part of this control strategy, the pipeline control unit (PCU), is described in detail. Because a great number of data hazards occur in most computation-intensive scientific applications, a new, effective algorithm for checking data hazards is employed in the PCU. Following this strategy, the design of a general- or special-purpose DSP can be finished in a shorter time, and the DSP has the potential to improve performance with little modification of its basic function units. The DWC strategy has been implemented successfully in a 16-bit fixed-point DSP.

  18. System Synthesis in Preliminary Aircraft Design Using Statistical Methods

    NASA Technical Reports Server (NTRS)

    DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.

    1996-01-01

    This paper documents an approach to conceptual and early preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically Design of Experiments (DOE) and Response Surface Methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than generally available in early design stages. Next, a regression equation for an Overall Evaluation Criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a High Speed Civil Transport. Fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses to the conceptual aircraft synthesis and provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).

  20. A Simple Method for High-Lift Propeller Conceptual Design

    NASA Technical Reports Server (NTRS)

    Patterson, Michael; Borer, Nick; German, Brian

    2016-01-01

    In this paper, we present a simple method for designing propellers that are placed upstream of the leading edge of a wing in order to augment lift. Because the primary purpose of these "high-lift propellers" is to increase lift rather than produce thrust, these props are best viewed as a form of high-lift device; consequently, they should be designed differently than traditional propellers. We present a theory that describes how these props can be designed to provide a relatively uniform axial velocity increase, which is hypothesized to be advantageous for lift augmentation based on a literature survey. Computational modeling indicates that such propellers can generate the same average induced axial velocity while consuming less power and producing less thrust than conventional propeller designs. For an example problem based on specifications for NASA's Scalable Convergent Electric Propulsion Technology and Operations Research (SCEPTOR) flight demonstrator, a propeller designed with the new method requires approximately 15% less power and produces approximately 11% less thrust than one designed for minimum induced loss. Higher-order modeling and/or wind tunnel testing are needed to verify the predicted performance.
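The "relatively uniform axial velocity increase" targeted by the method can be estimated at the disk-average level from classical actuator-disk momentum theory. The following sketch is our simplification; the numbers are illustrative, not the SCEPTOR values:

```python
import math

def induced_axial_velocity(thrust, v_inf, rho, radius):
    """Induced axial velocity v at an actuator disk from momentum
    theory, T = 2*rho*A*(V + v)*v, solved for the positive root."""
    area = math.pi * radius ** 2
    return 0.5 * (-v_inf + math.sqrt(v_inf ** 2 + 2.0 * thrust / (rho * area)))

# Illustrative numbers for a small wing-mounted high-lift propeller:
v = induced_axial_velocity(thrust=200.0, v_inf=25.0, rho=1.225, radius=0.3)
```

Momentum theory only fixes the disk-averaged velocity; the paper's contribution concerns how the radial loading is shaped so that the induced velocity is nearly uniform across the blade, which a simple disk model cannot capture.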

  1. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  2. New Methods and Transducer Designs for Ultrasonic Diagnostics and Therapy

    NASA Astrophysics Data System (ADS)

    Rybyanets, A. N.; Naumenko, A. A.; Sapozhnikov, O. A.; Khokhlova, V. A.

Recent advances in the field of physical acoustics, imaging technologies, piezoelectric materials, and ultrasonic transducer design have led to the emergence of novel methods and apparatus for ultrasonic diagnostics, therapy and body aesthetics. The paper presents the results of the development and experimental study of different high intensity focused ultrasound (HIFU) transducers. Technological peculiarities of HIFU transducer design as well as theoretical and numerical models of such transducers and the corresponding HIFU fields are discussed. Several HIFU transducers of different designs have been fabricated using different advanced piezoelectric materials. Acoustic field measurements for those transducers have been performed using a calibrated fiber-optic hydrophone and an ultrasonic measurement system (UMS). The results of ex vivo experiments with different tissues as well as in vivo experiments with blood vessels are presented that prove the efficacy, safety and selectivity of the developed HIFU transducers and methods.

  3. Impact design methods for ceramic components in gas turbine engines

    NASA Technical Reports Server (NTRS)

    Song, J.; Cuccio, J.; Kington, H.

    1991-01-01

    Methods currently under development to design ceramic turbine components with improved impact resistance are presented. Two different modes of impact damage are identified and characterized, i.e., structural damage and local damage. The entire computation is incorporated into the EPIC computer code. Model capability is demonstrated by simulating instrumented plate impact and particle impact tests.

  4. Using Propensity Score Methods to Approximate Factorial Experimental Designs

    ERIC Educational Resources Information Center

    Dong, Nianbo

    2011-01-01

    The purpose of this study is through Monte Carlo simulation to compare several propensity score methods in approximating factorial experimental design and identify best approaches in reducing bias and mean square error of parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…

  5. 14 CFR 161.9 - Designation of noise description methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

... and methods prescribed under appendix A of 14 CFR part 150; and (b) Use of computer models to create noise contours must be in accordance with the criteria prescribed under appendix A of 14 CFR part 150. ...

  6. 14 CFR 161.9 - Designation of noise description methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

... and methods prescribed under appendix A of 14 CFR part 150; and (b) Use of computer models to create noise contours must be in accordance with the criteria prescribed under appendix A of 14 CFR part 150. ...

  7. 14 CFR 161.9 - Designation of noise description methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

... and methods prescribed under appendix A of 14 CFR part 150; and (b) Use of computer models to create noise contours must be in accordance with the criteria prescribed under appendix A of 14 CFR part 150. ...

  8. Analytical methods of electrode design for a relativistic electron gun

    SciTech Connect

    Caporaso, G.J.; Cole, A.G.; Boyd, J.K.

    1985-05-09

    The standard paraxial ray equation method for the design of electrodes for an electrostatically focused gun is extended to include relativistic effects and the effects of the beam's azimuthal magnetic field. Solutions for parallel and converging beams are obtained and the predicted currents are compared against those measured on the High Brightness Test Stand. 4 refs., 2 figs.

  9. Designs and Methods in School Improvement Research: A Systematic Review

    ERIC Educational Resources Information Center

    Feldhoff, Tobias; Radisch, Falk; Bischof, Linda Marie

    2016-01-01

    Purpose: The purpose of this paper is to focus on challenges faced by longitudinal quantitative analyses of school improvement processes and offers a systematic literature review of current papers that use longitudinal analyses. In this context, the authors assessed designs and methods that are used to analyze the relation between school…

  10. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
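The Fisher-information-based criteria compared in the paper can be made concrete on a tiny example. The sketch below is our own toy model with hypothetical sampling schedules: it builds the 2x2 FIM for an exponential decay y(t) = A*exp(-k*t) and scores two candidate sampling distributions under the D-optimal (determinant) and E-optimal (smallest-eigenvalue) criteria:

```python
import math

def fisher_information(times, A=1.0, k=1.0, sigma=0.1):
    """2x2 FIM for y(t) = A*exp(-k*t) with parameters (A, k) and
    i.i.d. Gaussian observation noise of standard deviation sigma."""
    F = [[0.0, 0.0], [0.0, 0.0]]
    for t in times:
        g = (math.exp(-k * t), -A * t * math.exp(-k * t))  # (dy/dA, dy/dk)
        for i in range(2):
            for j in range(2):
                F[i][j] += g[i] * g[j] / sigma ** 2
    return F

def d_criterion(F):
    """D-optimality score: det(F), larger is better."""
    return F[0][0] * F[1][1] - F[0][1] * F[1][0]

def e_criterion(F):
    """E-optimality score: smallest eigenvalue of F, larger is better."""
    tr, det = F[0][0] + F[1][1], d_criterion(F)
    return 0.5 * (tr - math.sqrt(tr ** 2 - 4.0 * det))

clustered = [0.10, 0.15, 0.20, 0.25]  # all samples early in the decay
spread = [0.1, 0.8, 1.5, 2.5]         # samples spread over the decay
```

Both criteria prefer the spread schedule, which separates information about the amplitude A (early samples) from information about the rate k (later samples); the paper's SE-optimal design instead scores the parameter standard errors derived from the inverse FIM directly.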

  11. Polypharmacology: in silico methods of ligand design and development.

    PubMed

    McKie, Samuel A

    2016-04-01

How to design a ligand to bind multiple targets, rather than a single target, is the focus of this review. Rational polypharmacology draws on knowledge that is both broad-ranging and hierarchical. Computer-aided multitarget ligand design methods are described according to their nested knowledge level. Ligand-only and then receptor-ligand strategies are described first, followed by the metabolic network viewpoint. Subsequently, strategies that view infectious diseases as multigenomic targets are discussed, and finally the disease-level interpretation of medicinal therapy is considered. As yet there is no consensus on how best to proceed in designing a multitarget ligand. The current methodologies are brought together in an attempt to give a practical overview of how polypharmacology design might best be initiated. PMID:27105127

  12. Guidance for using mixed methods design in nursing practice research.

    PubMed

    Chiang-Hanisko, Lenny; Newman, David; Dyess, Susan; Piyakong, Duangporn; Liehr, Patricia

    2016-08-01

The mixed methods approach purposefully combines quantitative and qualitative techniques, enabling a multi-faceted understanding of nursing phenomena. The purpose of this article is to introduce three mixed methods designs (parallel; sequential; conversion) and highlight the interpretive processes that occur with the synthesis of qualitative and quantitative findings. Real-world examples of research studies conducted by the authors demonstrate the processes leading to the merger of data. The examples include research questions, data collection procedures, and analysis with a focus on synthesizing findings. Based on experience with mixed methods studies, the authors introduce two synthesis patterns (complementary; contrasting), considering application for practice and implications for research. PMID:27397810

  13. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  14. A robust inverse inviscid method for airfoil design

    NASA Astrophysics Data System (ADS)

    Chaviaropoulos, P.; Dedoussis, V.; Papailiou, K. D.

An irrotational inviscid compressible inverse design method for two-dimensional airfoil profiles is described. The method is based on the potential-streamfunction formulation, in which the physical space on which the boundaries of the airfoil are sought is mapped onto the (phi, psi) space via a body-fitted coordinate transformation. A novel procedure based on differential-geometry arguments is employed to derive the governing equations for the inverse problem, by requiring the curvature of the flat 2-D Euclidean space to be zero. An auxiliary coordinate transformation permits the definition of C-type computational grids on the (phi, psi) plane, resulting in a more accurate description of the leading-edge region. Geometry is determined by integrating the Frenet equations along the grid lines. To validate the method, inverse calculation results are compared with direct (`reproduction') calculation results. The design procedure for a new airfoil shape is also presented.

  15. A Mixed Methods Investigation of Mixed Methods Sampling Designs in Social and Health Science Research

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.

    2007-01-01

    A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…

  16. Tuning Parameters in Heuristics by Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Arin, Arif; Rabadi, Ghaith; Unal, Resit

    2010-01-01

With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions by exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into finding the best parameter setting for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single-machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models, including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design, and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, preliminary optimal solutions for multiple instances were found efficiently.
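For the simplest of the designs listed, a 2-level full factorial, the regression step collapses to contrast averages. The sketch below tunes two coded GA parameters; the tardiness numbers are hypothetical, not the benchmark results:

```python
import itertools

# Coded 2^2 full-factorial design for two GA parameters,
# e.g. x1 = mutation rate, x2 = population size, at levels -1/+1.
design = list(itertools.product((-1, 1), repeat=2))
# Hypothetical mean total weighted tardiness per design point (lower is better).
y = {(-1, -1): 412.0, (-1, 1): 380.0, (1, -1): 398.0, (1, 1): 335.0}

# In an orthogonal 2^k design the least-squares regression coefficients
# reduce to simple contrast averages.
n = len(design)
b0 = sum(y.values()) / n
b1 = sum(x1 * y[(x1, x2)] for x1, x2 in design) / n
b2 = sum(x2 * y[(x1, x2)] for x1, x2 in design) / n
b12 = sum(x1 * x2 * y[(x1, x2)] for x1, x2 in design) / n

def predict(x1, x2):
    """Fitted regression model with main effects and interaction."""
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2

best = min(design, key=lambda p: predict(*p))  # best coded setting: (1, 1)
```

Unlike OFAT tuning, the interaction coefficient b12 is estimated explicitly, which is exactly the advantage of DOE-based tuning the abstract emphasizes.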

  17. Non-Contact Electromagnetic Exciter Design with Linear Control Method

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Xiong, Xianzhi; Xu, Hua

    2016-04-01

A non-contact force actuator is necessary for studying the dynamic performance of a high-speed spindle system owing to its high-speed operating conditions. A non-contact electromagnetic exciter is designed for identifying the dynamic coefficients of journal bearings in high-speed grinding spindles, and a linear force control method is developed based on a PID controller. The influence of the amplitude and frequency of the current, misalignment, and rotational speed on the magnetic field and excitation force is investigated using two-dimensional finite element analysis. The electromagnetic excitation force is measured with auxiliary coils and calibrated by load cells, and the design is validated by the experimental results. Theoretical and experimental investigations show that the proposed design can accurately generate a linear excitation force with sufficiently large amplitude and a high signal-to-noise ratio. Moreover, fluctuations in force amplitude are greatly reduced with the designed linear control method, even when the air gap changes due to rotor vibration at high speed. In addition, it is possible to apply various types of excitation (constant, synchronous, and non-synchronous forces) based on the proposed linear control method. The exciter can be used as a linear force excitation and control system for studying the dynamic performance of different high-speed rotor-bearing systems.
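A force control loop of the kind described can be sketched as a discrete PID controller driving a first-order actuator model. Everything below is a generic illustration; the gains, time constant, and 5 N setpoint are ours, not the paper's:

```python
class PID:
    """Discrete parallel-form PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Regulate a first-order force actuator dF/dt = (u - F)/tau toward 5 N.
dt, tau = 1e-3, 0.01
pid = PID(kp=2.0, ki=5.0, kd=0.0, dt=dt)
force = 0.0
for _ in range(5000):  # 5 s of simulated time
    u = pid.update(5.0, force)
    force += dt * (u - force) / tau
```

The integral term drives the steady-state force error to zero, which helps keep the delivered force tracking the command even as the plant gain drifts, e.g. with air-gap changes from rotor vibration.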

  18. GAMMA-RAY BLAZARS NEAR EQUIPARTITION AND THE ORIGIN OF THE GeV SPECTRAL BREAK IN 3C 454.3

    SciTech Connect

    Cerruti, Matteo; Dermer, Charles D.; Lott, Benoit

    2013-07-01

    Observations performed with the Fermi-LAT telescope have revealed the presence of a spectral break in the GeV spectrum of flat-spectrum radio quasars (FSRQs) and other low- and intermediate-synchrotron peaked blazars. We propose that this feature can be explained by Compton scattering of broad-line region photons by a non-thermal population of electrons described by a log-parabolic function. We consider in particular a scenario in which the energy densities of particles, magnetic field, and soft photons in the emitting region are close to equipartition. We show that this model can satisfactorily account for the overall spectral energy distribution of the FSRQ 3C 454.3, reproducing the GeV spectral cutoff due to Klein-Nishina effects and a curving electron distribution.

  19. Material Design, Selection, and Manufacturing Methods for System Sustainment

    SciTech Connect

    David Sowder, Jim Lula, Curtis Marshall

    2010-02-18

This paper describes a material selection and validation process proven successful for manufacturing high-reliability, long-life products. The National Secure Manufacturing Center business unit of the Kansas City Plant (herein called KCP) designs and manufactures complex electrical and mechanical components used in extreme environments. The material manufacturing heritage is founded in the systems design-to-manufacturing practices that support the U.S. Department of Energy's National Nuclear Security Administration (DOE/NNSA). Materials engineers at KCP work with the systems designers to recommend materials, develop test methods, perform analytical analysis of test data, define cradle-to-grave needs, and present final selection and fielding. The KCP materials engineers typically maintain cost control by utilizing commercial products when possible, but have the resources to develop and produce unique formulations as necessary. This approach is currently being used to mature technologies for manufacturing materials with improved characteristics using nano-composite fillers that will enhance system design and production. For some products the engineers plan and carry out science-based life-cycle material surveillance processes. Recent examples of the approach include refurbished manufacturing of high-voltage power supplies for cockpit displays in operational aircraft; dry-film lubricant application to improve bearing life for guided-munition gyroscope gimbals; ceramic substrate design for electrical circuit manufacturing; and tailored polymeric materials for various systems. These examples show evidence of KCP's concurrent design-to-manufacturing techniques used to achieve system solutions that satisfy or exceed demanding requirements.

  20. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K.

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  1. Parameter constraints in a near-equipartition model with multifrequency NuSTAR, Swift, and Fermi-LAT data from 3C 279

    NASA Astrophysics Data System (ADS)

    Yan, Dahai; Zhang, Li; Zhang, Shuang-Nan

    2015-12-01

    Precise spectra of 3C 279 in the 0.5-70 keV range, obtained during two epochs of Swift and NuSTAR observations, are analysed using a near-equipartition model. We apply a one-zone leptonic model with a three-parameter log-parabola electron energy distribution to fit the Swift and NuSTAR X-ray data, as well as simultaneous optical and Fermi-LAT gamma-ray data. The Markov chain Monte Carlo technique is used to search the high-dimensional parameter space and evaluate the uncertainties on model parameters. We show that the two spectra can be successfully fitted under near-equipartition conditions, defined by the ratio of the energy density of relativistic electrons to that of the magnetic field, ζe, being close to unity. In both spectra, the observed X-rays are dominated by synchrotron self-Compton photons, and the observed gamma-rays are dominated by Compton scattering of external infrared photons from a surrounding dusty torus. Model parameters are well constrained. From the low state to the high state, both the curvature parameter of the log-parabola electron distribution and the synchrotron peak frequency significantly increase. The derived magnetic fields in the two states are nearly identical (˜1 G), but the Doppler factor in the high state is larger than that in the low state (˜28 versus ˜18). We infer that the gamma-ray emission takes place outside the broad-line region, at ≳0.1 pc from the black hole, but within the dusty torus. Implications for 3C 279 as a source of high-energy cosmic rays are discussed.
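    The Markov chain Monte Carlo search mentioned above can be illustrated with a minimal Metropolis sampler. This sketch fits a toy linear model rather than the paper's log-parabola SED model; the synthetic data, implicit flat priors, and proposal step size are all assumptions for illustration.

    ```python
    import math
    import random

    random.seed(42)

    # Toy "observed" data: y = a*x + b with Gaussian noise. In the paper the
    # model is a log-parabola SED; a line keeps this sketch short.
    true_a, true_b, sigma = 2.0, 1.0, 0.5
    xs = [0.1 * i for i in range(20)]
    ys = [true_a * x + true_b + random.gauss(0, sigma) for x in xs]

    def log_likelihood(a, b):
        # Gaussian log-likelihood, up to an additive constant
        return -0.5 * sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / sigma ** 2

    def metropolis(n_steps=30000, step=0.1):
        a, b = 0.0, 0.0                       # arbitrary starting point
        ll = log_likelihood(a, b)
        samples = []
        for _ in range(n_steps):
            a_new = a + random.gauss(0, step)  # symmetric random-walk proposal
            b_new = b + random.gauss(0, step)
            ll_new = log_likelihood(a_new, b_new)
            if math.log(random.random()) < ll_new - ll:  # Metropolis accept/reject
                a, b, ll = a_new, b_new, ll_new
            samples.append((a, b))
        return samples[n_steps // 2:]          # discard burn-in

    post = metropolis()
    a_mean = sum(s[0] for s in post) / len(post)
    b_mean = sum(s[1] for s in post) / len(post)
    ```

    The spread of the retained samples, not just their mean, is what provides the parameter uncertainties quoted in studies of this kind.
    
    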

  2. Online Guidance Law of Missile Using Multiple Design Point Method

    NASA Astrophysics Data System (ADS)

    Yamaoka, Seiji; Ueno, Seiya

    This paper deals with the design procedure of an online guidance law for future missiles that are required to have agile maneuverability. For this purpose, the authors propose to mount high-power side-thrusters on a missile. The guidance law for such missiles is discussed from the point of view of optimal control theory. A minimum-time problem is solved for the approximated system; the necessary conditions of the optimal solution show that bang-bang control is the optimal input. Feedback guidance without iterative calculation is useful for actual systems. The multiple design point method is applied to design the feedback gains and feedforward inputs of the guidance law. The numerical results show the good performance of the proposed guidance law.
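    The bang-bang structure of the minimum-time solution can be sketched for the classic double-integrator model (position driven by a bounded thrust acceleration). The switching-curve feedback below is the textbook minimum-time law for that simplified system, not the authors' guidance law; the thrust limit and tolerances are assumed values.

    ```python
    # Double integrator: x' = v, v' = u, with |u| <= umax.
    def bang_bang(x, v, umax=1.0):
        # Switching function: s = 0 on the minimum-time braking parabola.
        s = x + v * abs(v) / (2.0 * umax)
        if s > 1e-9:
            return -umax
        if s < -1e-9:
            return umax
        # On the switching curve: brake toward the origin.
        return -umax if v > 0 else (umax if v < 0 else 0.0)

    def simulate(x0, v0, dt=1e-3, t_max=20.0, umax=1.0):
        x, v, t = x0, v0, 0.0
        while t < t_max and (abs(x) > 1e-2 or abs(v) > 1e-2):
            u = bang_bang(x, v, umax)
            x += v * dt
            v += u * dt
            t += dt
        return x, v, t

    # Rest-to-rest maneuver over 2 units of distance; the analytic minimum
    # time is 2*sqrt(2) ~ 2.83, accelerating half way and braking half way.
    xf, vf, tf = simulate(2.0, 0.0)
    ```

    The single switch from full thrust to full braking is exactly the bang-bang behavior derived from the necessary conditions in the abstract.
    
    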

  3. COMPSIZE - PRELIMINARY DESIGN METHOD FOR FIBER REINFORCED COMPOSITE STRUCTURES

    NASA Technical Reports Server (NTRS)

    Eastlake, C. N.

    1994-01-01

    The Composite Structure Preliminary Sizing program, COMPSIZE, is an analytical tool which structural designers can use when doing approximate stress analysis to select or verify preliminary sizing choices for composite structural members. It is useful in the beginning stages of design concept definition, when it is helpful to have quick and convenient approximate stress analysis tools available so that a wide variety of structural configurations can be sketched out and checked for feasibility. At this stage of the design process the stress/strain analysis does not need to be particularly accurate because any configurations tentatively defined as feasible will later be analyzed in detail by stress analysis specialists. The emphasis is on fast, user-friendly methods so that rough but technically sound evaluation of a broad variety of conceptual designs can be accomplished. Analysis equations used are, in most cases, widely known basic structural analysis methods. All the equations used in this program assume elastic deformation only. The default material selection is intermediate strength graphite/epoxy laid up in a quasi-isotropic laminate. A general flat laminate analysis subroutine is included for analyzing arbitrary laminates. However, COMPSIZE should be sufficient for most users to presume a quasi-isotropic layup and use the familiar basic structural analysis methods for isotropic materials, after estimating an appropriate elastic modulus. Homogeneous materials can be analyzed as simplified cases. The COMPSIZE program is written in IBM BASICA. The program format is interactive. It was designed on an IBM Personal Computer operating under DOS with a central memory requirement of approximately 128K. It has been implemented on an IBM compatible with GW-BASIC under DOS 3.2. COMPSIZE was developed in 1985.
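    The kind of quick feasibility check COMPSIZE supports can be sketched as follows: estimate an effective isotropic modulus for a quasi-isotropic layup, then apply familiar isotropic formulas. The 3/8-5/8 modulus rule of thumb and all material and section values below are assumptions for illustration, not taken from the program itself.

    ```python
    import math

    # Rule-of-thumb in-plane modulus of a quasi-isotropic laminate
    # (an approximation often quoted for fibre-dominated laminates).
    def quasi_isotropic_modulus(E1, E2):
        return 0.375 * E1 + 0.625 * E2

    # Basic isotropic checks of the kind the abstract describes:
    def axial_stress(P, A):
        return P / A

    def euler_buckling_load(E, I, L):
        # Pinned-pinned Euler column buckling load
        return math.pi ** 2 * E * I / L ** 2

    E1, E2 = 140e9, 10e9                     # Pa, assumed graphite/epoxy ply moduli
    E_qi = quasi_isotropic_modulus(E1, E2)   # effective laminate modulus

    # 25 mm x 2 mm rectangular strut, 0.5 m long (assumed geometry)
    A = 0.025 * 0.002
    I = 0.025 * 0.002 ** 3 / 12
    P_cr = euler_buckling_load(E_qi, I, 0.5)
    ```

    A configuration flagged feasible by such rough elastic checks would, as the abstract notes, still be analyzed in detail by stress specialists later.
    
    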

  4. Numerical design method for thermally loaded plate-cylinder intersections

    SciTech Connect

    Baldur, R.; Laberge, C.A.; Lapointe, D.

    1988-11-01

    This paper is an extension of work on stresses in corner radii described by the authors previously. Whereas the original study concerned itself with pressure effects only and the second reference gave the initial version of the work dealing with the thermal effects, this report gives more recent results concerning thermal loads specifically. As before, the results are limited to inside corner radii between cylinders and flat head closures. Similarly, the analysis is based on a systematic series of finite element calculations with the significant parameters covering the field of useful design boundaries. The results are condensed into a rapid method for the determination of peak stresses needed for performing fatigue analysis in pressure vessels subjected to a significant, variable thermal load. The paper takes into account the influence of the film coefficient, temporal temperature variations, and material properties. A set of coefficients provides a convenient method of stress evaluation suitable for design purposes.

  5. Preliminary demonstration of a robust controller design method

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1980-01-01

    Alternative computational procedures for obtaining a feedback control law which yields a control signal based on measurable quantities are evaluated. The three methods evaluated are: (1) the standard linear quadratic regulator design model; (2) minimization of the norm of the feedback matrix K via nonlinear programming, subject to the constraint that the closed-loop eigenvalues lie in a specified domain in the complex plane; and (3) maximization of the angles between the closed-loop eigenvectors in combination with minimization of the norm of K, also via constrained nonlinear programming. The third, or robust, design method was chosen to yield a closed-loop system whose eigenvalues are insensitive to small changes in the A and B matrices. The relationship between orthogonality of closed-loop eigenvectors and the sensitivity of closed-loop eigenvalues is described. Computer programs are described.
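    For intuition about method (1), the standard LQR design admits a closed-form solution in the scalar case, where the algebraic Riccati equation reduces to a quadratic. This is a generic textbook result, not one of the programs evaluated in the report.

    ```python
    import math

    # Scalar continuous-time LQR: x' = a*x + b*u, cost J = ∫ (q x^2 + r u^2) dt.
    # The algebraic Riccati equation 2*a*p - (b^2/r)*p^2 + q = 0 has the
    # positive root below; the optimal gain is k = b*p/r.
    def scalar_lqr(a, b, q, r):
        p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
        k = b * p / r
        return k, a - b * k            # feedback gain and closed-loop eigenvalue

    # Unstable plant a = 1 stabilized with unit weights:
    k, eig = scalar_lqr(a=1.0, b=1.0, q=1.0, r=1.0)
    # closed-loop eigenvalue is -sqrt(a^2 + b^2*q/r) = -sqrt(2)
    ```

    In the matrix case the same Riccati structure holds but must be solved numerically, which is where the constrained nonlinear programming of methods (2) and (3) enters.
    
    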

  6. National Tuberculosis Genotyping and Surveillance Network: Design and Methods

    PubMed Central

    Braden, Christopher R.; Schable, Barbara A.; Onorato, Ida M.

    2002-01-01

    The National Tuberculosis Genotyping and Surveillance Network was established in 1996 to perform a 5-year, prospective study of the usefulness of genotyping Mycobacterium tuberculosis isolates to tuberculosis control programs. Seven sentinel sites identified all new cases of tuberculosis, collected information on patients and contacts, and obtained patient isolates. Seven genotyping laboratories performed DNA fingerprinting analysis by the international standard IS6110 method. BioImage Whole Band Analyzer software was used to analyze patterns, and distinct patterns were assigned unique designations. Isolates with six or fewer bands on IS6110 patterns were also spoligotyped. Patient data and genotyping designations were entered in a relational database and merged with selected variables from the national surveillance database. In two related databases, we compiled the results of routine contact investigations and the results of investigations of the relationships of patients who had isolates with matching genotypes. We describe the methods used in the study. PMID:12453342

  7. Optical design and active optics methods in astronomy

    NASA Astrophysics Data System (ADS)

    Lemaitre, Gerard R.

    2013-03-01

    Optical designs for astronomy involve implementation of active optics and adaptive optics from X-ray to the infrared. Developments and results of active optics methods for telescopes, spectrographs and coronagraph planet finders are presented. The high accuracy and remarkable smoothness of surfaces generated by active optics methods also allow elaborating new optical design types with high aspheric and/or non-axisymmetric surfaces. Depending on the goal and performance requested for a deformable optical surface, analytical investigations are carried out with one of the various facets of elasticity theory: small-deformation thin-plate theory, large-deformation thin-plate theory, shallow spherical shell theory, or weakly conical shell theory. The resulting thickness distribution and associated bending force boundaries can be refined further with finite element analysis.

  8. Simplified Analysis Methods for Primary Load Designs at Elevated Temperatures

    SciTech Connect

    Carter, Peter; Jetter, Robert I; Sham, Sam

    2011-01-01

    The use of simplified (reference stress) analysis methods is discussed and illustrated for primary load high temperature design. Elastic methods are the basis of the ASME Section III, Subsection NH primary load design procedure. There are practical drawbacks with this approach, particularly for complex geometries and temperature gradients. The paper describes an approach which addresses these difficulties through the use of temperature-dependent elastic-perfectly plastic analysis. Correction factors are defined to address difficulties traditionally associated with discontinuity stresses, inelastic strain concentrations and multiaxiality. A procedure is identified to provide insight into how this approach could be implemented, but clearly there is additional work to be done to define and clarify the procedural steps to bring it to the point where it could be adapted into code language.

  9. Helicopter flight-control design using an H(2) method

    NASA Technical Reports Server (NTRS)

    Takahashi, Marc D.

    1991-01-01

    Rate-command and attitude-command flight-control designs for a UH-60 helicopter in hover are presented and were synthesized using an H(2) method. Using weight functions, this method allows the direct shaping of the singular values of the sensitivity, complementary sensitivity, and control input transfer-function matrices to give acceptable feedback properties. The designs were implemented on the Vertical Motion Simulator, and four low-speed hover tasks were used to evaluate the control system characteristics. The pilot comments from the accel-decel, bob-up, hovering turn, and side-step tasks indicated good decoupling and quick response characteristics. However, an underlying roll PIO tendency was found to exist away from the hover condition, which was caused by a flap regressing mode with insufficient damping.

  10. A Requirements-Driven Optimization Method for Acoustic Treatment Design

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    2016-01-01

    Acoustic treatment designers have long been able to target specific noise sources inside turbofan engines. Facesheet porosity and cavity depth are key design variables of perforate-over-honeycomb liners that determine levels of noise suppression as well as the frequencies at which suppression occurs. Layers of these structures can be combined to create a robust attenuation spectrum that covers a wide range of frequencies. Looking to the future, rapidly-emerging additive manufacturing technologies are enabling new liners with multiple degrees of freedom, and new adaptive liners with variable impedance are showing promise. More than ever, there is greater flexibility and freedom in liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. The subject of this paper is an analytical method that derives the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground.
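    A first-cut version of the cavity-depth/frequency trade described above is the quarter-wavelength estimate for a single-degree-of-freedom honeycomb liner: peak attenuation occurs near f = c/(4d). Facesheet impedance is neglected here, and the sound speed and target frequency are assumed values, not figures from the paper.

    ```python
    # Quarter-wavelength estimate relating honeycomb cavity depth to the
    # frequency of peak attenuation (textbook single-degree-of-freedom liner
    # approximation; the perforate facesheet reactance is neglected).
    C_AIR = 343.0  # speed of sound in air, m/s (ambient assumption)

    def peak_frequency(depth_m):
        return C_AIR / (4.0 * depth_m)

    def depth_for_frequency(f_hz):
        return C_AIR / (4.0 * f_hz)

    # Target the 2 kHz region typical of fan tones (assumed target):
    d = depth_for_frequency(2000.0)   # cavity depth in metres
    ```

    Stacking layers with different depths broadens this single-frequency peak into the robust attenuation spectrum the abstract describes; choosing which frequencies to target is exactly what the requirements-driven optimization supplies.
    
    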

  11. Application of an optimization method to high performance propeller designs

    NASA Technical Reports Server (NTRS)

    Li, K. C.; Stefko, G. L.

    1984-01-01

    The application of an optimization method to determine the propeller blade twist distribution which maximizes propeller efficiency is presented. The optimization employs a previously developed method which has been improved to include the effects of blade drag, camber and thickness. Before the optimization portion of the computer code is used, comparisons of calculated propeller efficiencies and power coefficients are made with experimental data for one NACA propeller at Mach numbers in the range of 0.24 to 0.50 and another NACA propeller at a Mach number of 0.71 to validate the propeller aerodynamic analysis portion of the computer code. Then comparisons of calculated propeller efficiencies for the optimized and the original propellers show the benefits of the optimization method in improving propeller performance. This method can be applied to the aerodynamic design of propellers having straight, swept, or nonplanar propeller blades.

  12. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two different variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a system optimization problem into several subtask optimizations that may be executed concurrently, plus a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited to exploiting the concurrent-processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response-surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent-processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.

  13. Synthesis of aircraft structures using integrated design and analysis methods

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Goetz, R. C.

    1978-01-01

    A systematic research program is reported to develop and validate methods for structural sizing of an airframe designed with the use of composite materials and active controls. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate structural sizing and the associated active control system, which is optimal with respect to a given merit function constrained by strength and aeroelasticity requirements.

  14. Design Methods for Load-bearing Elements from Crosslaminated Timber

    NASA Astrophysics Data System (ADS)

    Vilguts, A.; Serdjuks, D.; Goremikins, V.

    2015-11-01

    Cross-laminated timber is an environmentally friendly material, which possesses a decreased level of anisotropy in comparison with solid and glued timber. Cross-laminated timber could be used for load-bearing walls and slabs of multi-storey timber buildings as well as decking structures of pedestrian and road bridges. Design methods for cross-laminated timber elements subjected to bending, and to compression combined with bending, were considered. The presented methods were experimentally validated and verified by FEM. Two cross-laminated timber slabs were tested under static load. Pine wood was chosen as the board material. The design scheme of the considered plates was a simply supported beam with a span of 1.9 m loaded by a uniformly distributed load. The width of the plates was equal to 1 m. The considered cross-laminated timber plates were also analysed by FEM. The comparison of stresses acting in the edge fibres of the plate and the maximum vertical displacements shows that both considered methods can be used for engineering calculations. The difference between the results obtained experimentally and analytically is within the range of 2 to 31%. The difference between the results obtained by the effective strength and stiffness method and the transformed-sections method was not significant.
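    The design scheme described above (a simply supported 1 m wide strip spanning 1.9 m under uniformly distributed load) can be checked with elementary beam formulas. The load, thickness, and modulus below are assumed illustrative values, and the full homogeneous section is used; the effective strength and stiffness method of the paper would reduce the stiffness contribution of the cross layers.

    ```python
    # Simply supported beam under uniformly distributed load,
    # matching the design scheme in the abstract.
    L = 1.9        # span, m (from the abstract)
    b = 1.0        # width, m (from the abstract)
    h = 0.10       # total CLT thickness, m (assumed)
    w = 5.0e3      # uniformly distributed load, N/m (assumed)
    E = 11.0e9     # effective modulus of elasticity, Pa (assumed pine value)

    I = b * h ** 3 / 12.0                        # second moment of area (full section)
    M = w * L ** 2 / 8.0                         # midspan bending moment
    sigma = M * (h / 2.0) / I                    # edge-fibre bending stress
    delta = 5.0 * w * L ** 4 / (384.0 * E * I)   # midspan deflection
    ```

    Comparing this edge-fibre stress and midspan deflection against test and FEM results is exactly the check reported in the abstract, where the analytical-experimental difference was 2 to 31%.
    
    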

  15. Asymmetric MRI magnet design using a hybrid numerical method.

    PubMed

    Zhao, H; Crozier, S; Doddrell, D M

    1999-12-01

    This paper describes a hybrid numerical method for the design of asymmetric magnetic resonance imaging magnet systems. The problem is formulated as a field synthesis and the desired current density on the surface of a cylinder is first calculated by solving a Fredholm equation of the first kind. Nonlinear optimization methods are then invoked to fit practical magnet coils to the desired current density. The field calculations are performed using a semi-analytical method. A new type of asymmetric magnet is proposed in this work. The asymmetric MRI magnet allows the diameter spherical imaging volume to be positioned close to one end of the magnet. The main advantages of making the magnet asymmetric include the potential to reduce the perception of claustrophobia for the patient, better access to the patient by attending physicians, and the potential for reduced peripheral nerve stimulation due to the gradient coil configuration. The results highlight that the method can be used to obtain an asymmetric MRI magnet structure and a very homogeneous magnetic field over the central imaging volume in clinical systems of approximately 1.2 m in length. Unshielded designs are the focus of this work. This method is flexible and may be applied to magnets of other geometries. PMID:10579958

  16. Development of quality-by-design analytical methods.

    PubMed

    Vogt, Frederick G; Kord, Alireza S

    2011-03-01

    Quality-by-design (QbD) is a systematic approach to drug development, which begins with predefined objectives, and uses science and risk management approaches to gain product and process understanding and ultimately process control. The concept of QbD can be extended to analytical methods. QbD mandates the definition of a goal for the method, and emphasizes thorough evaluation and scouting of alternative methods in a systematic way to obtain optimal method performance. Candidate methods are then carefully assessed in a structured manner for risks, and are challenged to determine if robustness and ruggedness criteria are satisfied. As a result of these studies, the method performance can be understood and improved if necessary, and a control strategy can be defined to manage risk and ensure the method performs as desired when validated and deployed. In this review, the current state of analytical QbD in the industry is detailed with examples of the application of analytical QbD principles to a range of analytical methods, including high-performance liquid chromatography, Karl Fischer titration for moisture content, vibrational spectroscopy for chemical identification, quantitative color measurement, and trace analysis for genotoxic impurities. PMID:21280050

  17. Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods

    NASA Technical Reports Server (NTRS)

    Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark

    2002-01-01

    Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.

  18. Towards Robust Designs Via Multiple-Objective Optimization Methods

    NASA Technical Reports Server (NTRS)

    Man Mohan, Rai

    2006-01-01

    A differential evolution (DE) method is first used to solve a relatively difficult problem in extended surface heat transfer wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil; the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
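    The conflicting objectives described above are resolved in multiple-objective methods by keeping only non-dominated designs. A minimal Pareto filter for a two-objective minimization is sketched below; this is a generic building block, not the DE implementation of the paper, and the sample designs are made-up values.

    ```python
    # Pareto filter for a two-objective minimization problem.
    def dominates(p, q):
        # p dominates q if p is no worse in every objective
        # and strictly better in at least one.
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    def pareto_front(points):
        # Keep only points not dominated by any other point.
        return [p for p in points if not any(dominates(q, p) for q in points)]

    # Hypothetical (objective1, objective2) values for five candidate designs:
    designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
    front = pareto_front(designs)
    # (3.0, 4.0) is dominated by (2.0, 3.0); the other four are non-dominated.
    ```

    An evolutionary method such as DE evolves a population toward this front, leaving the designer to pick the preferred trade-off point afterwards.
    
    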

  19. Methods of compliance evaluation for ocean outfall design and analysis.

    PubMed

    Mukhtasor; Lye, L M; Sharp, J J

    2002-10-01

    Sewage discharge from an ocean outfall is subject to water quality standards, which are often stated in probabilistic terms. Monte Carlo simulation (MCS) has been used in the past to evaluate the ability of a designed outfall to meet water quality standards or compliance guidelines associated with sewage discharges. In this study, simpler and less computer-intensive probabilistic methods are considered. The probabilistic methods evaluated are the popular mean first-order second-moment (MFOSM) and the advanced first-order second-moment (AFOSM) methods. Available data from the Spaniard's Bay Outfall located on the east coast of Newfoundland, Canada, were used as inputs for a case study. Both methods were compared with results given by MCS. It was found that AFOSM gave a good approximation of the failure probability for total coliform concentration at points remote from the outfall. However, MFOSM was found to be better when considering only the initial dilutions between the discharge point and the surface. Reasons for the different results may be the difference in complexity of the performance function in both cases. This study does not recommend the use of AFOSM for failure analysis in ocean outfall design and analysis because the analysis requires computational efforts similar to MCS. With the advancement of computer technology, simulation techniques, available software, and its flexibility in handling complex situations, MCS is still the best choice for failure analysis of ocean outfalls when data or estimates on the parameters involved are available or can be assumed. PMID:12481920
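    The MFOSM-versus-MCS comparison can be sketched for the simplest case: a linear performance function g = R - S with independent normal capacity R and demand S, for which MFOSM is exact and the two estimates should agree. The means and standard deviations below are assumed values, not data from the Spaniard's Bay study.

    ```python
    import math
    import random

    # Performance function g = R - S (capacity minus demand), both normal.
    muR, sR = 10.0, 2.0   # capacity mean and std (assumed)
    muS, sS = 6.0, 1.5    # demand mean and std (assumed)

    def phi(z):
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # MFOSM: reliability index beta, failure probability Pf = Phi(-beta).
    beta = (muR - muS) / math.sqrt(sR ** 2 + sS ** 2)
    pf_mfosm = phi(-beta)

    # Brute-force Monte Carlo estimate of P(g < 0).
    random.seed(0)
    n = 200000
    fails = sum(1 for _ in range(n)
                if random.gauss(muR, sR) - random.gauss(muS, sS) < 0.0)
    pf_mcs = fails / n
    ```

    For the nonlinear dilution models of a real outfall the MFOSM linearization introduces error, which is why the study finds the cheaper moment methods adequate only in some regimes while MCS remains the general-purpose choice.
    
    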

  20. Improved Method of Design for Folding Inflatable Shells

    NASA Technical Reports Server (NTRS)

    Johnson, Christopher J.

    2009-01-01

    An improved method of designing complexly shaped inflatable shells to be assembled from gores was conceived for original application to the inflatable outer shell of a developmental habitable spacecraft module having a cylindrical mid-length section with toroidal end caps. The method is also applicable to inflatable shells of various shapes for terrestrial use. The method addresses problems associated with the assembly, folding, transport, and deployment of inflatable shells that may comprise multiple layers and have complex shapes that can include such doubly curved surfaces as toroids and spheres. One particularly difficult problem is that of mathematically defining fold lines on a gore pattern in a double-curvature region. Moreover, because the fold lines in a double-curvature region tend to be curved, there is a practical problem of how to implement the folds. Another problem is that of modifying the basic gore shapes and sizes for the various layers so that when they are folded as part of the integral structure, they do not mechanically interfere with each other at the fold lines. Heretofore, it has been a common practice to design an inflatable shell to be assembled in the deployed configuration, without regard for the need to fold it into compact form. Typically, the result has been that folding has been a difficult, time-consuming process.

  1. A geometric design method for side-stream distillation columns

    SciTech Connect

    Rooks, R.E.; Malone, M.F.; Doherty, M.F.

    1996-10-01

    A side-stream distillation column may replace two simple columns for some applications, sometimes at considerable savings in energy and investment. This paper describes a geometric method for the design of side-stream columns; the method provides rapid estimates of equipment size and utility requirements. Unlike previous approaches, the geometric method is applicable to nonideal and azeotropic mixtures. Several example problems for both ideal and nonideal mixtures, including azeotropic mixtures containing distillation boundaries, are given. The authors make use of the fact that azeotropes or pure components whose classification in the residue curve map is a saddle can be removed as side-stream products. Significant process simplifications are found among some alternatives in example problems, leading to flow sheets with fewer units and a substantial savings in vapor rate.

  2. Sequence design in lattice models by graph theoretical methods

    NASA Astrophysics Data System (ADS)

    Sanjeev, B. S.; Patra, S. M.; Vishveshwara, S.

    2001-01-01

    A general strategy has been developed based on graph theoretical methods, for finding amino acid sequences that take up a desired conformation as the native state. This problem of inverse design has been addressed by assigning topological indices for the monomer sites (vertices) of the polymer on a 3×3×3 cubic lattice. This is a simple design strategy, which takes into account only the topology of the target protein and identifies the best sequence for a given composition. The procedure allows the design of a good sequence for a target native state by assigning weights for the vertices on a lattice site in a given conformation. It is seen across a variety of conformations that the predicted sequences perform well both in sequence and in conformation space, in identifying the target conformation as native state for a fixed composition of amino acids. Although the method is tested in the framework of the HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] it can be used in any context if proper potential functions are available, since the procedure derives unique weights for all the sites (vertices, nodes) of the polymer chain of a chosen conformation (graph).
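    The idea of ranking monomer sites by a topological index can be illustrated on the 3×3×3 lattice with the simplest such index, the coordination number of each site. Assigning H (hydrophobic) to the highest-ranked sites of a conformation is an illustrative stand-in for the weighted-vertex scheme of the paper; the index choice and the example chain are assumptions.

    ```python
    from itertools import product

    # Coordination numbers on the 3x3x3 cubic lattice: the centre site has
    # 6 lattice neighbours (most buried), corners have 3 (most exposed).
    sites = list(product(range(3), repeat=3))

    def degree(s):
        x, y, z = s
        steps = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))
        return sum(1 for dx, dy, dz in steps
                   if 0 <= x + dx < 3 and 0 <= y + dy < 3 and 0 <= z + dz < 3)

    deg = {s: degree(s) for s in sites}

    def assign_hp(conformation, n_h):
        # conformation: ordered list of lattice sites visited by the chain.
        # Put the n_h hydrophobic residues on the most connected (buried) sites.
        ranked = sorted(conformation, key=lambda s: deg[s], reverse=True)
        buried = set(ranked[:n_h])
        return ''.join('H' if s in buried else 'P' for s in conformation)
    ```

    In the HP model this heuristic tends to bury the hydrophobic residues in the core of the target conformation, which is the intuition behind weighting vertices by topology in the inverse-design procedure.
    
    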

  3. Modified method to improve the design of Petlyuk distillation columns

    PubMed Central

    2014-01-01

    Background A response surface analysis was performed to study the effect of the composition and feeding thermal conditions of ternary mixtures on the number of theoretical stages and the energy consumption of Petlyuk columns. A modification of the pre-design algorithm was necessary for this purpose. Results The modified algorithm provided feasible results in 100% of the studied cases, compared with only 8.89% for the current algorithm. The proposed algorithm allowed us to attain the desired separations, regardless of the type of mixture and the operating conditions in the feed stream, something that was not possible with the traditional pre-design method. The results showed that the type of mixture had great influence on the number of stages and on energy consumption. A higher number of stages and a lower consumption of energy were attained with mixtures rich in the light component, while higher energy consumption occurred when the mixture was rich in the heavy component. Conclusions The proposed strategy expands the search for an optimal design of Petlyuk columns within a feasible region, allowing us to find a feasible design that meets output specifications with low thermal loads. PMID:25061476

  4. A geometric method for optimal design of color filter arrays.

    PubMed

    Hao, Pengwei; Li, Yan; Lin, Zhouchen; Dubois, Eric

    2011-03-01

    A color filter array (CFA) used in a digital camera is a mosaic of spectrally selective filters, which allows only one color component to be sensed at each pixel. The missing two components of each pixel have to be estimated by methods known as demosaicking. The demosaicking algorithm and the CFA design are crucial for the quality of the output images. In this paper, we present a CFA design methodology in the frequency domain. The frequency structure, which is shown to be just the symbolic DFT of the CFA pattern (one period of the CFA), is introduced to represent images sampled with any rectangular CFAs in the frequency domain. Based on the frequency structure, the CFA design involves the solution of a constrained optimization problem that aims at minimizing the demosaicking error. To decrease the number of parameters and speed up the parameter searching, the optimization problem is reformulated as the selection of geometric points on the boundary of a convex polygon or the surface of a convex polyhedron. Using our methodology, several new CFA patterns are found, which outperform the currently commercialized and published ones. Experiments demonstrate the effectiveness of our CFA design methodology and the superiority of our new CFA patterns. PMID:20858581
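
    The frequency-structure idea can be illustrated numerically: taking one period of a CFA and computing the DFT of each channel's indicator pattern yields the coefficients that weight the luma and chroma components in the sampled image's spectrum. The sketch below uses the classic Bayer pattern as the example period; the paper's optimized patterns are not reproduced here.

```python
import numpy as np

# One period of the (example) Bayer CFA: which channel each pixel samples.
#   G R
#   B G
pattern = [["G", "R"],
           ["B", "G"]]

# The "frequency structure" is the (symbolic) DFT of the CFA period; here we
# compute it numerically per channel via an indicator image of that channel.
freq = {}
for ch in "RGB":
    ind = np.array([[1.0 if p == ch else 0.0 for p in row] for row in pattern])
    freq[ch] = np.fft.fft2(ind) / ind.size  # normalized DFT coefficients

# At DC (0, 0) the pattern mixes 1/4 R + 1/2 G + 1/4 B: the luma component.
# The remaining frequency bins carry the modulated chroma components.
print(freq["R"][0, 0].real, freq["G"][0, 0].real, freq["B"][0, 0].real)
# → 0.25 0.5 0.25
```

    CFA optimization in this framework amounts to choosing the pattern (and hence these coefficients) so that the luma and chroma bands overlap as little as possible, which minimizes demosaicking error.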

  5. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
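
    The key conclusion, that shrinking the scatter of the most sensitive random variable lowers the failure probability, can be sketched with a toy Monte Carlo reliability estimate. The response model and all parameter values below are illustrative stand-ins, not the smart-wing model from the paper.

```python
import random

random.seed(0)  # reproducible illustration

def tip_angle(stiffness, load):
    # Hypothetical response model: tip angle grows with load, drops with stiffness.
    return 100.0 * load / stiffness

def failure_probability(sigma_load, n=100_000, limit=12.0):
    """Estimate P(angle > limit) by Monte Carlo sampling of the random inputs."""
    fails = 0
    for _ in range(n):
        k = random.gauss(1000.0, 50.0)       # stiffness: fixed scatter
        p = random.gauss(100.0, sigma_load)  # load: scatter is the design knob
        if tip_angle(k, p) > limit:
            fails += 1
    return fails / n

# Reducing the scatter of the most sensitive variable lowers the failure risk.
p_wide = failure_probability(sigma_load=15.0)
p_narrow = failure_probability(sigma_load=5.0)
print(p_wide > p_narrow)  # True
```

    In a real probabilistic design cycle the sensitivity factors identify which variable's distribution parameters to control during manufacturing, exactly as described above.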

  6. A formal method for early spacecraft design verification

    NASA Astrophysics Data System (ADS)

    Fischer, P. M.; Ludtke, D.; Schaus, V.; Gerndt, A.

    In the early design phase of a spacecraft, various aspects of the system under development are described and modeled using parameters such as masses, power consumption, or data rates. Power and data parameters in particular are special because their values can change depending on the spacecraft's operational mode. These mode-dependent parameters can easily be verified against static requirements such as a maximum data rate. Such quick verifications allow the engineers to check the design after every change they apply. In contrast, requirements concerning the mission lifetime, such as the amount of data downlinked during the whole mission, demand a more complex procedure. We propose an executable model together with a simulation framework to evaluate complex mission scenarios. In conjunction with a formalized specification of mission requirements, it allows quick verification by means of formal methods.
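
    The distinction between quick static checks and mission-level checks can be sketched as follows; the modes, rates, and requirement values are invented for illustration and are not from the paper.

```python
# Quick mission-budget check in the spirit of the approach described above.
MAX_RATE_MBPS = 2.0  # static requirement: peak downlink data rate (MB/s)

modes = {"idle": 0.25, "science": 1.5, "downlink": 1.75}  # MB/s per mode
timeline = [("science", 600), ("downlink", 300), ("idle", 900)]  # (mode, s)

# Static check: every mode-dependent rate respects the peak-rate requirement.
assert all(rate <= MAX_RATE_MBPS for rate in modes.values())

# Mission-level check: total data produced over the whole scenario, which a
# lifetime requirement (e.g. total downlinked volume) would be checked against.
total_mb = sum(modes[mode] * seconds for mode, seconds in timeline)
print(total_mb)  # 600*1.5 + 300*1.75 + 900*0.25 = 1650.0
```

    The static check runs in constant time after every design change, while the mission-level figure requires simulating the whole timeline, which is why the paper pairs an executable model with a simulation framework.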

  7. Collocation methods for distillation design. 2: Applications for distillation

    SciTech Connect

    Huss, R.S.; Westerberg, A.W.

    1996-05-01

    The authors present applications of a collocation method for modeling distillation columns that they developed in a companion paper. They discuss implementation of the model, including the ASCEND (Advanced System for Computations in ENgineering Design) system, which enables one to create complex models from simple building blocks and interactively learn to solve them. They first apply the model to compute minimum reflux for a given separation task, solving nonsharp minimum reflux problems exactly and sharp split problems approximately. They next illustrate the use of the collocation model to optimize the design of a single column capable of carrying out a prescribed set of separation tasks. The optimization picks the best column diameter and total number of trays. It also picks the feed tray for each of the prescribed separations.

  8. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models in accurately predicting the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  9. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
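
    The costate idea can be illustrated on a toy problem: with a linear state equation A u = b(d) and a cost J(u), a single adjoint solve yields the gradient of J with respect to the design variable d, independent of how many design variables there are. The matrices and functions below are arbitrary illustrative values, not the flow-field equations from the paper.

```python
import numpy as np

# Toy constrained design problem: state equation A u = b(d), cost J = 0.5 u.u.
A = np.array([[4.0, 1.0], [1.0, 3.0]])

def b(d):
    return np.array([d, 2.0 * d])

def db_dd(d):
    return np.array([1.0, 2.0])

def J(d):
    u = np.linalg.solve(A, b(d))
    return 0.5 * u @ u

def gradient(d):
    u = np.linalg.solve(A, b(d))    # forward (state) solve
    lam = np.linalg.solve(A.T, u)   # adjoint solve: A^T lam = dJ/du
    return lam @ db_dd(d)           # chain rule through the state equation

# Cross-check the adjoint gradient against a central finite difference of J.
d0, h = 1.0, 1e-6
fd = (J(d0 + h) - J(d0 - h)) / (2.0 * h)
print(abs(gradient(d0) - fd) < 1e-6)  # True
```

    The point of the adjoint formulation, as in the paper, is that the gradient costs one extra linear solve regardless of the number of design variables, which is what makes shape optimization at the cost of a few analyses feasible.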

  10. A Method for Designing CDO Conformed to Investment Parameters

    NASA Astrophysics Data System (ADS)

    Nakae, Tatsuya; Moritsu, Toshiyuki; Komoda, Norihisa

    We propose a method for designing CDOs (Collateralized Debt Obligations) that meet investor needs regarding the attributes of the CDO. It is demonstrated that adjusting the attributes (credit capability and issue amount) of a CDO to investors' preferences causes a capital loss risk borne by the agent. We formulate a CDO optimization problem by defining an objective function using the above risk and by setting constraints that arise from investor needs and a risk premium that is paid to the agent. Our prototype experiment, in which fictitious underlying obligations and investor needs are given, verifies that CDOs can be designed without opportunity loss and dead stock loss, and that the capital loss is not more than a thousandth of the amount of annual payments under guarantees for small and medium-sized enterprises by a general credit guarantee institution.

  11. Hardware architecture design of a fast global motion estimation method

    NASA Astrophysics Data System (ADS)

    Liang, Chaobing; Sang, Hongshi; Shen, Xubang

    2015-12-01

    VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store the coordinates of the corners and the template patches, while the Gaussian pyramids of both the template and the reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3x3 patch in the template image, a 5x5 area containing the warped 3x3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based and burst-mode data access helps keep the off-chip memory bandwidth requirement to a minimum. Although patch size varies across pyramid levels, all patches are processed in terms of 3x3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design uses 24,080 bits of on-chip memory, and for a sequence with a resolution of 352x288 at 60 Hz, the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications such as video codecs, video stabilization, and super-resolution, where real-time GME is a necessity and a minimal memory bandwidth requirement is appreciated.

  12. Conceptual Design Method Developed for Advanced Propulsion Nozzles

    NASA Technical Reports Server (NTRS)

    Nadell, Shari-Beth; Barnhart, Paul J.

    1998-01-01

    As part of a contract with the NASA Lewis Research Center, a simple, accurate method of predicting the performance characteristics of a nozzle design has been developed for use in conceptual design studies. The Nozzle Performance Analysis Code (NPAC) can predict the on- and off-design performance of axisymmetric or two-dimensional convergent and convergent-divergent nozzle geometries. NPAC accounts for the effects of overexpansion or underexpansion, flow divergence, wall friction, heat transfer, and small mass addition or loss across surfaces when the nozzle gross thrust and gross thrust coefficient are being computed. NPAC can be used to predict the performance of a given nozzle design or to develop a preliminary nozzle system design for subsequent analysis. The input required by NPAC consists of a simple geometry definition of the nozzle surfaces, the location of key nozzle stations (entrance, throat, exit), and the nozzle entrance flow properties. NPAC performs three analysis "passes" on the nozzle geometry. First, an isentropic control volume analysis is performed to determine the gross thrust and gross thrust coefficient of the nozzle. During the second analysis pass, the skin friction and heat transfer losses are computed. The third analysis pass couples the effects of wall shear and heat transfer with the initial internal nozzle flow solutions to produce a system of equations that is solved at steps along the nozzle geometry. Small mass additions or losses, such as those resulting from leakage or bleed flow, can be included in the model at specified geometric sections. A final correction is made to account for divergence losses that are incurred if the nozzle exit flow is not purely axial.

  13. Design of braided composite tubes by numerical analysis method

    SciTech Connect

    Hamada, Hiroyuki; Fujita, Akihiro; Maekawa, Zenichiro; Nakai, Asami; Yokoyama, Atsushi

    1995-11-01

    Conventional composite laminates have very poor through-thickness strength and as a result are limited in their application to structural parts with complex shapes. In this paper, a design method for braided composite tubes is proposed. An analysis model spanning from a micro model to a macro model is presented. The method is applied to predict the bending rigidity and the initial fracture stress under bending load of a braided tube. The proposed analytical procedure can be included as a unit in a CAE system for braided composites.

  14. Methods to Design and Synthesize Antibody-Drug Conjugates (ADCs)

    PubMed Central

    Yao, Houzong; Jiang, Feng; Lu, Aiping; Zhang, Ge

    2016-01-01

    Antibody-drug conjugates (ADCs) have become a promising targeted therapy strategy that combines the specificity, favorable pharmacokinetics and biodistributions of antibodies with the destructive potential of highly potent drugs. One of the biggest challenges in the development of ADCs is the application of suitable linkers for conjugating drugs to antibodies. Recently, the design and synthesis of linkers are making great progress. In this review, we present the methods that are currently used to synthesize antibody-drug conjugates by using thiols, amines, alcohols, aldehydes and azides. PMID:26848651

  15. An alternate method for designing dipole magnet ends

    SciTech Connect

    Pope, W.L.; Green, M.A.; Peters, C.; Caspi, S.; Taylor, C.E.

    1988-08-01

    Small bore superconducting dipole magnets, such as those for the SSC, often have problems in the ends. These problems can often be alleviated by spreading out the end windings so that the conductor sees less deformation. This paper presents a new procedure for designing dipole magnet ends which can be applied to magnets with either cylindrical or conical bulged ends to obtain integrated field multipoles which meet the constraints imposed by the SSC lattice. The method described here permits one to couple existing multiparameter optimization routines (i.e., MINUIT with suitable independent parameter constraints) with a computer code, DIPEND, which describes the multipoles, so that one can meet any reasonable objective (e.g., minimizing integrated sextupole and decapole). This paper describes how the computer method was used to analyze the bulged conical ends for an SSC dipole. 6 refs, 6 figs, 2 tabs.

  16. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of FPGAs' low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection, and minimum selection are analyzed and compared. The VHDL language and a synchronous design method are used to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
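
    The three pixel-level fusion rules named above (weighted averaging, maximum selection, minimum selection) can be sketched in software before committing them to RTL; the tiny test images below are illustrative.

```python
import numpy as np

def fuse(vis, ir, method="weighted", w=0.5):
    """Pixel-level fusion of two registered grayscale images (illustrative)."""
    if method == "weighted":
        out = w * vis + (1.0 - w) * ir   # gray-scale weighted averaging
    elif method == "max":
        out = np.maximum(vis, ir)        # maximum selection
    elif method == "min":
        out = np.minimum(vis, ir)        # minimum selection
    else:
        raise ValueError(method)
    return np.clip(out, 0, 255).astype(np.uint8)

vis = np.array([[100, 200]], dtype=np.float64)  # visible-light pixels
ir = np.array([[180, 40]], dtype=np.float64)    # infrared pixels
print(fuse(vis, ir, "weighted").tolist())  # → [[140, 120]]
print(fuse(vis, ir, "max").tolist())       # → [[180, 200]]
```

    Each rule is purely per-pixel, which is why such algorithms map naturally onto a streaming FPGA pipeline with one result per clock cycle.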

  17. A novel observer design method for neural mass models

    NASA Astrophysics Data System (ADS)

    Liu, Xian; Miao, Dong-Kai; Gao, Qing; Xu, Shi-Yun

    2015-09-01

    Neural mass models can simulate the generation of electroencephalography (EEG) signals with different rhythms, and therefore the observation of the states of these models plays a significant role in brain research. The structure of neural mass models is special in that they can be expressed as Lurie systems. The developed techniques in Lurie system theory are applicable to these models. We here provide a new observer design method for neural mass models by transforming these models and the corresponding error systems into nonlinear systems with Lurie form. The purpose is to establish appropriate conditions which ensure the convergence of the estimation error. The effectiveness of the proposed method is illustrated by numerical simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 61473245, 61004050, and 51207144).
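
    A state observer for a Lurie-form system can be sketched as a Luenberger-type estimator that injects the output error; the system matrices and observer gain below are illustrative choices, not the paper's model or its convergence conditions.

```python
import numpy as np

# Toy Lurie-form system x' = A x + b*phi(c.x), output y = c.x, with a
# sector-bounded nonlinearity phi.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
L = np.array([2.0, 3.0])  # observer gain (assumed, not from the paper)
phi = np.tanh             # slope-restricted nonlinearity in sector [0, 1]

def step(x, xhat, dt=0.01):
    # Forward-Euler integration of plant and observer over one time step.
    y = c @ x
    dx = A @ x + b * phi(c @ x)
    dxhat = A @ xhat + b * phi(c @ xhat) + L * (y - c @ xhat)
    return x + dt * dx, xhat + dt * dxhat

x, xhat = np.array([1.0, -0.5]), np.zeros(2)
e0 = np.linalg.norm(x - xhat)
for _ in range(3000):  # simulate 30 s
    x, xhat = step(x, xhat)
print(np.linalg.norm(x - xhat) < 1e-3 * e0)  # estimation error has converged
```

    In the paper, conditions guaranteeing this kind of error convergence are derived by casting the error dynamics themselves in Lurie form; here convergence is simply observed numerically for one gain choice.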

  18. A Method of Trajectory Design for Manned Asteroids Exploration

    NASA Astrophysics Data System (ADS)

    Gan, Q. B.; Zhang, Y.; Zhu, Z. F.; Han, W. H.; Dong, X.

    2014-11-01

    A trajectory optimization method for nuclear-propulsion manned asteroid exploration is presented. For launches between 2035 and 2065, based on the Lambert transfer orbit, the departure from and return to the Earth are searched first. Then the optimal flight trajectory in the feasible regions is selected by pruning the flight sequences. Setting the nuclear propulsion flight plan as propel-coast-propel, and taking the minimal departure mass of the spacecraft as the index, the nuclear propulsion flight trajectory of each of the three phases is optimized separately using a hybrid method. With the optimized local parameters of the three phases as initial values, the global parameters are jointly optimized. Finally, the minimal departure mass trajectory design result is given.

  19. Impact design methods for ceramic components in gas turbine engines

    SciTech Connect

    Song, J.; Cuccio, J.; Kington, H. (Garrett Auxiliary Power Division)

    1993-01-01

    Garrett Auxiliary Power Division of Allied-Signal Aerospace Company is developing methods to design ceramic turbine components with improved impact resistance. In an ongoing research effort under the DOE/NASA-funded Advanced Turbine Technology Applications Project (ATTAP), two different modes of impact damage have been identified and characterized: local damage and structural damage. Local impact damage to Si3N4 impacted by spherical projectiles usually takes the form of ring and/or radial cracks in the vicinity of the impact point. Baseline data from Si3N4 test bars impacted by 1.588-mm (0.0625-in.) diameter NC-132 projectiles indicate that the critical velocity at which the probability of detecting surface cracks is 50 percent equals 130 m/s (426 ft/sec). A microphysics-based model that assumes damage to be in the form of microcracks has been developed to predict local impact damage. Local stress and strain determine microcrack nucleation and propagation, which in turn alter local stress and strain through modulus degradation. Material damage is quantified by a damage parameter related to the volume fraction of microcracks. The entire computation has been incorporated into the EPIC computer code. Model capability is being demonstrated by simulating instrumented plate impact and particle impact tests. Structural impact damage usually occurs in the form of fast fracture caused by bending stresses that exceed the material strength. The EPIC code has been successfully used to predict radial and axial blade failures from impacts by various size particles. This method is also being used in conjunction with Taguchi experimental methods to investigate the effects of design parameters on turbine blade impact resistance. It has been shown that significant improvement in impact resistance can be achieved by using the configuration recommended by the Taguchi methods.

  20. Novel computational methods to design protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Zhou, Alice Qinhua; O'Hern, Corey; Regan, Lynne

    2014-03-01

    Despite the abundance of structural data, we still cannot accurately predict the structural and energetic changes resulting from mutations at protein interfaces. The inadequacy of current computational approaches to the analysis and design of protein-protein interactions has hampered the development of novel therapeutic and diagnostic agents. In this work, we apply a simple physical model that includes only a minimal set of geometrical constraints, excluded volume, and attractive van der Waals interactions to 1) rank the binding affinity of mutants of tetratricopeptide repeat proteins with their cognate peptides, 2) rank the energetics of binding of small designed proteins to the hydrophobic stem region of the influenza hemagglutinin protein, and 3) predict the stability of T4 lysozyme and staphylococcal nuclease mutants. This work will not only lead to a fundamental understanding of protein-protein interactions, but also to the development of efficient computational methods to rationally design protein interfaces with tunable specificity and affinity, and numerous applications in biomedicine. NSF DMR-1006537, PHY-1019147, Raymond and Beverly Sackler Institute for Biological, Physical and Engineering Sciences, and Howard Hughes Medical Institute.

  1. Sensitivity method for integrated structure/active control law design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1987-01-01

    The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given along with the need for integrated structures/controls design algorithms. Three alternative approaches to development of an integrated design method are briefly discussed with regards to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear, quadratic cost, Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman Filter solution. Numerical results for a state space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case first wing bending natural frequency.

  2. 77 FR 32632 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-01

    ...Notice is hereby given that the Environmental Protection Agency (EPA) has designated, in accordance with 40 CFR Part 53, three new equivalent methods: One for measuring concentrations of nitrogen dioxide (NO2) and two for measuring concentrations of lead (Pb) in the ambient...

  3. Simplified design method for shear-valve magnetorheological dampers

    NASA Astrophysics Data System (ADS)

    Ding, Yang; Zhang, Lu; Zhu, Haitao; Li, Zhongxian

    2014-12-01

    Based on the Bingham parallel-plate model, a simplified design method for shear-valve magnetorheological (MR) dampers is proposed that incorporates magnetic circuit optimization. Correspondingly, a new MR damper with a full-length effective damping path is proposed. Prototype dampers were fabricated and studied numerically and experimentally. Based on the test results, the Bingham parallel-plate model is further modified to obtain a damping force prediction model for the proposed MR dampers. This prediction model accounts for the magnetic saturation phenomenon. The study indicates that the proposed simplified design method is simple, effective, and reliable. The maximum damping force of the proposed MR dampers with a full-length effective damping path is at least twice as large as those of conventional MR dampers, and the dynamic range of the damping force increases by at least 70%. By accounting for magnetic saturation, the proposed prediction model captures the actual characteristics of MR fluids and can predict the actual damping force of the MR dampers precisely.
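
    The Bingham parallel-plate model referenced above gives a quasi-static damper force as the sum of a viscous term and a field-controllable yield term. The sketch below uses one common textbook form of that model; all parameter values are illustrative assumptions, not the prototype's dimensions.

```python
import math

def bingham_damper_force(v, eta=0.1, tau_y=30e3, L=0.02, w=0.15, h=1e-3,
                         Ap=1e-3, c=2.5):
    """Quasi-static Bingham parallel-plate estimate of MR damper force.

    v: piston velocity (m/s); eta: plastic viscosity (Pa*s); tau_y: field-
    dependent yield stress (Pa); L, w, h: gap length, width, height (m);
    Ap: piston area (m^2); c: flow-shape coefficient (roughly 2.07-3.07).
    All defaults are illustrative, not taken from the paper.
    """
    f_viscous = 12.0 * eta * L * Ap**2 * v / (w * h**3)
    f_yield = c * tau_y * L * Ap / h * math.copysign(1.0, v) if v else 0.0
    return f_viscous + f_yield

f_on = bingham_damper_force(0.1)               # field on: tau_y = 30 kPa
f_off = bingham_damper_force(0.1, tau_y=0.0)   # field off: purely viscous
print(f_on > f_off)  # True: the controllable yield term dominates
```

    The ratio f_on/f_off is essentially the damper's dynamic range, which is why a full-length effective damping path (larger active L) raises both the peak force and the dynamic range, as the paper reports.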

  4. A new method of dual FOV optical system design

    NASA Astrophysics Data System (ADS)

    Zhang, Liang

    2009-07-01

    With the development of science and technology, infrared imaging has been applied in fields such as industry, medical treatment, and national defense. Infrared detection has the advantage of seeing through smoke, fog, haze, and snow, and is not affected by battlefield flash. Hence, it enables long-range, all-weather reconnaissance, especially at night and in adverse weather conditions. Single-FOV, dual-FOV, multi-FOV, and continuous-zoom optical systems have all seen growing application as infrared imaging technology has matured, so the study of dual-FOV optical systems is increasingly important. Having two fields of view, such a system belongs to the class of simple zoom optical systems. Zoom methods include single zoom, rotary zoom, radial zoom, axial zoom, and so on. Based on an analysis of these zoom methods, a new zoom approach has been developed, which realizes a dual-FOV optical system by sharing the secondary imaging lenses. This design method brings the results close to the diffraction limit and improves the alignment precision of the optical axis. It also reduces the number of moving parts and eases system assembly.

  5. Development of Analysis Methods for Designing with Composites

    NASA Technical Reports Server (NTRS)

    Madenci, E.

    1999-01-01

    The project involved the development of new analysis methods to achieve efficient design of composite structures. We developed a complex variational formulation to analyze the in-plane and bending coupling response of an unsymmetrically laminated plate with an elliptical cutout subjected to arbitrary edge loading, as shown in Figure 1. This formulation utilizes four independent complex potentials that satisfy the coupled in-plane and bending equilibrium equations, thus eliminating the area integrals from the strain energy expression. The solution for a finite-geometry laminate under arbitrary loading is obtained by minimizing the total potential energy function and solving for the unknown coefficients of the complex potentials. The validity of this approach is demonstrated by comparison with finite element analysis predictions for a laminate with an inclined elliptical cutout under bi-axial loading. The geometry and loading of this laminate, with a lay-up of [-45/45], are shown in Figure 2. The deformed configuration shown in Figure 3 reflects the presence of bending-stretching coupling. The validity of the present method is established by comparing the out-of-plane deflections along the boundary of the elliptical cutout from the present approach with those of the finite element method. The comparison shown in Figure 4 indicates remarkable agreement. The details of this method are described in a manuscript by Madenci et al. (1998).

  6. A New Aerodynamic Data Dispersion Method for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Pinier, Jeremy T.

    2011-01-01

    A novel method for implementing aerodynamic data dispersion analysis is herein introduced. A general mathematical approach combined with physical modeling tailored to the aerodynamic quantity of interest enables the generation of more realistically relevant dispersed data and, in turn, more reasonable flight simulation results. The method simultaneously allows for the aerodynamic quantities and their derivatives to be dispersed given a set of non-arbitrary constraints, which stresses the controls model in more ways than with the traditional bias up or down of the nominal data within the uncertainty bounds. The adoption and implementation of this new method within the NASA Ares I Crew Launch Vehicle Project has resulted in significant increases in predicted roll control authority, and lowered the induced risks for flight test operations. One direct impact on launch vehicles is a reduced size for auxiliary control systems, and the possibility of an increased payload. This technique has the potential of being applied to problems in multiple areas where nominal data together with uncertainties are used to produce simulations using Monte Carlo type random sampling methods. It is recommended that a tailored physics-based dispersion model be delivered with any aerodynamic product that includes nominal data and uncertainties, in order to make flight simulations more realistic and allow for leaner spacecraft designs.

  7. Nanobiological studies on drug design using molecular mechanic method

    PubMed Central

    Ghaheh, Hooria Seyedhosseini; Mousavi, Maryam; Araghi, Mahmood; Rasoolzadeh, Reza; Hosseini, Zahra

    2015-01-01

    Background: Influenza H1N1 is very important worldwide, and point mutations that occur in the virus genome are a concern for the World Health Organization (WHO) and drug designers, since they could make the virus resistant to existing drugs. Influenza epidemics cause severe respiratory illness in 30 to 50 million people and kill 250,000 to 500,000 people worldwide every year. Nowadays, drug design is not done through trial and error because of the cost and time involved; therefore, bioinformatics studies are essential for designing drugs. Materials and Methods: This paper presents a study of the binding site of the neuraminidase (NA) enzyme (which is very important in drug design) at a temperature of 310 K and in different dielectrics, toward the best drug design. Information on the NA enzyme was extracted from the Protein Data Bank (PDB) and National Center for Biotechnology Information (NCBI) websites. The new sequences of N1 were downloaded from the NCBI influenza virus sequence database. Drug binding sites were modeled by homology using ArgusLab 4.0, HyperChem 6.0, and Chem3D software, and their stability was assessed in different dielectrics and at different temperatures. Results: Measurements of the potential energy (kcal/mol) of the binding sites of NA in different dielectrics at 310 K revealed that at time step size = 0 ps the drug binding sites have the maximum energy level, and at time step size = 100 ps they have maximum stability and minimum energy. Conclusions: Drug binding sites depend more on dielectric constants than on temperature, and the optimum dielectric constant is 39/78. PMID:26605248

  8. RFQ Designs and Beam-Loss Distributions for IFMIF

    SciTech Connect

    Jameson, Robert A

    2007-01-01

    The IFMIF 125 mA cw 40 MeV accelerators will set an intensity record. Minimization of particle loss along the accelerator is a top-level requirement and demands a sophisticated design intimately relating the accelerated beam and the accelerator structure. Such a design technique, based on the space-charge physics of linear accelerators (linacs), is used in this report in the development of conceptual designs for the Radio-Frequency-Quadrupole (RFQ) section of the IFMIF accelerators. Design comparisons are given for the IFMIF CDR Equipartitioned RFQ, a CDR Alternative RFQ, and new IFMIF Post-CDR Equipartitioned RFQ designs. Design strategies are illustrated for combining several desirable characteristics, prioritized as minimum beam loss at energies above ~1 MeV, low rf power, low peak field, short length, and a high percentage of accelerated particles. The CDR design has ~0.073% losses above 1 MeV, requires ~1.1 MW rf structure power, has a KP factor of 1.7, is 12.3 m long, and accelerates ~89.6% of the input beam. A new Post-CDR design has ~0.077% losses above 1 MeV, requires ~1.1 MW rf structure power, has a KP factor of 1.7 and ~8 m length, and accelerates ~97% of the input beam. A complete background for the designs is given, and comparisons are made. Beam-loss distributions are used as input for nuclear physics simulations of radioactivity effects in the IFMIF accelerator hall, to give information for shielding, radiation safety, and maintenance design. Beam-loss distributions resulting from a ~1M particle input distribution representative of the IFMIF ECR ion source are presented. The simulations reported were performed with a consistent family of codes. Relevant comparison with other codes has not been possible as their source code is not available. Certain differences have been noted but are not consistent over a broad range of designs and parameter range.
The exact transmission found by any of these codes should be treated as indicative, as each has various sensitivities in
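    An "equipartitioned" linac design, as in the CDR and Post-CDR RFQs above, balances the transverse and longitudinal beam temperatures so that space-charge coupling cannot drive emittance exchange between planes. A minimal sketch of the standard equipartitioning ratio (generic symbols and values, not IFMIF design numbers):

    ```python
    # Standard linac equipartitioning condition: the products of normalized
    # rms emittance and depressed phase advance per period are equal in the
    # longitudinal and transverse planes.

    def equipartition_ratio(eps_t_n, sigma_t, eps_l_n, sigma_l):
        """Return (eps_l_n * sigma_l) / (eps_t_n * sigma_t).

        eps_*_n : normalized rms emittance in each plane
        sigma_* : depressed phase advance per focusing period in each plane
        A ratio of 1.0 means the beam is equipartitioned; a ratio far from
        1.0 indicates free energy available for emittance exchange.
        """
        return (eps_l_n * sigma_l) / (eps_t_n * sigma_t)

    # Matched example: equal emittances and phase advances give ratio 1.0.
    print(equipartition_ratio(0.2, 30.0, 0.2, 30.0))
    ```

    Holding this ratio near 1.0 along the RFQ is what the report's design strategy trades off against rf power, peak field, and length.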

  9. Formal methods in the design of Ada 1995

    NASA Technical Reports Server (NTRS)

    Guaspari, David

    1995-01-01

    Formal, mathematical methods are most useful when applied early in the design and implementation of a software system--that, at least, is the familiar refrain. I will report on a modest effort to apply formal methods at the earliest possible stage, namely, in the design of the Ada 95 programming language itself. This talk is an 'experience report' that provides brief case studies illustrating the kinds of problems we worked on, how we approached them, and the extent (if any) to which the results proved useful. It also derives some lessons and suggestions for those undertaking future projects of this kind. Ada 95 is the first revision of the standard for the Ada programming language. The revision began in 1988, when the Ada Joint Programming Office first asked the Ada Board to recommend a plan for revising the Ada standard. The first step in the revision was to solicit criticisms of Ada 83. A set of requirements for the new language standard, based on those criticisms, was published in 1990. A small design team, the Mapping Revision Team (MRT), became exclusively responsible for revising the language standard to satisfy those requirements. The MRT, from Intermetrics, is led by S. Tucker Taft. The work of the MRT was regularly subject to independent review and criticism by a committee of distinguished Reviewers and by several advisory teams--for example, the two User/Implementor teams, each consisting of an industrial user (attempting to make significant use of the new language on a realistic application) and a compiler vendor (undertaking, experimentally, to modify its current implementation in order to provide the necessary new features). One novel decision established the Language Precision Team (LPT), which investigated language proposals from a mathematical point of view. The LPT applied formal mathematical analysis to help improve the design of Ada 95 (e.g., by clarifying the language proposals) and to help promote its acceptance (e.g., by identifying a

  10. Learning physics: A comparative analysis between instructional design methods

    NASA Astrophysics Data System (ADS)

    Mathew, Easow

    The purpose of this research was to determine whether there were differences in academic performance between students who participated in traditional versus collaborative problem-based learning (PBL) instructional design approaches to physics curricula. This study used a quantitative quasi-experimental design to determine the significance of differences in pre- and posttest introductory physics exam performance between students who participated in traditional (i.e., control group) versus collaborative problem-solving (PBL) instructional design (i.e., experimental group) approaches to physics curricula over a college semester in 2008. There were 42 student participants (N = 42) enrolled in an introductory physics course at the research site in the Spring 2008 semester who agreed to participate in this study after reading and signing informed consent documents. A total of 22 participants were assigned to the experimental group (n = 22), who participated in a PBL-based teaching methodology along with traditional lecture methods. The other 20 students were assigned to the control group (n = 20), who participated in the traditional lecture teaching methodology. Both courses were taught by experienced professors with doctoral-level qualifications. The results indicated statistically significant differences (p < .01) in academic performance between students who participated in traditional (i.e., lower physics posttest scores and lower differences between pre- and posttest scores) versus collaborative (i.e., higher physics posttest scores and higher differences between pre- and posttest scores) instructional design approaches to physics curricula. Despite some slight differences in control- and experimental-group demographic characteristics (gender, ethnicity, and age), there was a statistically significant (p = .04) difference between female average academic improvement, which was much higher than male average academic improvement (~63%).
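    The pre/post comparison described above can be sketched as a two-sample test on score gains (posttest minus pretest) for the two groups. A minimal stdlib sketch using Welch's t statistic; the gain values below are small synthetic placeholders, not the study's data (which had n = 22 PBL and n = 20 control participants):

    ```python
    import math
    import statistics

    def welch_t(a, b):
        """Welch's t statistic for two independent samples (unequal variances)."""
        mean_a, mean_b = statistics.mean(a), statistics.mean(b)
        var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variance
        se = math.sqrt(var_a / len(a) + var_b / len(b))
        return (mean_a - mean_b) / se

    # Synthetic score gains (posttest - pretest), NOT the study's data:
    pbl_gains = [30, 28, 25, 27, 26]      # experimental (PBL) group
    lecture_gains = [12, 15, 10, 14, 13]  # control (lecture) group
    print(round(welch_t(pbl_gains, lecture_gains), 2))  # large positive t favors PBL
    ```

    In practice one would compare the statistic against the t distribution with Welch-Satterthwaite degrees of freedom (e.g., via `scipy.stats.ttest_ind(..., equal_var=False)`) to obtain the p-values the abstract reports.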