Sample records for equipartitioned design method

  1. Holographic equipartition from first order action

    NASA Astrophysics Data System (ADS)

    Wang, Jingbo

    2017-12-01

    Recently, the idea that gravity is emergent has attracted much attention. The "Emergent Gravity Paradigm" is a program that develops this idea from the thermodynamical point of view, expressing the Einstein equation in the language of thermodynamics. A key equation in this paradigm is holographic equipartition, which states that in all static spacetimes the degrees of freedom on the boundary equal those in the bulk, and that the time evolution of spacetime is driven by the departure from holographic equipartition. In this paper, we obtain holographic equipartition and its generalization from the first-order formalism, in which the connection and its conjugate momentum are taken as the canonical variables. The final results have a structure similar to those of the metric formalism, providing another derivation of holographic equipartition.

  2. The two Faces of Equipartition

    NASA Astrophysics Data System (ADS)

    Sanchez-Sesma, F. J.; Perton, M.; Rodriguez-Castellanos, A.; Campillo, M.; Weaver, R. L.; Rodriguez, M.; Prieto, G.; Luzon, F.; McGarr, A.

    2008-12-01

    Equipartition is good. Beyond its philosophical implications, in many instances of statistical physics it implies that the available kinetic and potential elastic energy, in phase space, is distributed in the same fixed proportions among the possible "states". There are at least two distinct and complementary descriptions of such states in a diffuse elastic wave field u(r,t). One asserts that u may be represented as an incoherent isotropic superposition of incident plane waves of different polarizations. Each type of wave has an appropriate share of the available energy. This definition, introduced by Weaver, is similar to the room-acoustics notion of a diffuse field, and it suffices to permit prediction of field correlations. The other description assumes that the degrees of freedom of the system, in this case the kinetic energy densities, are all incoherently excited with equal expected amplitude. This definition, introduced by Maxwell, is also familiar from room acoustics using the normal modes of vibration within an arbitrarily large body. Usually, to establish whether an elastic field is diffuse and equipartitioned, only the first description has been applied, which requires the separation of dilatational and shear waves using carefully designed experiments. When the medium is bounded by an interface, waves of other modes, for example Rayleigh waves, complicate the measurement of these energies. As a consequence, it can be advantageous to use the second description. Moreover, when an elastic field is diffuse and equipartitioned, each spatial component of the energy densities is linked to the corresponding component of the imaginary part of the Green function at the source. Accordingly, one can use the second description to retrieve the Green function and obtain more information about the medium. The equivalence between the two descriptions of equipartition is given for an infinite space and extended to the case of a half-space.

  3. Holographic equipartition and the maximization of entropy

    NASA Astrophysics Data System (ADS)

    Krishna, P. B.; Mathew, Titus K.

    2017-09-01

    The accelerated expansion of the Universe can be interpreted as a tendency to satisfy holographic equipartition. It can be expressed by a simple law, ΔV = Δt (N_surf − ε N_bulk), where V is the Hubble volume in Planck units, t is the cosmic time in Planck units, and N_surf/N_bulk is the number of degrees of freedom on the horizon/in the bulk of the Universe. We show that this holographic equipartition law effectively implies the maximization of entropy. In the cosmological context, a system that obeys the holographic equipartition law behaves as an ordinary macroscopic system that proceeds to an equilibrium state of maximum entropy. We consider the standard ΛCDM model of the Universe and show that it is consistent with the holographic equipartition law. Analyzing the entropy evolution, we find that it also proceeds to an equilibrium state of maximum entropy.
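    The law above can be checked numerically. A minimal sketch, assuming Planck units (G = c = ħ = k_B = 1), a toy flat FRW universe with a single fluid of equation of state w, and the standard identifications N_surf = A/L_p², T = H/2π, N_bulk = −2(ρ+3p)V/T; all numbers are illustrative, not from the paper:

```python
import numpy as np

# Illustrative check of the holographic equipartition law
#   dV/dt = N_surf - N_bulk
# in Planck units (G = c = hbar = k_B = 1) for a toy flat FRW universe
# filled with a single fluid of equation of state w (values assumed).

w = -0.7                                  # accelerating fluid: rho + 3p < 0
t = 1.0e3                                 # cosmic time [Planck units]
H = 2.0 / (3.0 * (1.0 + w) * t)           # Hubble rate for a ~ t^(2/(3(1+w)))
Hdot = -1.5 * (1.0 + w) * H**2            # dH/dt for the same power law

rho = 3.0 * H**2 / (8.0 * np.pi)          # Friedmann equation
p = w * rho

V = 4.0 * np.pi / (3.0 * H**3)            # Hubble volume
dVdt = -4.0 * np.pi * Hdot / H**4         # time derivative of V

N_surf = 4.0 * np.pi / H**2               # horizon degrees of freedom, A/L_p^2
T = H / (2.0 * np.pi)                     # Gibbons-Hawking temperature
N_bulk = -2.0 * (rho + 3.0 * p) * V / T   # bulk (Komar) degrees of freedom

assert np.isclose(dVdt, N_surf - N_bulk)  # the equipartition law holds
```

    For a pure de Sitter phase (Hdot = 0) the same identity reduces to N_surf = N_bulk, i.e. a static Hubble volume.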

  4. No energy equipartition in globular clusters

    NASA Astrophysics Data System (ADS)

    Trenti, Michele; van der Marel, Roeland

    2013-11-01

    It is widely believed that globular clusters evolve over many two-body relaxation times towards a state of energy equipartition, so that velocity dispersion scales with stellar mass as σ ∝ m^−η with η = 0.5. We show here that this is incorrect, using a suite of direct N-body simulations with a variety of realistic initial mass functions and initial conditions. No simulated system ever reaches a state close to equipartition. Near the centre, the luminous main-sequence stars reach a maximum η_max ≈ 0.15 ± 0.03. At late times, all radial bins converge to an asymptotic value η_∞ ≈ 0.08 ± 0.02. The development of this `partial equipartition' is strikingly similar across our simulations, despite the range of different initial conditions employed. Compact remnants tend to have higher η than main-sequence stars (but still η < 0.5), due to their steeper (evolved) mass function. The presence of an intermediate-mass black hole (IMBH) decreases η, consistent with our previous findings of a quenching of mass segregation under these conditions. All these results can be understood as a consequence of the Spitzer instability for two-component systems, extended by Vishniac to a continuous mass spectrum. Mass segregation (the tendency of heavier stars to sink towards the core) has often been studied observationally, but energy equipartition has not. With the advent of high-quality proper-motion data sets from the Hubble Space Telescope, it is now possible to measure η for real clusters. Detailed data-model comparisons open up a new observational window on globular cluster dynamics and evolution. A first comparison of our simulations to observations of Omega Cen yields good agreement, supporting the view that globular clusters are not generally in energy equipartition. Modelling techniques that assume equipartition by construction (e.g. multi-mass Michie-King models) are approximate at best.
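    A hedged sketch of how an exponent η in σ ∝ m^−η is typically extracted from kinematic data by a log-log fit; the mock masses, dispersions, noise level, and η value below are invented for illustration, not taken from the paper's simulations:

```python
import numpy as np

# Recovering eta in sigma(m) ~ m^(-eta) from synthetic (mass, dispersion)
# data. A 'partial equipartition' value eta = 0.08 is assumed for the mock.

rng = np.random.default_rng(0)
eta_true = 0.08                          # assumed partial-equipartition level
m = np.linspace(0.2, 0.9, 40)            # stellar masses [Msun]
sigma = 10.0 * m**(-eta_true) * np.exp(0.01 * rng.standard_normal(m.size))

# slope of log(sigma) versus log(m) is -eta
slope, intercept = np.polyfit(np.log(m), np.log(sigma), 1)
eta_fit = -slope
assert abs(eta_fit - eta_true) < 0.02
```

    The same log-log regression applied to full equipartition data would return η ≈ 0.5.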

  5. A novel look at energy equipartition in globular clusters

    NASA Astrophysics Data System (ADS)

    Bianchini, P.; van de Ven, G.; Norris, M. A.; Schinnerer, E.; Varri, A. L.

    2016-06-01

    Two-body interactions play a major role in shaping the structural and dynamical properties of globular clusters (GCs) over their long-term evolution. In particular, GCs evolve towards a state of partial energy equipartition that induces a mass dependence in their kinematics. By using a set of Monte Carlo cluster simulations evolved in quasi-isolation, we show that the stellar mass dependence of the velocity dispersion σ(m) can be described by an exponential function σ² ∝ exp(−m/m_eq), with the parameter m_eq quantifying the degree of partial energy equipartition of the systems. This simple parametrization successfully captures the behaviour of the velocity dispersion at lower as well as higher stellar masses, that is, the regime where the system is expected to approach full equipartition. We find a tight correlation between the degree of equipartition reached by a GC and its dynamical state, indicating that clusters that are more than about 20 core relaxation times old have reached a maximum degree of equipartition. This equipartition-dynamical state relation can be used as a tool to characterize the relaxation condition of a cluster with a kinematic measure of the m_eq parameter. Conversely, the mass dependence of the kinematics can be predicted knowing the relaxation time solely on the basis of photometric measurements. Moreover, any deviation from this tight relation could be used as a probe of a peculiar dynamical history of a cluster. Finally, our novel approach is important for the interpretation of state-of-the-art Hubble Space Telescope proper motion data, for which the mass dependence of kinematics can now be measured, and for the application of modelling techniques which take into consideration multimass components and mass segregation.
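    The exponential parametrization makes m_eq easy to recover from data, since ln σ² is linear in m with slope −1/m_eq. A minimal sketch with synthetic, noise-free numbers (the m_eq value and normalization are assumptions, not the paper's results):

```python
import numpy as np

# Recovering m_eq from the parametrization sigma^2(m) ~ exp(-m / m_eq):
# ln(sigma^2) is linear in m with slope -1/m_eq. All values are synthetic.

m_eq_true = 1.5                       # assumed equipartition mass [Msun]
m = np.linspace(0.1, 1.8, 30)         # stellar masses [Msun]
sigma2 = 25.0 * np.exp(-m / m_eq_true)

slope, _ = np.polyfit(m, np.log(sigma2), 1)
m_eq_fit = -1.0 / slope
assert abs(m_eq_fit - m_eq_true) < 1e-6
```

    A smaller m_eq corresponds to a stronger mass dependence, i.e. a more relaxed cluster.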

  6. Core plasma design of the compact helical reactor with a consideration of the equipartition effect

    NASA Astrophysics Data System (ADS)

    Goto, T.; Miyazawa, J.; Yanagi, N.; Tamura, H.; Tanaka, T.; Sakamoto, R.; Suzuki, C.; Seki, R.; Satake, S.; Nunami, M.; Yokoyama, M.; Sagara, A.; the FFHR Design Group

    2018-07-01

    An integrated physics analysis of the plasma operation scenario of the compact helical reactor FFHR-c1 has been conducted. The DPE method, which predicts radial profiles in a reactor by direct extrapolation from reference experimental data, has been extended to incorporate the equipartition effect. A close investigation of the plasma operation regime has been conducted, and a candidate plasma operation point of FFHR-c1 has been identified within the parameter regime already confirmed in LHD experiments, in view of MHD equilibrium, MHD stability, and neoclassical transport.

  7. Thermodynamic laws and equipartition theorem in relativistic Brownian motion.

    PubMed

    Koide, T; Kodama, T

    2011-06-01

    We extend stochastic energetics to a relativistic system. The thermodynamic laws and the equipartition theorem are discussed for a relativistic Brownian particle, and the first and second laws of thermodynamics in this formalism are derived. The relation between the relativistic equipartition relation and the rate of heat transfer is discussed for the relativistic case, together with the nature of the noise term.

  8. Temperature profile and equipartition law in a Langevin harmonic chain

    NASA Astrophysics Data System (ADS)

    Kim, Sangrak

    2017-09-01

    The temperature profile in a Langevin harmonic chain is explicitly derived and the validity of the equipartition law is checked. First, we point out that the temperature profile found in previous studies does not agree with the equipartition law: in thermal equilibrium, the temperature profile deviates from a uniform temperature distribution, in violation of the equipartition law, particularly at the ends of the chain. The matrix connecting the temperatures of the heat reservoirs to the temperatures of the harmonic oscillators turns out to be a probability matrix. By explicitly calculating the power spectrum of the probability matrix, we show that the discrepancy comes from neglecting the power spectrum at higher frequencies ω, which correspond to decaying modes and are related to imaginary values of the wave number q.

  9. Pressure-strain energy redistribution in compressible turbulence: return-to-isotropy versus kinetic-potential energy equipartition

    NASA Astrophysics Data System (ADS)

    Lee, Kurnchul; Venugopal, Vishnu; Girimaji, Sharath S.

    2016-08-01

    Return-to-isotropy and kinetic-potential energy equipartition are two fundamental pressure-moderated energy-redistributive processes in anisotropic compressible turbulence. The pressure-strain correlation tensor redistributes energy among the various Reynolds stress components, and pressure-dilatation is responsible for energy reallocation between the dilatational kinetic and potential energies. The competition and interplay between these pressure-based processes are investigated in this study. Direct numerical simulations (DNS) of low-turbulent-Mach-number dilatational turbulence are performed employing the hybrid thermal lattice Boltzmann method (HTLBM). It is found that the tendency towards equipartition precedes the proclivity for isotropization. The evolution towards equipartition has a collateral but critical effect on return-to-isotropy: the preferential transfer of energy from strong (rather than weak) Reynolds stress components to potential energy accelerates the isotropization of dilatational fluctuations. Understanding these pressure-based redistributive processes is critical for developing insight into the character of compressible turbulence.

  10. Deviation from the law of energy equipartition in a small dynamic-random-access memory

    NASA Astrophysics Data System (ADS)

    Carles, Pierre-Alix; Nishiguchi, Katsuhiko; Fujiwara, Akira

    2015-06-01

    A small dynamic random-access memory (DRAM) coupled to a high-charge-sensitivity electrometer based on a silicon field-effect transistor is used to study the law of equipartition of energy. By statistically analyzing the movement of single electrons in the DRAM under various temperature and voltage conditions in thermal equilibrium, we are able to observe behavior that differs from what the law of equipartition of energy predicts: when the charging energy of the capacitor of the DRAM is comparable to or smaller than the thermal energy k_BT/2, random electron motion is governed entirely by thermal energy; on the other hand, when the charging energy becomes higher in relation to the thermal energy k_BT/2, random electron motion is suppressed, which indicates a deviation from the law of equipartition of energy. Since the law of equipartition is analyzed using the DRAM, one of the most familiar electronic devices, we believe that our results are universal among electronic devices.
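    The crossover described above is set by comparing the single-electron charging energy E_c = e²/(2C) of the storage node with k_BT/2. A back-of-the-envelope sketch; the capacitance value is an assumption for illustration, not the device's actual value:

```python
# Comparing the single-electron charging energy E_c = e^2 / (2C) of a small
# storage node with the thermal energy k_B*T/2. The capacitance below is an
# assumed, illustrative value, not taken from the paper.

e = 1.602176634e-19      # elementary charge [C]
kB = 1.380649e-23        # Boltzmann constant [J/K]

C = 1e-18                # ~1 aF storage node (assumption)
T = 300.0                # temperature [K]

E_c = e**2 / (2.0 * C)   # charging energy [J]
E_th = kB * T / 2.0      # thermal energy per degree of freedom [J]

# Here E_c exceeds k_B*T/2: the regime where the abstract reports suppressed
# electron motion, i.e. a deviation from equipartition.
print(E_c > E_th)
```

    For a larger node (say 1 fF) the ratio drops well below one and thermal motion dominates, recovering equipartition behavior.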

  11. Kinematics of Globular Cluster: new Perspectives of Energy Equipartition from N-body Simulations

    NASA Astrophysics Data System (ADS)

    Kim, Hyunwoo; Pasquato, Mario; Yoon, Suk-jin

    2018-01-01

    Globular clusters (GCs) evolve dynamically through gravitational two-body interactions between stars. We investigate the evolution towards energy equipartition in GCs using direct N-body simulations with NBODY6. If a GC reaches full energy equipartition, the velocity dispersion as a function of stellar mass becomes a power law with exponent −1/2. However, our N-body simulations never reach full equipartition, consistent with the results of Trenti & van der Marel (2013). Instead, we find that in simulations with a shallow mass spectrum the best-fitting exponent becomes positive slightly before the core-collapse time. This inversion is a new result, which can be used as a kinematic predictor of core collapse. We are currently exploring applications of this inversion indicator to the detection of intermediate-mass black holes.

  12. The holographic principle, the equipartition of energy and Newton’s gravity

    NASA Astrophysics Data System (ADS)

    Sadiq, M.

    2017-12-01

    Assuming the equipartition of energy to hold on a holographic sphere, Erik Verlinde demonstrated that Newton's gravity follows as an entropic force. Some comments are in order about Verlinde's assumptions in his derivation. It is pointed out that the holographic principle allows freedom up to a free scale factor in the choice of the Planck-scale area while still leading to classical gravity. The similarity of this free parameter to the Immirzi parameter of loop quantum gravity is discussed. We point out that the equipartition of energy is built into the holographic principle and, therefore, need not be assumed from the outset.

  13. Accretion in Radiative Equipartition (AiRE) Disks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yazdi, Yasaman K.; Afshordi, Niayesh

    2017-07-01

    Standard accretion disk theory predicts that the total pressure in disks at typical (sub-)Eddington accretion rates becomes radiation pressure dominated. However, radiation pressure dominated disks are thermally unstable. Since these disks are observed in approximate steady state over the instability timescale, our accretion models in the radiation-pressure-dominated regime (i.e., inner disk) need to be modified. Here, we present a modification to the Shakura and Sunyaev model, where the radiation pressure is in equipartition with the gas pressure in the inner region. We call these flows accretion in radiative equipartition (AiRE) disks. We introduce the basic features of AiRE disks and show how they modify disk properties such as the Toomre parameter and the central temperature. We then show that the accretion rate of AiRE disks is limited from above and below, by Toomre and nodal sonic point instabilities, respectively. The former leads to a strict upper limit on the mass of supermassive black holes as a function of cosmic time (and spin), while the latter could explain the transition between hard and soft states of X-ray binaries.

  14. Accretion in Radiative Equipartition (AiRE) Disks

    NASA Astrophysics Data System (ADS)

    Yazdi, Yasaman K.; Afshordi, Niayesh

    2017-07-01

    Standard accretion disk theory predicts that the total pressure in disks at typical (sub-)Eddington accretion rates becomes radiation pressure dominated. However, radiation pressure dominated disks are thermally unstable. Since these disks are observed in approximate steady state over the instability timescale, our accretion models in the radiation-pressure-dominated regime (i.e., inner disk) need to be modified. Here, we present a modification to the Shakura & Sunyaev model, where the radiation pressure is in equipartition with the gas pressure in the inner region. We call these flows accretion in radiative equipartition (AiRE) disks. We introduce the basic features of AiRE disks and show how they modify disk properties such as the Toomre parameter and the central temperature. We then show that the accretion rate of AiRE disks is limited from above and below, by Toomre and nodal sonic point instabilities, respectively. The former leads to a strict upper limit on the mass of supermassive black holes as a function of cosmic time (and spin), while the latter could explain the transition between hard and soft states of X-ray binaries.

  15. Kinetic theory of binary particles with unequal mean velocities and non-equipartition energies

    NASA Astrophysics Data System (ADS)

    Chen, Yanpei; Mei, Yifeng; Wang, Wei

    2017-03-01

    The hydrodynamic conservation equations and constitutive relations for a binary granular mixture composed of smooth, nearly elastic spheres with non-equipartitioned energies and different mean velocities are derived. This research aims to build a three-dimensional kinetic theory that characterizes the behavior of two species of particles subject to different forces. The standard Enskog method is employed, assuming a Maxwell velocity distribution for each species of particles. The collision components of the stress tensor and the other parameters are calculated from the zeroth- and first-order approximations. Our results demonstrate that three factors, namely the differences between the two granular masses, temperatures, and mean velocities, all play important roles in the stress-strain relation of the binary mixture, indicating that the assumptions of energy equipartition and equal mean velocities may not be acceptable. The collision frequency and the solid viscosity increase monotonically with each granular temperature. The zeroth-order approximation to the energy dissipation varies greatly with the mean velocities of both species of spheres, reaching its peak value at the maximum of their relative velocity.

  16. Brightness temperature - obtaining the physical properties of a non-equipartition plasma

    NASA Astrophysics Data System (ADS)

    Nokhrina, E. E.

    2017-06-01

    The limit on the intrinsic brightness temperature, attributed to the `Compton catastrophe', has been established to be 10^12 K. A somewhat lower limit, of the order of 10^11.5 K, is implied if we assume that the radiating plasma is in equipartition with the magnetic field - the idea that explained why the observed cores of active galactic nuclei (AGNs) stayed below a limit lower than the `Compton catastrophe' one. Recent observations with unprecedentedly high resolution by RadioAstron have revealed a systematic excess over this brightness temperature limit. We propose a means of estimating the degree of the non-equipartition regime in AGN cores. Coupled with core-shift measurements, the method allows us to independently estimate the magnetic field strength and the particle number density at the core. We show that the ratio of magnetic energy to radiating plasma energy is of the order of 10^-5, which means the flow in the core is dominated by the particle energy. We show that the magnetic field obtained from brightness temperature measurements may be underestimated. For relativistic jets with small viewing angles we propose a non-uniform magnetohydrodynamic model and obtain an expression for the magnetic field amplitude about two orders of magnitude higher than that of the uniform model. These magnetic field amplitudes are consistent with the limiting magnetic field suggested by the `magnetically arrested disc' model.

  17. Gravitational attraction until relativistic equipartition of internal and translational kinetic energies

    NASA Astrophysics Data System (ADS)

    Bulyzhenkov, I. E.

    2018-02-01

    Translational ordering of internal kinematic chaos provides the special-relativistic referents for the geodesic motion of warm thermodynamic bodies. While the mathematics is identical, the relativistic physics of the low-speed transport of time-varying heat-energies differs from Newton's physics of steady masses without internal degrees of freedom. General Relativity predicts geodesic changes of the internal heat-energy variable under free gravitational fall and geodesic turns in the radial field center. Internal heat variations enable cyclic dynamics of decelerated falls and accelerated takeoffs of inertial matter and its structural self-organization. The coordinate speed of the ordered spatial motion reaches its maximum under the equipartition of relativistic internal and translational kinetic energies. Observable predictions are discussed for the verification/falsification of the principle of equipartition as a new basis for ordered motion and self-organization in external fields, including gravitational, electromagnetic, and thermal ones.

  18. The relationship between noise correlation and the Green's function in the presence of degeneracy and the absence of equipartition

    USGS Publications Warehouse

    Tsai, V.C.

    2010-01-01

    Recent derivations have shown that when noise in a physical system has its energy equipartitioned into the modes of the system, there is a convenient relationship between the cross-correlation of time series recorded at two points and the Green's function of the system. Here, we show that even when energy is not fully equipartitioned and modes are allowed to be degenerate, a similar (though less general) property holds for equations with wave-equation structure. This property can be used to understand why certain seismic noise correlation measurements are successful despite known degeneracy and lack of equipartition on the Earth.

  19. On the Equipartition of Kinetic Energy in an Ideal Gas Mixture

    ERIC Educational Resources Information Center

    Peliti, L.

    2007-01-01

    A refinement of an argument due to Maxwell for the equipartition of translational kinetic energy in a mixture of ideal gases with different masses is proposed. The argument is elementary, yet it may work as an illustration of the role of symmetry and independence postulates in kinetic theory.

  20. Symmetry blockade and its breakdown in energy equipartition of square graphene resonators

    NASA Astrophysics Data System (ADS)

    Wang, Yisen; Zhu, Zhigang; Zhang, Yong; Huang, Liang

    2018-03-01

    The interaction between flexural modes due to nonlinear potentials is critical to the heat conductivity and mechanical vibration of two-dimensional materials such as graphene. Much effort has been devoted to understanding the underlying mechanism. In this paper, we examine solely the out-of-plane flexural modes and identify their energy-flow pathway during the equipartition process. In particular, the modes are grouped into four classes by their distinct symmetries. The couplings are significantly larger within a class than between classes, forming symmetry blockades. As a result, the energy first flows to the modes in the same symmetry class. Breakdown of the symmetry blockade, i.e., inter-class energy flow, starts when the displacement profile becomes complex and the inter-class couplings become non-negligible. The equipartition time follows a stretched-exponential law and survives in the thermodynamic limit. These results bring fundamental understanding to the Fermi-Pasta-Ulam problem in two-dimensional systems with complex potentials and clearly reveal the physical picture of the dynamical interactions between the flexural modes, which is crucial to understanding their contribution to high thermal conductivity and the mechanism of energy dissipation that may intrinsically limit the quality factor of a resonator.

  1. Near-equipartition Jets with Log-parabola Electron Energy Distribution and the Blazar Spectral-index Diagrams

    NASA Astrophysics Data System (ADS)

    Dermer, Charles D.; Yan, Dahai; Zhang, Li; Finke, Justin D.; Lott, Benoit

    2015-08-01

    Fermi-LAT analyses show that the γ-ray photon spectral indices Γ_γ of a large sample of blazars correlate with the νF_ν peak synchrotron frequency ν_s according to the relation Γ_γ = d − k log ν_s. The same function, with different constants d and k, also describes the relationship between Γ_γ and the peak Compton frequency ν_C. This behavior is derived analytically using an equipartition blazar model with a log-parabola description of the electron energy distribution (EED). In the Thomson regime, k = k_EC = 3b/4 for external Compton (EC) processes and k = k_SSC = 9b/16 for synchrotron self-Compton (SSC) processes, where b is the log-parabola width parameter of the EED. The BL Lac object Mrk 501 is fit with a synchrotron/SSC model given by the log-parabola EED, and is best fit away from equipartition. Corrections are made to the spectral-index diagrams for a low-energy power-law EED and departures from equipartition, as constrained by absolute jet power. Analytic expressions are compared with numerical values derived from self-Compton and EC scattered γ-ray spectra from Lyα broad-line region and IR target photons. The Γ_γ versus ν_s behavior in the model depends strongly on b, with progressively and predictably weaker dependences on γ-ray detection range, variability time, and isotropic γ-ray luminosity. Implications for blazar unification and blazars as ultra-high-energy cosmic-ray sources are discussed. Arguments by Ghisellini et al. that the jet power exceeds the accretion luminosity depend on the doubtful assumption that we are viewing at the Doppler angle.
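    The Thomson-regime slopes quoted in the abstract can be written down directly. A minimal sketch; the values of b, d, and ν_s used here are illustrative assumptions, not fitted blazar parameters:

```python
# Thomson-regime slopes of the blazar spectral-index relation
#   Gamma_gamma = d - k * log10(nu_s)
# as quoted in the abstract: k = 3b/4 (external Compton), k = 9b/16 (SSC).
# The values of b, d and nu_s below are illustrative assumptions.

def k_ec(b):
    """Slope for external Compton scattering."""
    return 3.0 * b / 4.0

def k_ssc(b):
    """Slope for synchrotron self-Compton scattering."""
    return 9.0 * b / 16.0

b = 1.0            # log-parabola width parameter (assumed)
d = 12.25          # normalization constant (assumed)
log_nu_s = 13.0    # log10 of the peak synchrotron frequency in Hz (assumed)

gamma_ec = d - k_ec(b) * log_nu_s
print(k_ec(b), k_ssc(b), gamma_ec)   # 0.75 0.5625 2.5
```

    Note that for a given b the EC slope is steeper than the SSC slope by a factor 4/3, which is what lets the spectral-index diagram discriminate between the two processes.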

  2. On the link between energy equipartition and radial variation in the stellar mass function of star clusters

    NASA Astrophysics Data System (ADS)

    Webb, Jeremy J.; Vesperini, Enrico

    2017-01-01

    We make use of N-body simulations to determine the relationship between two observable parameters that are used to quantify mass segregation and energy equipartition in star clusters. Mass segregation can be quantified by measuring how the slope of a cluster's stellar mass function α changes with clustercentric distance r, and then calculating δ_α = dα(r)/d ln(r/r_m), where r_m is the cluster's half-mass radius. The degree of energy equipartition in a cluster is quantified by η, which is a measure of how the stellar velocity dispersion σ depends on stellar mass m via σ(m) ∝ m^−η. Through a suite of N-body star cluster simulations with a range of initial sizes, binary fractions, orbits, black hole retention fractions, and initial mass functions, we present the co-evolution of δ_α and η. We find that measurements of the global η are strongly affected by the radial dependence of σ and the mean stellar mass, and that the relationship between η and δ_α depends mainly on the cluster's initial conditions and the tidal field. Within r_m, where these effects are minimized, we find that η and δ_α initially share a linear relationship. However, once the degree of mass segregation increases such that the radial dependence of σ and the mean stellar mass become a factor within r_m, or the cluster undergoes core collapse, the relationship breaks down. We propose a method for determining η within r_m from an observational measurement of δ_α. In cases where η and δ_α can be measured independently, this new method offers a way of measuring the cluster's dynamical state.
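    A sketch of how δ_α is obtained in practice: fit the binned mass-function slopes α against ln(r/r_m). All numbers below are invented for illustration (an inwardly flattened mass function, as mass segregation produces), not the paper's simulation output:

```python
import numpy as np

# Measuring delta_alpha = d(alpha)/d(ln(r/r_m)) from mass-function slopes
# alpha binned at several clustercentric radii. All numbers are invented.

r_m = 3.0                                          # half-mass radius [pc]
r = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # bin radii [pc]
alpha = np.array([-0.8, -1.0, -1.2, -1.4, -1.6])   # local slopes, dN/dm ~ m^alpha

x = np.log(r / r_m)
delta_alpha, _ = np.polyfit(x, alpha, 1)           # fitted slope = delta_alpha
print(delta_alpha)                                 # ~ -0.29 for these mock bins
```

    A segregated cluster like this mock one gives δ_α < 0; an unsegregated cluster with a spatially uniform mass function gives δ_α ≈ 0.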

  3. Turbulent equipartition pinch of toroidal momentum in spherical torus

    NASA Astrophysics Data System (ADS)

    Hahm, T. S.; Lee, J.; Wang, W. X.; Diamond, P. H.; Choi, G. J.; Na, D. H.; Na, Y. S.; Chung, K. J.; Hwang, Y. S.

    2014-12-01

    We present a new analytic expression for the turbulent equipartition (TEP) pinch of toroidal angular momentum originating from the magnetic field inhomogeneity of spherical torus (ST) plasmas. Starting from a conservative modern nonlinear gyrokinetic equation (Hahm et al 1988 Phys. Fluids 31 2670), we derive an expression for the pinch to momentum diffusivity ratio without using the usual tokamak approximation B ∝ 1/R, which has previously been employed for TEP momentum pinch derivations in tokamaks (Hahm et al 2007 Phys. Plasmas 14 072302). Our new formula is evaluated for model equilibria of National Spherical Torus eXperiment (NSTX) (Ono et al 2001 Nucl. Fusion 41 1435) and Versatile Experiment Spherical Torus (VEST) (Chung et al 2013 Plasma Sci. Technol. 15 244) plasmas. Our result predicts a stronger inward pinch for both cases, as compared to the prediction based on the tokamak formula.

  4. Thermodynamic constraints on a varying cosmological-constant-like term from the holographic equipartition law with a power-law corrected entropy

    NASA Astrophysics Data System (ADS)

    Komatsu, Nobuyoshi

    2017-11-01

    A power-law corrected entropy based on quantum entanglement is considered to be a viable black-hole entropy. In this study, as an alternative to the Bekenstein-Hawking entropy, a power-law corrected entropy is applied to Padmanabhan's holographic equipartition law to thermodynamically examine an extra driving term in the cosmological equations for a flat Friedmann-Robertson-Walker universe at late times. Deviations from the Bekenstein-Hawking entropy generate an extra driving term (proportional to the αth power of the Hubble parameter, where α is a dimensionless constant for the power-law correction) in the acceleration equation, which can be derived from the holographic equipartition law. Interestingly, the value of the extra driving term in the present model is constrained by the second law of thermodynamics. From the thermodynamic constraint, the order of the driving term is found to be consistent with the order of the cosmological constant measured by observations. In addition, the driving term tends to be constant-like when α is small, i.e., when the deviation from the Bekenstein-Hawking entropy is small.

  5. Breakdown of equipartition in diffuse fields caused by energy leakage

    NASA Astrophysics Data System (ADS)

    Margerin, L.

    2017-05-01

    Equipartition is a central concept in the analysis of random wavefields which stipulates that in an infinite scattering medium all modes and propagation directions become equally probable at long lapse time in the coda. The objective of this work is to examine quantitatively how this conclusion is affected in an open waveguide geometry, with a particular emphasis on seismological applications. To carry out this task, the problem is recast as a spectral analysis of the radiative transfer equation. Using a discrete-ordinate approach, the smallest eigenvalue and the associated eigenfunction of the transfer equation, which control the asymptotic intensity distribution in the waveguide, are determined numerically with the aid of a shooting algorithm. The inverse of this eigenvalue may be interpreted as the leakage time of the diffuse waves out of the waveguide. The associated eigenfunction provides the depth and angular distribution of the specific intensity. The effect of boundary conditions and scattering anisotropy is investigated in a series of numerical experiments. Two propagation regimes are identified, depending on the ratio H∗ between the thickness of the waveguide and the transport mean free path in the layer. The thick-layer regime H∗ > 1 has been thoroughly studied in the literature in the framework of diffusion theory and is briefly considered. In the thin-layer regime H∗ < 1, we find that both boundary conditions and scattering anisotropy leave a strong imprint on the leakage effect. A parametric study reveals that in the presence of a flat free surface, the leakage time is essentially controlled by the mean free time of the waves in the layer in the limit H∗ → 0. By contrast, when the free surface is rough, the travel time of ballistic waves propagating through the crust becomes the limiting factor. For fixed H∗, the efficacy of leakage, as quantified by the inverse coda quality factor, increases with scattering anisotropy. For sufficiently thin layers

  6. Comment on ``Turbulent equipartition theory of toroidal momentum pinch'' [Phys. Plasmas 15, 055902 (2008)]

    NASA Astrophysics Data System (ADS)

    Peeters, A. G.; Angioni, C.; Strintzi, D.

    2009-03-01

    The comment addresses questions raised on the derivation of the momentum pinch velocity due to the Coriolis drift effect [A. G. Peeters et al., Phys. Rev. Lett. 98, 265003 (2007)]. These concern the definition of the gradient, and the scaling with the density gradient length. It will be shown that the turbulent equipartition mechanism is included within the derivation using the Coriolis drift, with the density gradient scaling being the consequence of drift terms not considered in [T. S. Hahm et al., Phys. Plasmas 15, 055902 (2008)]. Finally the accuracy of the analytic models is assessed through a comparison with the full gyrokinetic solution.

  7. Tail resonances of Fermi-Pasta-Ulam q-breathers and their impact on the pathway to equipartition

    NASA Astrophysics Data System (ADS)

    Penati, Tiziano; Flach, Sergej

    2007-06-01

    Upon initial excitation of a few normal modes the energy distribution among all modes of a nonlinear atomic chain (the Fermi-Pasta-Ulam model) exhibits exponential localization on large time scales. At the same time, resonant anomalies (peaks) are observed in its weakly excited tail for long times preceding equipartition. We observe a similar resonant tail structure also for exact time-periodic Lyapunov orbits, coined q-breathers due to their exponential localization in modal space. We give a simple explanation for this structure in terms of superharmonic resonances. The resonance analysis agrees very well with numerical results and has predictive power. We extend a previously developed perturbation method, based essentially on a Poincaré-Lindstedt scheme, in order to account for these resonances, and in order to treat more general model cases, including truncated Toda potentials. Our results give a qualitative and semiquantitative account for the superharmonic resonances of q-breathers and natural packets.

  8. N-body modeling of globular clusters: detecting intermediate-mass black holes by non-equipartition in HST proper motions

    NASA Astrophysics Data System (ADS)

    Trenti, Michele

    2010-09-01

    Intermediate Mass Black Holes (IMBHs) are objects of considerable astrophysical significance. They have been invoked as possible remnants of Population III stars, precursors of supermassive black holes, sources of ultra-luminous X-ray emission, and emitters of gravitational waves. The centers of globular clusters, where they may have formed through runaway collapse of massive stars, may be our best chance of detecting them. HST studies of velocity dispersions have provided tentative evidence, but the measurements are difficult and the results have been disputed. It is thus important to explore and develop additional indicators of the presence of an IMBH in these systems. In a Cycle 16 theory project we focused on the fingerprints of an IMBH derived from HST photometry. We showed that an IMBH leads to a detectable quenching of mass segregation. Analysis of HST-ACS data for NGC 2298 validated the method, and ruled out an IMBH of more than 300 solar masses. We propose here to extend the search for IMBH signatures from photometry to kinematics. The velocity dispersion of stars in collisionally relaxed stellar systems such as globular clusters scales with main sequence mass as sigma ∝ m^alpha. A value alpha = -0.5 corresponds to equipartition. Mass-dependent kinematics can now be measured from HST proper motion studies (e.g., alpha = -0.21 for Omega Cen). Preliminary analysis shows that the value of alpha can be used as an indicator of the presence of an IMBH. In fact, the quenching of mass segregation is a result of the degree of equipartition that the system attains. However, detailed numerical simulations are required to quantify this. Therefore we propose (a) to carry out a new, larger set of realistic N-body simulations of star clusters with IMBHs, primordial binaries and stellar evolution to predict in detail the expected kinematic signatures and (b) to compare these predictions to datasets that are (becoming) available.
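The scaling diagnostic above reduces to fitting a power-law slope: if sigma ∝ m^alpha, then alpha is the least-squares slope of log(sigma) versus log(m). A minimal sketch; the masses and dispersions below are invented for illustration, not cluster data:

```python
import math

def fit_alpha(masses, sigmas):
    """Least-squares slope of log(sigma) vs log(m), i.e. alpha in sigma ∝ m^alpha."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

masses = [0.2, 0.4, 0.6, 0.8, 1.0]           # solar masses (hypothetical)
sigmas = [10.0 * m ** -0.21 for m in masses]  # km/s, exact power law for the demo
alpha = fit_alpha(masses, sigmas)             # recovers -0.21
```

On real proper-motion data the fit would of course carry measurement errors; alpha = -0.5 would indicate full equipartition, while shallower values signal partial equipartition.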
Considerable HST resources have been invested in

  9. Radio Monitoring of the Tidal Disruption Event Swift J164449.3+573451. III. Late-time Jet Energetics and a Deviation from Equipartition

    NASA Astrophysics Data System (ADS)

    Eftekhari, T.; Berger, E.; Zauderer, B. A.; Margutti, R.; Alexander, K. D.

    2018-02-01

    We present continued radio and X-ray observations of the relativistic tidal disruption event Swift J164449.3+573451 extending to δt ≈ 2000 days after discovery. The radio data were obtained with the Very Large Array (VLA) as part of a long-term program to monitor the energy and dynamical evolution of the jet and to characterize the parsec-scale environment around a previously dormant supermassive black hole. We combine these data with Chandra observations and demonstrate that the X-ray emission following the sharp decline at δt ≈ 500 days is likely due to the forward shock. We constrain the synchrotron cooling frequency and the microphysical properties of the outflow for the first time. We find that the cooling frequency evolves through the optical/NIR band at δt ≈ 10–200 days, corresponding to ε_B ≈ 10^-3, well below equipartition; the X-ray data demonstrate that this deviation from equipartition holds to at least δt ≈ 2000 days. We thus recalculate the physical properties of the jet over the lifetime of the event, no longer assuming equipartition. We find a total kinetic energy of E_K ≈ 4 × 10^51 erg and a transition to non-relativistic expansion on the timescale of our latest observations (700 days). The density profile is approximately R^-3/2 at ≲0.3 pc and ≳0.7 pc, with a plateau at intermediate scales, characteristic of Bondi accretion. Based on its evolution thus far, we predict that Sw 1644+57 will be detectable at centimeter wavelengths for decades to centuries with existing and upcoming radio facilities. Similar off-axis events should be detectable to z ∼ 2, but with a slow evolution that may inhibit their recognition as transient events.

  10. Quantifying the interplay between gravity and magnetic field in molecular clouds - a possible multiscale energy equipartition in NGC 6334

    NASA Astrophysics Data System (ADS)

    Li, Guang-Xing; Burkert, Andreas

    2018-02-01

    The interplay between gravity, turbulence and the magnetic field determines the evolution of the molecular interstellar medium (ISM) and the formation of stars. In spite of growing interest, there remains a lack of understanding of the importance of the magnetic field over multiple scales. We derive the magnetic energy spectrum, a measure that constrains the multiscale distribution of the magnetic energy, and compare it with the gravitational energy spectrum derived in Li & Burkert. In our formalism, the gravitational energy spectrum is purely determined by the surface density probability density function (PDF), and the magnetic energy spectrum is determined by both the surface density PDF and the magnetic field-density relation. If regions have density PDFs close to P(Σ) ∼ Σ^-2 and a universal magnetic field-density relation B ∼ ρ^1/2, we expect a multiscale near equipartition between gravity and the magnetic fields. This equipartition is found to hold in NGC 6334, where estimates of magnetic fields over multiple scales (from 0.1 pc to a few parsec) are available. However, the current observations are still limited in sample size. In the future, it is necessary to obtain multiscale measurements of magnetic fields from different clouds with different surface density PDFs and apply our formalism to further study the gravity-magnetic field interplay.

  11. On the Foundation of Equipartition in Supernova Remnants

    NASA Astrophysics Data System (ADS)

    Urošević, Dejan; Pavlović, Marko Z.; Arbutina, Bojan

    2018-03-01

    A widely accepted paradigm is that equipartition (eqp) between the energy density of cosmic rays (CRs) and the energy density of the magnetic field cannot be sustained in supernova remnants (SNRs). However, our 3D hydrodynamic supercomputer simulations, coupled with a nonlinear diffusive shock acceleration model, provide evidence that eqp may be established at the end of the Sedov phase of evolution, in which most SNRs spend the longest portions of their lives. We introduce the term “constant partition” for any constant ratio between the CR energy density and the energy density of the magnetic field in an SNR, while the term “equipartition” should be reserved for the case of approximately the same values of the energy density (which is also a constant partition, to order of magnitude) of ultra-relativistic electrons only (or CRs in total) and the energy density of the magnetic field. Our simulations suggest that this approximate constant partition exists in all but the youngest SNRs. We speculate that since evolved SNRs at the end of the Sedov phase of evolution can reach eqp between CRs and magnetic fields, they may be responsible for initializing this type of eqp in the interstellar medium. Additionally, we show that eqp between the electron component of CRs and the magnetic field may be used for calculating the magnetic field strength directly from observations of synchrotron emission from SNRs. The values of magnetic field strengths in SNRs given here are approximately 2.5 times lower than values calculated by Arbutina et al.

  12. Tsallis and Kaniadakis statistics from a point of view of the holographic equipartition law

    NASA Astrophysics Data System (ADS)

    Abreu, Everton M. C.; Ananias Neto, Jorge; Mendes, Albert C. R.; Bonilla, Alexander

    2018-02-01

    In this work, we have illustrated the difference between the Tsallis and Kaniadakis entropies through cosmological models obtained from the formalism proposed by Padmanabhan, which is called the holographic equipartition law. Similarly to the formalism proposed by Komatsu, we obtain an extra driving constant term in the Friedmann equation if we deform the Tsallis entropy by Kaniadakis' formalism. We initially considered the Tsallis entropy as the black-hole (BH) area entropy. This constant term may lead the universe to be in an accelerated or decelerated mode. On the other hand, if we start with the Kaniadakis entropy as the BH area entropy and modify the κ expression by Tsallis' formalism, a term with the same absolute value but opposite sign is obtained. In the opposite limit, no driving inflation term for the early universe is derived from either deformation.

  13. Equipartition terms in transition path ensemble: Insights from molecular dynamics simulations of alanine dipeptide.

    PubMed

    Li, Wenjin

    2018-02-28

    The transition path ensemble consists of reactive trajectories and possesses all the information necessary for understanding the mechanism and dynamics of important condensed phase processes. However, a quantitative description of the properties of the transition path ensemble is far from established. Here, with numerical calculations on a model system, the equipartition terms defined in thermal equilibrium were estimated for the first time in the transition path ensemble. It was not surprising to observe that the energy was not equally distributed among all the coordinates. However, the energies distributed on a pair of conjugated coordinates remained equal. Higher energies were observed on several coordinates that are highly coupled to the reaction coordinate, while the rest were almost equally distributed. In addition, the ensemble-averaged energy on each coordinate as a function of time was also quantified. These quantitative analyses of energy distributions provide new insights into the transition path ensemble.
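The equipartition terms mentioned above are averages of the form ⟨x ∂H/∂x⟩, which equal k_B T for every coordinate in thermal equilibrium. A minimal Monte-Carlo sketch for a 1-D harmonic oscillator (not the alanine dipeptide system of the paper; the spring constant and temperature are arbitrary illustrative values):

```python
import random

def equipartition_term(k_spring=2.0, kT=1.0, n=200_000, seed=1):
    """Monte-Carlo estimate of <x * dH/dx> for H = 0.5*k*x^2 at temperature kT.

    Boltzmann sampling of this Hamiltonian gives x ~ N(0, kT/k), so the
    equipartition theorem predicts the average equals kT exactly.
    """
    rng = random.Random(seed)
    std = (kT / k_spring) ** 0.5
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, std)
        total += x * (k_spring * x)   # x * dH/dx for the harmonic Hamiltonian
    return total / n
```

In a transition path ensemble the sampling weight differs from the Boltzmann weight, which is precisely why these terms can deviate from k_B T coordinate by coordinate, as the abstract reports.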

  14. Relativistic equipartition via a massive damped sliding partition

    NASA Astrophysics Data System (ADS)

    Crawford, Frank S.

    1993-04-01

    A cylinder partitioned by a massive sliding slab undergoing nonrelativistic damped one-dimensional (1D) motion under bombardment from the left (i=1) and right (i=2) by particles having rest mass m_i, speed v_i, relativistic momentum (magnitude) p_i, and (with c ≡ 1) total energy E_i = (p_i^2 + m_i^2)^{1/2} is considered herein. The damped slab of mass M transforms the system from its initial p_i distributions (i=1,2) to a state, first, of pressure (P) equilibrium with P_1 = P_2 but temperature T_1 ≠ T_2, and then to P-T equilibrium with P_1 = P_2 and T_1 = T_2, given by the (1D) ``first moment'' equipartition relation (κ is Boltzmann's constant),

  15. Designing ROW Methods

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.

    1996-01-01

    There are many aspects to consider when designing a Rosenbrock-Wanner-Wolfbrandt (ROW) method for the numerical integration of ordinary differential equations (ODE's) solving initial value problems (IVP's). The process can be simplified by constructing ROW methods around good Runge-Kutta (RK) methods. The formulation of a new, simple, embedded, third-order, ROW method demonstrates this design approach.
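A full embedded third-order ROW scheme is beyond a short sketch, but the defining idea (replace the nonlinear solve of an implicit Runge-Kutta stage with a single linear solve using the Jacobian) is already visible in the one-stage case. The following is an illustrative one-stage Rosenbrock (linearly implicit Euler) integrator for scalar ODEs, not the new method formulated in the paper:

```python
import math

def rosenbrock_euler(f, dfdy, y0, t0, t1, h):
    """One-stage Rosenbrock (linearly implicit Euler) integrator for a scalar ODE.

    Each step solves the linear equation (1 - h*J)*k = f(y) with J = df/dy,
    then advances y += h*k. No Newton iteration is needed, yet the scheme
    inherits the stability that makes implicit methods attractive for stiff IVPs.
    """
    y, t = y0, t0
    while t < t1 - 1e-12:
        step = min(h, t1 - t)          # shorten the final step to land on t1
        J = dfdy(t, y)
        k = f(t, y) / (1.0 - step * J)  # the single linear "solve" (scalar case)
        y += step * k
        t += step
    return y

# Decay test problem y' = -y, y(0) = 1; the exact solution is exp(-t).
y_end = rosenbrock_euler(lambda t, y: -y, lambda t, y: -1.0, 1.0, 0.0, 1.0, 1e-3)
```

For a linear problem this step reduces exactly to implicit Euler; higher-order ROW methods combine several such linear stages, which is where the connection to good RK coefficient sets enters.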

  16. Aircraft digital control design methods

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Parsons, E.; Tashker, M. G.

    1976-01-01

    Variations in design methods for aircraft digital flight control are evaluated and compared. The methods fall into two categories: those where the design is done in the continuous domain (or s plane) and those where the design is done in the discrete domain (or z plane). Design method fidelity is evaluated by examining closed-loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps, except the uncompensated s plane design method, which was acceptable above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
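As a toy illustration of moving a continuous design into the discrete domain, the bilinear (Tustin) map s → (2/T)(z-1)/(z+1) turns a first-order lag H(s) = a/(s+a) into a difference equation. This is a generic textbook discretization, offered only as an example of the s-to-z step; it is not one of the specific methods evaluated in the study:

```python
def tustin_lowpass(a, T):
    """Discretize H(s) = a/(s+a) via the bilinear (Tustin) map.

    Substituting s = (2/T)(z-1)/(z+1) and clearing fractions gives
        (2 + a*T)*y[n] = (2 - a*T)*y[n-1] + a*T*(x[n] + x[n-1]).
    Returns a stateful step function implementing that difference equation.
    """
    b = a * T
    y_prev = 0.0
    x_prev = 0.0
    def step(x):
        nonlocal y_prev, x_prev
        y = ((2.0 - b) * y_prev + b * (x + x_prev)) / (2.0 + b)
        y_prev, x_prev = y, x
        return y
    return step

filt = tustin_lowpass(a=1.0, T=0.1)    # 10 samples/sec, pole at 1 rad/s
out = [filt(1.0) for _ in range(100)]  # unit step response settles toward 1
```

The Tustin map preserves stability (the left half s plane maps inside the unit circle) and DC gain, which is why it is a common default when a continuous-domain design must be implemented at a given sample rate.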

  17. Comparative study of methods to calibrate the stiffness of a single-beam gradient-force optical tweezers over various laser trapping powers

    PubMed Central

    Sarshar, Mohammad; Wong, Winson T.; Anvari, Bahman

    2014-01-01

    Optical tweezers have become an important instrument in force measurements associated with various physical, biological, and biophysical phenomena. Quantitative use of optical tweezers relies on accurate calibration of the stiffness of the optical trap. Using the same optical tweezers platform operating at 1064 nm and beads with two different diameters, we present a comparative study of viscous drag force, equipartition theorem, Boltzmann statistics, and power spectral density (PSD) as methods for calibrating the stiffness of a single-beam gradient-force optical trap at trapping laser powers in the range of 0.05 to 1.38 W at the focal plane. The equipartition theorem and Boltzmann statistics methods demonstrate a linear stiffness with trapping laser powers up to 355 mW when used in conjunction with video position sensing means. The PSD of a trapped particle's Brownian motion or measurements of the particle displacement against known viscous drag forces can be reliably used for stiffness calibration of an optical trap over a greater range of trapping laser powers. The viscous drag stiffness calibration method produces results relevant to applications where the trapped particle undergoes large displacements and, at a given position sensing resolution, can be used for stiffness calibration at higher trapping laser powers than the PSD method. PMID:25375348
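The equipartition calibration mentioned above follows from (1/2)k⟨x²⟩ = (1/2)k_B T, so the trap stiffness is k = k_B T / var(x). A sketch with synthetic bead positions; the trap stiffness, temperature, and sample size below are invented for the demo:

```python
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stiffness_equipartition(positions_m, temperature_K=295.0):
    """Equipartition calibration: 0.5*k*<x^2> = 0.5*kB*T  =>  k = kB*T / var(x).

    positions_m: bead positions along one axis, in meters.
    """
    n = len(positions_m)
    mean = sum(positions_m) / n
    var = sum((x - mean) ** 2 for x in positions_m) / n
    return K_B * temperature_K / var

# Synthetic check: a bead in a trap of k = 5e-6 N/m (5 pN/um) at T = 295 K
# has Boltzmann-distributed positions x ~ N(0, kB*T/k).
k_true = 5e-6
rng = random.Random(0)
std = (K_B * 295.0 / k_true) ** 0.5
xs = [rng.gauss(0.0, std) for _ in range(100_000)]
k_est = stiffness_equipartition(xs)
```

The method needs only a position time series and the temperature, which is why it pairs naturally with video position sensing; its weakness, noted in the abstract, is that position noise inflates var(x) and biases k low at high powers where displacements are tiny.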

  18. Thermodynamic method for generating random stress distributions on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
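The core trick of drawing a random field with a prescribed power spectral density can be sketched with textbook spectral synthesis: give each wavenumber an amplitude following the target PSD and a random phase, then superpose. This is only an illustration of the generic technique, not the authors' algorithm or their earthquake-specific stress spectrum:

```python
import math
import random

def random_field_psd(n, beta=2.0, seed=0):
    """Generate a 1-D random field whose power spectral density falls as k^-beta.

    Superposes cosines with deterministic amplitudes |F(k)| ∝ k^(-beta/2)
    (so PSD ∝ k^-beta) and uniformly random phases. O(n^2), fine for small n.
    """
    rng = random.Random(seed)
    field = [0.0] * n
    for k in range(1, n // 2):
        amp = k ** (-beta / 2.0)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        for i in range(n):
            field[i] += amp * math.cos(2.0 * math.pi * k * i / n + phase)
    return field

stress = random_field_psd(256, beta=2.0)  # hypothetical power-law exponent
```

Each cosine sums to zero over a full period, so the field has zero mean by construction; the report's contribution is deriving which PSD exponent to use (from earthquake scaling relations) and truncating it so the variance stays finite despite equipartition.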

  19. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

    The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed for an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust, and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
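A minimal simulated-annealing loop shows the ingredients the abstract refers to: random proposals, Metropolis acceptance of uphill moves, and a cooling schedule. The objective below is an invented rough 1-D function, not an aircraft design problem, and the schedule parameters are arbitrary:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=2.0, cooling=0.995, iters=5000, seed=0):
    """Minimal simulated annealing: propose a random move, always accept downhill,
    accept uphill with probability exp(-delta/T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = t0
    for _ in range(iters):
        xn = x + rng.uniform(-step, step)
        fn = f(xn)
        if fn < fx or rng.random() < math.exp(-(fn - fx) / T):
            x, fx = xn, fn
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling
    return best_x, best_f

# Rough objective: a parabola plus a sine term that creates many local minima.
rough = lambda x: (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x)
x_best, f_best = simulated_annealing(rough, x0=-3.0)
```

The early high-temperature phase lets the walker climb out of the sine-induced local minima that would trap a greedy descent, which is exactly the multiple-minima problem the research targeted; the paper's modifications aimed to shrink the evaluation budget this loop consumes.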

  20. A Design Research Study of a Curriculum and Diagnostic Assessment System for a Learning Trajectory on Equipartitioning

    ERIC Educational Resources Information Center

    Confrey, Jere; Maloney, Alan

    2015-01-01

    Design research studies provide significant opportunities to study new innovations and approaches and how they affect the forms of learning in complex classroom ecologies. This paper reports on a two-week long design research study with twelve 2nd through 4th graders using curricular materials and a tablet-based diagnostic assessment system, both…

  1. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
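The first of the listed methods, full factorial design, simply enumerates every combination of factor levels. A small sketch; the bioprocess factors and levels below are hypothetical examples, not from any particular study:

```python
from itertools import product

def full_factorial(levels):
    """Full factorial design: one run for every combination of factor levels.

    `levels` maps factor name -> list of levels to test.
    """
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in product(*(levels[n] for n in names))]

# Hypothetical bioprocess factors: temperature (C), pH, substrate conc. (g/L).
design = full_factorial({
    "temperature": [30, 37],
    "pH": [6.5, 7.0, 7.5],
    "substrate": [5, 10],
})
# 2 * 3 * 2 = 12 experimental runs
```

The run count grows multiplicatively with factors and levels, which is why the fractional factorial and Plackett-Burman designs listed in the abstract exist: they deliberately test only a structured subset of these combinations.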

  2. Culture, Interface Design, and Design Methods for Mobile Devices

    NASA Astrophysics Data System (ADS)

    Lee, Kun-Pyo

    Aesthetic differences and similarities among cultures are clearly among the most important issues in cultural design. However, ever since products became knowledge-supporting tools, the visible elements of products have become more universal, so the invisible parts of products, such as interface and interaction, have grown more important. Cultural design should therefore be extended beyond material and phenomenal culture to the invisible elements of culture, such as people's conceptual models. This chapter explains how we address these invisible cultural elements in interface design and design methods by exploring users' cognitive styles and communication patterns in different cultures. Regarding cultural interface design, we examined users' conceptual models while they interacted with mobile phone and website interfaces, and observed cultural differences in task performance and viewing patterns, which appeared to agree with the cultural cognitive styles known as Holistic versus Analytic thought. Regarding design methods for culture, we explored how to localize design methods such as the focus group interview and the generative session for specific cultural groups; comparative experiments revealed cultural differences in participants' behaviors and performance in each design method and led us to suggest how to conduct these methods in East Asian cultures. Mobile Observation Analyzer and Wi-Pro, user research tools we invented to capture user behaviors and needs in the mobile context, are also introduced.

  3. Designing an experiment to measure cellular interaction forces

    NASA Astrophysics Data System (ADS)

    McAlinden, Niall; Glass, David G.; Millington, Owain R.; Wright, Amanda J.

    2013-09-01

    Optical trapping is a powerful tool in Life Science research and is becoming commonplace in many microscopy laboratories and facilities. The force applied by the laser beam on the trapped object can be accurately determined, allowing any external forces acting on the trapped object to be deduced. We aim to design a series of experiments that use an optical trap to measure and quantify the interaction force between immune cells. In order to cause minimum perturbation to the sample we plan to directly trap T cells and remove the need to introduce exogenous beads to the sample. This poses a series of challenges and raises questions that need to be answered in order to design a set of effective end-point experiments. A typical cell is large compared to the beads normally trapped and highly non-uniform: can we reliably trap such objects and prevent them from rolling and re-orientating? In this paper we show how a spatial light modulator can produce a triple-spot trap, as opposed to a single-spot trap, giving complete control over the object's orientation and preventing it from rolling due, for example, to Brownian motion. To use an optical trap as a force transducer to measure an external force you must first have a reliably calibrated system. The optical trapping force is typically measured either by applying the equipartition theorem to the observed Brownian motion of the trapped object or by using an escape force method, e.g., the viscous drag force method. In this paper we examine the relationship between force and displacement, as well as measuring the maximum displacement from the equilibrium position before an object falls out of the trap, hence determining the conditions under which the different calibration methods should be applied.
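The viscous drag calibration mentioned above balances a known Stokes drag F = 6πηrv against the trap restoring force kx. A sketch under the usual assumptions (spherical bead, low Reynolds number, far from surfaces); all numerical values below are hypothetical, chosen only to give a plausible piconewton-scale answer:

```python
import math

def stiffness_from_drag(radius_m, velocity_m_s, displacement_m, viscosity=8.9e-4):
    """Viscous drag calibration of trap stiffness.

    Translate the sample at known speed v; the Stokes drag F = 6*pi*eta*r*v
    displaces the trapped sphere by x from the trap center, so k = F / x.
    Default viscosity is water near 25 C, in Pa*s.
    """
    drag_force = 6.0 * math.pi * viscosity * radius_m * velocity_m_s
    return drag_force / displacement_m

# Example: a 1-um-diameter bead dragged at 100 um/s, displaced 150 nm.
k = stiffness_from_drag(radius_m=0.5e-6,
                        velocity_m_s=100e-6,
                        displacement_m=150e-9)  # ~5.6e-6 N/m, i.e. ~5.6 pN/um
```

For a non-spherical object like a T cell the Stokes formula no longer applies directly, which is one reason the paper measures the force-displacement relationship empirically rather than assuming it.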

  4. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods; it may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.

  5. A flexible layout design method for passive micromixers.

    PubMed

    Deng, Yongbo; Liu, Zhenyu; Zhang, Ping; Liu, Yongshun; Gao, Qingyong; Wu, Yihui

    2012-10-01

    This paper discusses a flexible layout design method for passive micromixers based on the topology optimization of fluidic flows. Unlike the trial-and-error method, this method obtains the detailed layout of a passive micromixer according to the desired mixing performance by solving a topology optimization problem. The dependence on the experience of the designer is therefore weakened when this method is used to design a passive micromixer with acceptable mixing performance. Several design disciplines for passive micromixers are considered to demonstrate the flexibility of the layout design method. These design disciplines include the approximation of the real 3D micromixer, manufacturing feasibility, spatially periodic design, and the effects of the Péclet number and Reynolds number on the designs obtained by this layout design method. The capability of this design method is validated by several comparisons between the obtained layouts and the optimized designs in recently published literature, where the value of the mixing measure is improved by up to 40.4% for one cycle of the micromixer.

  6. Computer-Aided Drug Design Methods.

    PubMed

    Yu, Wenbo; MacKerell, Alexander D

    2017-01-01

    Computational approaches are useful tools to interpret and guide experiments to expedite the antibiotic drug design process. Structure-based drug design (SBDD) and ligand-based drug design (LBDD) are the two general types of computer-aided drug design (CADD) approaches in existence. SBDD methods analyze macromolecular target 3-dimensional structural information, typically of proteins or RNA, to identify key sites and interactions that are important for their respective biological functions. Such information can then be utilized to design antibiotic drugs that can compete with essential interactions involving the target and thus interrupt the biological pathways essential for survival of the microorganism(s). LBDD methods focus on known antibiotic ligands for a target to establish a relationship between their physiochemical properties and antibiotic activities, referred to as a structure-activity relationship (SAR), information that can be used for optimization of known drugs or guide the design of new drugs with improved activity. In this chapter, standard CADD protocols for both SBDD and LBDD will be presented with a special focus on methodologies and targets routinely studied in our laboratory for antibiotic drug discoveries.

  7. General method for designing wave shape transformers.

    PubMed

    Ma, Hua; Qu, Shaobo; Xu, Zhuo; Wang, Jiafu

    2008-12-22

    An effective method for designing wave shape transformers (WSTs) is investigated by adopting the coordinate transformation theory. Following this method, devices that transform electromagnetic (EM) wave fronts from one style, with arbitrary shape and size, to another can be designed. To verify this method, three examples in 2D spaces are also presented. Compared with the methods proposed in other literature, this method offers a general procedure for designing WSTs, and is thus of great importance for the potential practical applications of such devices.

  8. Axisymmetric inlet minimum weight design method

    NASA Technical Reports Server (NTRS)

    Nadell, Shari-Beth

    1995-01-01

    An analytical method for determining the minimum weight design of an axisymmetric supersonic inlet has been developed. The goal of this method development project was to improve the ability to predict the weight of high-speed inlets in conceptual and preliminary design. The initial model was developed using information that was available from inlet conceptual design tools (e.g., the inlet internal and external geometries and pressure distributions). Stiffened shell construction was assumed. Mass properties were computed by analyzing a parametric cubic curve representation of the inlet geometry. Design loads and stresses were developed at analysis stations along the length of the inlet. The equivalent minimum structural thicknesses for both shell and frame structures required to support the maximum loads produced by various load conditions were then determined. Preliminary results indicated that inlet hammershock pressures produced the critical design load condition for a significant portion of the inlet. By improving the accuracy of inlet weight predictions, the method will improve the fidelity of propulsion and vehicle design studies and increase the accuracy of weight versus cost studies.

  9. Methodical Design of Software Architecture Using an Architecture Design Assistant (ArchE)

    DTIC Science & Technology

    2005-04-01

    Felix Bachmann and Mark Klein. Quality requirements and constraints are most important for architecture design. Here's some evidence: if the only concern is

  10. Software Design Methods for Real-Time Systems

    DTIC Science & Technology

    1989-12-01

    This module describes the concepts and methods used in the software design of real-time systems. It outlines the characteristics of real-time systems, describes the role of software design in real-time system development, surveys and compares some software design methods for real-time systems, and

  11. A new method named as Segment-Compound method of baffle design

    NASA Astrophysics Data System (ADS)

    Qin, Xing; Yang, Xiaoxu; Gao, Xin; Liu, Xishuang

    2017-02-01

As observation demands have increased, so have the requirements on lens imaging quality. A Segment-Compound baffle design method is proposed in this paper. Three traditional methods of baffle design can be characterized as Inside-to-Outside, Outside-to-Inside, and Mirror Symmetry. For a transmission-type optical system, all four methods were used to design its stray-light suppression structure. The structures were then modeled and simulated with SolidWorks, CAXA, and TracePro, and point source transmittance (PST) curves were obtained to describe their performance. The results show that the Segment-Compound method suppresses stray light more effectively; moreover, it is easy to realize and requires no special materials.

  12. Game Methodology for Design Methods and Tools Selection

    ERIC Educational Resources Information Center

    Ahmad, Rafiq; Lahonde, Nathalie; Omhover, Jean-françois

    2014-01-01

    Design process optimisation and intelligence are the key words of today's scientific community. A proliferation of methods has made design a convoluted area. Designers are usually afraid of selecting one method/tool over another and even expert designers may not necessarily know which method is the best to use in which circumstances. This…

  13. Educating Instructional Designers: Different Methods for Different Outcomes.

    ERIC Educational Resources Information Center

    Rowland, Gordon; And Others

    1994-01-01

    Suggests new methods of teaching instructional design based on literature reviews of other design fields including engineering, architecture, interior design, media design, and medicine. Methods discussed include public presentations, visiting experts, competitions, artifacts, case studies, design studios, and internships and apprenticeships.…

  14. Relationships between the generalized functional method and other methods of nonimaging optical design.

    PubMed

    Bortz, John; Shatz, Narkis

    2011-04-01

    The recently developed generalized functional method provides a means of designing nonimaging concentrators and luminaires for use with extended sources and receivers. We explore the mathematical relationships between optical designs produced using the generalized functional method and edge-ray, aplanatic, and simultaneous multiple surface (SMS) designs. Edge-ray and dual-surface aplanatic designs are shown to be special cases of generalized functional designs. In addition, it is shown that dual-surface SMS designs are closely related to generalized functional designs and that certain computational advantages accrue when the two design methods are combined. A number of examples are provided. © 2011 Optical Society of America

  15. Designing Class Methods from Dataflow Diagrams

    NASA Astrophysics Data System (ADS)

    Shoval, Peretz; Kabeli-Shani, Judith

A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of method design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support a user task. The components and the process logic of each transaction are described in detail using pseudocode. Then each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods, and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.
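The decomposition into method types can be illustrated with a small sketch. The class names, methods, and transaction below are invented for illustration and are not FOOM's own notation:

```python
# Illustrative sketch of FOOM-style method types: basic methods handle a
# class's own data, application-specific methods hold domain logic, and a
# main transaction (control) method expresses the process logic by
# sending messages to the other two.

class Customer:
    def __init__(self):
        self._orders = []

    # Basic method: elementary write on the class's own data.
    def add_order(self, order):
        self._orders.append(order)

    # Basic method: elementary read.
    def get_orders(self):
        return list(self._orders)

class Order:
    def __init__(self, items):
        self.items = items

    # Application-specific method: domain logic for this transaction.
    def total(self, price_list):
        return sum(price_list[i] for i in self.items)

# Main transaction (control) method: the process logic of one OO-DFD
# transaction, expressed as messages to class methods.
def place_order_transaction(customer, items, price_list):
    order = Order(items)
    customer.add_order(order)          # message to a basic method
    return order.total(price_list)     # message to an application method
```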

  16. In silico methods for design of biological therapeutics.

    PubMed

    Roy, Ankit; Nair, Sanjana; Sen, Neeladri; Soni, Neelesh; Madhusudhan, M S

    2017-12-01

    It has been twenty years since the first rationally designed small molecule drug was introduced into the market. Since then, we have progressed from designing small molecules to designing biotherapeutics. This class of therapeutics includes designed proteins, peptides and nucleic acids that could more effectively combat drug resistance and even act in cases where the disease is caused because of a molecular deficiency. Computational methods are crucial in this design exercise and this review discusses the various elements of designing biotherapeutic proteins and peptides. Many of the techniques discussed here, such as the deterministic and stochastic design methods, are generally used in protein design. We have devoted special attention to the design of antibodies and vaccines. In addition to the methods for designing these molecules, we have included a comprehensive list of all biotherapeutics approved for clinical use. Also included is an overview of methods that predict the binding affinity, cell penetration ability, half-life, solubility, immunogenicity and toxicity of the designed therapeutics. Biotherapeutics are only going to grow in clinical importance and are set to herald a new generation of disease management and cure. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  17. The application of mixed methods designs to trauma research.

    PubMed

    Creswell, John W; Zhang, Wanqing

    2009-12-01

    Despite the use of quantitative and qualitative data in trauma research and therapy, mixed methods studies in this field have not been analyzed to help researchers designing investigations. This discussion begins by reviewing four core characteristics of mixed methods research in the social and human sciences. Combining these characteristics, the authors focus on four select mixed methods designs that are applicable in trauma research. These designs are defined and their essential elements noted. Applying these designs to trauma research, a search was conducted to locate mixed methods trauma studies. From this search, one sample study was selected, and its characteristics of mixed methods procedures noted. Finally, drawing on other mixed methods designs available, several follow-up mixed methods studies were described for this sample study, enabling trauma researchers to view design options for applying mixed methods research in trauma investigations.

  18. Reliability Methods for Shield Design Process

    NASA Technical Reports Server (NTRS)

    Tripathi, R. K.; Wilson, J. W.

    2002-01-01

Providing protection against the hazards of space radiation is a major challenge to the exploration and development of space. The great cost of added radiation shielding is a potential limiting factor in deep space operations. In this enabling technology, we have developed methods for optimized shield design over multi-segmented missions involving multiple work and living areas in the transport and duty phases of space missions. The total shield mass over all pieces of equipment and habitats is optimized subject to career dose and dose rate constraints. An important component of this technology is the estimation of the two most commonly identified uncertainties in radiation shield design: the shielding properties of the materials used, and the biological response of the astronaut to the radiation leaking through the materials into the living space. The largest uncertainty, of course, is in the biological response, especially to high charge and energy (HZE) ions of the galactic cosmic rays. These uncertainties are blended with the optimization design procedure to formulate reliability-based methods for shield design processes. The details of the methods will be discussed.

  19. An overview of very high level software design methods

    NASA Technical Reports Server (NTRS)

    Asdjodi, Maryam; Hooper, James W.

    1988-01-01

Very high level design methods emphasize automatic transfer of requirements to formal design specifications, and/or may concentrate on automatic transformation of formal design specifications that include some semantic information of the system into machine-executable form. Very high level design methods range from general, domain-independent methods to approaches implementable for specific applications or domains. Different approaches to higher-level software design are being developed by applying AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming, and other methods. Though a given approach does not always fall exactly into any specific class, this paper provides a classification of very high level design methods, including examples for each class. These methods are analyzed and compared based on their basic approaches, strengths, and feasibility for future expansion toward automatic development of software systems.

  20. Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective

    ERIC Educational Resources Information Center

    Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith

    2010-01-01

    The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…

  1. Review of design optimization methods for turbomachinery aerodynamics

    NASA Astrophysics Data System (ADS)

    Li, Zhihui; Zheng, Xinqian

    2017-08-01

In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and "greener" but also developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization for solving real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and design-of-experiment methods, (3) gradient-based optimization methods for compressors and turbines, and (4) data mining techniques for Pareto fronts. We also present our own insights regarding current research trends and the future optimization of turbomachinery designs.

  2. Understanding exoplanet populations with simulation-based methods

    NASA Astrophysics Data System (ADS)

    Morehead, Robert Charles

The Kepler candidate catalog represents an unprecedented sample of exoplanet host stars. This dataset is ideal for probing the populations of exoplanet systems and exploring their architectures. Confirming transiting exoplanet candidates through traditional follow-up methods is challenging, especially for faint host stars. Most of Kepler's validated planets relied on statistical methods to separate true planets from false-positives. Multiple transiting planet systems (MTPSs) have previously been shown to have low false-positive rates, and over 850 planets in MTPSs have been statistically validated so far. We show that the period-normalized transit duration ratio (xi) offers additional information that can be used to establish the planetary nature of these systems. We briefly discuss the observed distribution of xi for the Q1-Q17 Kepler Candidate Search. We also use xi to develop a Bayesian statistical framework combined with Monte Carlo methods to determine which pairs of planet candidates in an MTPS are consistent with the planet hypothesis, for a sample of 862 MTPSs that include candidate planets, confirmed planets, and known false-positives. This analysis proves to be efficient and advantageous in that it only requires catalog-level bulk candidate properties and galactic population modeling to compute the probabilities of a myriad of feasible scenarios composed of background and companion stellar blends in the photometric aperture, without needing additional observational follow-up. Our results agree with the previous results of a low false-positive rate in the Kepler MTPSs. This implies, independently of any other estimates, that most of the MTPSs detected by Kepler are planetary in nature, but that a substantial fraction could be orbiting stars other than the putative target star, and therefore may be subject to significant error in the inferred planet parameters resulting from unknown or mismeasured stellar host attributes. We also apply approximate
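The xi statistic can be sketched directly. The definition below is the one commonly used in the multi-transiting literature (inner-to-outer duration ratio scaled by the cube root of the period ratio); the example values in the comment are hypothetical:

```python
def xi(t_dur_inner, p_inner, t_dur_outer, p_outer):
    """Period-normalized transit duration ratio.

    xi = (T_in / T_out) * (P_out / P_in)**(1/3). For two planets
    transiting the same star on near-circular, near-coplanar orbits the
    durations scale roughly as P**(1/3), so xi clusters near 1, whereas
    blends of unrelated stars show no such preference.
    """
    return (t_dur_inner / t_dur_outer) * (p_outer / p_inner) ** (1.0 / 3.0)

# Hypothetical pair whose durations scale exactly as P**(1/3):
# inner P = 10 d, T = 3 h; outer P = 80 d, T = 6 h
# -> xi = (3/6) * (80/10)**(1/3) = 1.0
```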

  3. Comparison of several asphalt design methods.

    DOT National Transportation Integrated Search

    1998-01-01

    This laboratory study compared several methods of selecting the optimum asphalt content of surface mixes. Six surface mixes were tested using the 50-blow Marshall design, the 75-blow Marshall design, two brands of SHRP gyratory compactors, and the U....

  4. Model reduction methods for control design

    NASA Technical Reports Server (NTRS)

    Dunipace, K. R.

    1988-01-01

    Several different model reduction methods are developed and detailed implementation information is provided for those methods. Command files to implement the model reduction methods in a proprietary control law analysis and design package are presented. A comparison and discussion of the various reduction techniques is included.

  5. A Method for Designing Conforming Folding Propellers

    NASA Technical Reports Server (NTRS)

    Litherland, Brandon L.; Patterson, Michael D.; Derlaga, Joseph M.; Borer, Nicholas K.

    2017-01-01

As the aviation vehicle design environment expands due to the influx of new technologies, new methods of conceptual design and modeling are required in order to meet the customer's needs. In the case of distributed electric propulsion (DEP), the use of high-lift propellers upstream of the wing leading edge augments lift at low speeds, enabling smaller wings with sufficient takeoff and landing performance. During cruise, however, these devices would normally contribute significant drag if left in a fixed or windmilling arrangement. Therefore, a design that stows the propeller blades is desirable. In this paper, we present a method for designing folding-blade configurations that conform to the nacelle surface when stowed. These folded designs maintain performance nearly identical to that of their straight, non-folding blade counterparts.

  6. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

This paper presents an integrated optimization design method in which uniform design, response surface methodology, and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to find the optimal solution subject to certain constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables: one qualitative variable and two quantitative variables. The modeling and optimization method performs well in improving the duct's aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.
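The three-stage pipeline (sample, fit a surrogate, search the surrogate) can be sketched as follows. The "CFD" evaluation is a stand-in analytic function, the design is reduced to one variable for brevity, and a plain grid stands in for a formal uniform design table; none of this reproduces the paper's duct model.

```python
import random

random.seed(1)

# 1) Stand-in for the expensive CFD evaluation (illustrative only);
#    one design variable x in [0, 1], true optimum at x = 0.3.
def cfd(x):
    return (x - 0.3) ** 2 + 0.1

# 2) Sample the design domain to build the database.
xs = [i / 20.0 for i in range(21)]
ys = [cfd(x) for x in xs]

# 3) Quadratic response surface y ~ a + b*x + c*x**2 via normal equations.
def fit_quadratic(xs, ys):
    m = [[0.0] * 3 for _ in range(3)]   # X^T X for features (1, x, x^2)
    v = [0.0] * 3                       # X^T y
    for x, y in zip(xs, ys):
        f = [1.0, x, x * x]
        for i in range(3):
            v[i] += f[i] * y
            for j in range(3):
                m[i][j] += f[i] * f[j]
    # Gauss-Jordan elimination (well-posed for this small fit).
    for i in range(3):
        p = m[i][i]
        for j in range(3):
            m[i][j] /= p
        v[i] /= p
        for k in range(3):
            if k != i:
                f = m[k][i]
                for j in range(3):
                    m[k][j] -= f * m[i][j]
                v[k] -= f * v[i]
    return v  # [a, b, c]

a, b, c = fit_quadratic(xs, ys)
surrogate = lambda x: a + b * x + c * x * x

# 4) A minimal elitist genetic algorithm searches the cheap surrogate.
pop = [random.random() for _ in range(30)]
for _ in range(60):
    pop.sort(key=surrogate)
    parents = pop[:10]                  # selection: keep the best
    children = []
    for _ in range(20):
        p1, p2 = random.sample(parents, 2)
        child = 0.5 * (p1 + p2) + random.gauss(0.0, 0.02)  # blend + mutation
        children.append(min(1.0, max(0.0, child)))
    pop = parents + children

best = min(pop, key=surrogate)          # should land near x = 0.3
```

Because the stand-in objective is itself quadratic, the response surface reproduces it essentially exactly; with a real CFD database the surrogate would only approximate the objective, which is the usual accuracy/cost trade-off of this approach.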

  7. Airbreathing hypersonic vehicle design and analysis methods

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.

    1996-01-01

    The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.

  8. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  9. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
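Two of the propagation methods named above, the method of moments (first-order) and Monte Carlo simulation, can be sketched on a toy weight model. The model, the design variables, and the uncertainty levels below are illustrative, not the paper's analysis code:

```python
import math
import random

random.seed(42)

# Toy stand-in for the aircraft analysis code: weight as a function of two
# configuration design variables (wing area S, aspect ratio AR).
def weight(S, AR):
    return 1000.0 + 12.0 * S + 40.0 * AR + 0.3 * S * AR

S0, AR0 = 150.0, 8.0          # nominal design point (hypothetical)
sS, sAR = 3.0, 0.2            # input standard deviations (hypothetical)

# Method of moments: first-order propagation with finite-difference slopes.
h = 1e-4
dW_dS = (weight(S0 + h, AR0) - weight(S0 - h, AR0)) / (2 * h)
dW_dAR = (weight(S0, AR0 + h) - weight(S0, AR0 - h)) / (2 * h)
sigma_mom = math.sqrt((dW_dS * sS) ** 2 + (dW_dAR * sAR) ** 2)

# Monte Carlo simulation: sample the inputs, push them through the model.
samples = [weight(random.gauss(S0, sS), random.gauss(AR0, sAR))
           for _ in range(20000)]
mean_mc = sum(samples) / len(samples)
sigma_mc = math.sqrt(sum((w - mean_mc) ** 2 for w in samples) / len(samples))
# For this nearly linear model the two estimates agree closely; in a
# discontinuous design space, as in the paper, they can diverge sharply.
```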

  10. New knowledge network evaluation method for design rationale management

    NASA Astrophysics Data System (ADS)

    Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao

    2015-01-01

Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives: the design rationale structure scale, association knowledge and reasoning ability, the degree of design justification support, and the degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge networks and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that can provide objective metrics and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to provide more effective guidance and support for the application and management of DR knowledge.

  11. Investigating the Use of Design Methods by Capstone Design Students at Clemson University

    ERIC Educational Resources Information Center

    Miller, W. Stuart; Summers, Joshua D.

    2013-01-01

    The authors describe a preliminary study to understand the attitude of engineering students regarding the use of design methods in projects to identify the factors either affecting or influencing the use of these methods by novice engineers. A senior undergraduate capstone design course at Clemson University, consisting of approximately fifty…

  12. 78 FR 67360 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...

  13. Acoustic Treatment Design Scaling Methods. Phase 2

    NASA Technical Reports Server (NTRS)

    Clark, L. (Technical Monitor); Parrott, T. (Technical Monitor); Jones, M. (Technical Monitor); Kraft, R. E.; Yu, J.; Kwan, H. W.; Beer, B.; Seybert, A. F.; Tathavadekar, P.

    2003-01-01

The ability to design, build and test miniaturized acoustic treatment panels on scale model fan rigs representative of full scale engines provides not only cost savings, but also an opportunity to optimize the treatment by allowing multiple tests. To use scale model treatment as a design tool, the impedance of the sub-scale liner must be known with confidence. This study was aimed at developing impedance measurement methods for high frequencies. A normal-incidence impedance tube method that extends the upper frequency range to 25,000 Hz without grazing-flow effects was evaluated. The free-field method was investigated as a potential high-frequency technique. The potential of the two-microphone in-situ impedance measurement method was evaluated in the presence of grazing flow. Difficulties in achieving the high-frequency goals were encountered in all methods. Results of developing a time-domain finite-difference resonator impedance model indicated that a reinterpretation of the empirical fluid-mechanical models used in the frequency-domain model for nonlinear resistance and mass reactance may be required. A scale-model treatment design that could be tested on the Universal Propulsion Simulator vehicle was proposed.
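The two-microphone idea can be sketched for the normal-incidence case using the standard transfer-function relation between the microphone pair and the reflection coefficient. The liner reflection coefficient, microphone positions, and frequency below are hypothetical, and the grazing-flow complications discussed above are ignored:

```python
import cmath

def reflection_from_h12(h12, k, x1, s):
    """Two-microphone transfer-function method, normal incidence.

    h12 = p2/p1 with mic 1 at distance x1 from the sample face and mic 2
    a spacing s closer to it; field model p(x) = e^{jkx} + R e^{-jkx}.
    """
    j = 1j
    return (cmath.exp(2 * j * k * x1)
            * (h12 - cmath.exp(-j * k * s))
            / (cmath.exp(j * k * s) - h12))

def impedance_from_reflection(r):
    """Normalized specific acoustic impedance at the sample face."""
    return (1 + r) / (1 - r)

# Synthetic check: build h12 from an assumed liner reflection coefficient
# (values are illustrative), then recover it with the formula above.
k = 2 * cmath.pi * 20000 / 343.0   # wavenumber at 20 kHz in air
x1, s = 0.045, 0.012               # mic positions in meters (hypothetical)
r_true = 0.6 * cmath.exp(0.4j)
p = lambda x: cmath.exp(1j * k * x) + r_true * cmath.exp(-1j * k * x)
h12 = p(x1 - s) / p(x1)
r_est = reflection_from_h12(h12, k, x1, s)
```

The method degrades when k*s approaches a multiple of pi, which is one reason pushing the upper frequency limit toward 25 kHz forces very small microphone spacings.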

  14. An XML-based method for astronomy software designing

    NASA Astrophysics Data System (ADS)

    Liao, Mingxue; Aili, Yusupu; Zhang, Jin

An XML-based method for standardizing software design is introduced, analyzed, and successfully applied to renovating the hardware and software of the digital clock at Urumqi Astronomical Station. The basic strategy for eliciting time information from the new digital clock, FT206, in the antenna control program is introduced. With FT206, the need to compute how many centuries have passed since a certain day using sophisticated formulas is eliminated, and it is no longer necessary to set the correct UT time on the computer controlling the antenna, because the year, month, and day are all deduced from the Julian day held in FT206 rather than from the computer time. With an XML-based method and standard for software design, various existing design methods are unified, communication and collaboration between developers are facilitated, and an Internet-based mode of software development becomes possible. The trend of development of the XML-based design method is predicted.

  15. A rapid method for soil cement design : Louisiana slope value method.

    DOT National Transportation Integrated Search

    1964-03-01

    The current procedure used by the Louisiana Department of Highways for laboratory design of cement stabilized soil base and subbase courses is taken from standard AASHO test methods, patterned after Portland Cement Association criteria. These methods...

  16. Novel parameter-based flexure bearing design method

    NASA Astrophysics Data System (ADS)

    Amoedo, Simon; Thebaud, Edouard; Gschwendtner, Michael; White, David

    2016-06-01

    A parameter study was carried out on the design variables of a flexure bearing to be used in a Stirling engine with a fixed axial displacement and a fixed outer diameter. A design method was developed in order to assist identification of the optimum bearing configuration. This was achieved through a parameter study of the bearing carried out with ANSYS®. The parameters varied were the number and the width of the arms, the thickness of the bearing, the eccentricity, the size of the starting and ending holes, and the turn angle of the spiral. Comparison was made between the different designs in terms of axial and radial stiffness, the natural frequency, and the maximum induced stresses. Moreover, the Finite Element Analysis (FEA) was compared to theoretical results for a given design. The results led to a graphical design method which assists the selection of flexure bearing geometrical parameters based on pre-determined geometric and material constraints.

  17. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

In this work, two algorithms for design space calculation, the overlapping method and the probability-based method, were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the simulation number, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be implemented in several kinds of commercial software without writing programs, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is more complex computationally, but it provides the reliability needed to ensure that the process indexes reach the standard within the acceptable probability threshold. In addition, there is no probability mutation at the edge of a design space computed by the probability-based method. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
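A minimal sketch of the probability-based calculation, using an invented response surface in place of the Codonopsis Radix data: at each grid point of the process parameters, simulated experimental error is added and the probability of reaching the standard is estimated; points exceeding the acceptable probability threshold form the design space. All numbers below (model, standard, error level, grid) are hypothetical, and the grid step is coarse for brevity.

```python
import random

random.seed(7)

# Hypothetical response surface for an extraction quality index (%)
# as a function of two process parameters (temperature, time).
def quality(temp, time):
    return 40 + 0.2 * temp + 0.5 * time - 0.002 * (temp - 70) ** 2

STANDARD = 60.0      # acceptance criterion on the index
SIGMA = 1.5          # simulated experimental error (std dev)
N_SIM = 10000        # simulation number, as in the paper
THRESHOLD = 0.90     # acceptable probability threshold

def prob_reaching_standard(temp, time):
    # Monte Carlo estimate of P(index + error >= standard).
    hits = sum(1 for _ in range(N_SIM)
               if quality(temp, time) + random.gauss(0.0, SIGMA) >= STANDARD)
    return hits / N_SIM

# Design space = grid points whose probability exceeds the threshold.
design_space = [(temp, time)
                for temp in range(50, 91, 5)
                for time in range(10, 41, 5)
                if prob_reaching_standard(temp, time) >= THRESHOLD]
```

Because the probability varies smoothly across the grid, there is no abrupt jump ("probability mutation") at the design space boundary, which is the advantage the abstract attributes to this method.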

  18. How to Construct a Mixed Methods Research Design.

    PubMed

    Schoonenboom, Judith; Johnson, R Burke

    2017-01-01

    This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.

  19. Project Lifespan-based Nonstationary Hydrologic Design Methods for Changing Environment

    NASA Astrophysics Data System (ADS)

    Xiong, L.

    2017-12-01

Under a changing environment, design floods must be associated with the design life period of a project to ensure that the hydrologic design is relevant to the project's operation, because the design value for a given exceedance probability over the project life period can differ significantly from that over other time periods of the same length due to the nonstationarity of the probability distributions. Several hydrologic design methods that take the design life period of a project into account have been proposed in recent years: the expected number of exceedances (ENE), design life level (DLL), equivalent reliability (ER), and average design life level (ADLL). Among the four methods compared, the ENE and ER methods are return period-based, while DLL and ADLL are risk/reliability-based methods that estimate design values for given probabilities of risk or reliability. However, the four methods can be unified under a general framework through a relationship transforming the so-called representative reliability (RRE) into the return period, namely m = 1/(1 - RRE). The nonstationary design quantiles and associated confidence intervals calculated by ENE, ER, and ADLL were very similar, since ENE and ER are special cases of, or have expressions similar to, ADLL. In particular, the design quantiles calculated by ENE and ADLL coincide when the return period equals the length of the design life. In addition, DLL can yield similar design values if the relationship between DLL and ER/ADLL return periods is considered. Furthermore, ENE, ER, and ADLL adapt well to either increasing or decreasing situations, yielding design quantiles that are neither too large nor too small.
This is important for applications of nonstationary hydrologic design methods in actual practice because of the concern of choosing the emerging
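The ENE rule mentioned above can be sketched numerically: find the design value whose expected number of exceedances over the n-year design life equals one. The nonstationary model below (an exponential distribution with a linearly trending scale) is purely illustrative, not a fitted flood distribution:

```python
import math

def exceedance_prob(z, year, scale0=100.0, trend=0.01):
    """Annual exceedance probability P(X_t > z) in year t under an
    illustrative nonstationary exponential flood model."""
    scale = scale0 * (1.0 + trend * year)   # magnitudes trend upward
    return math.exp(-z / scale)

def ene_design_value(n_years, target=1.0, scale0=100.0, trend=0.01):
    """Design value z solving sum_t P(X_t > z) = target (the ENE rule)."""
    lo, hi = 0.0, 1.0e5
    for _ in range(200):                    # bisection; ENE decreases in z
        mid = 0.5 * (lo + hi)
        ene = sum(exceedance_prob(mid, t, scale0, trend)
                  for t in range(1, n_years + 1))
        if ene > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With a stationary model (trend = 0) and a 50-year life, ENE reduces to
# the classical quantile for a 50-year return period: z = 100*ln(50).
z_stationary = ene_design_value(50, trend=0.0)
z_trending = ene_design_value(50)   # larger, reflecting the upward trend
```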

  20. A comparison of methods for DPLL loop filter design

    NASA Technical Reports Server (NTRS)

    Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.

    1986-01-01

Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second designs a filter that minimizes, in discrete time, a weighted combination of the variance of the phase error due to noise and the sum of squares of the deterministic phase error component; the third uses Kalman filter estimation theory to design a filter composed of a least-squares fading-memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare the different designs, covering stability, steady-state performance, and transient behavior of the loops. The design methodology is not critical when the loop update rate can be made high relative to the loop bandwidth, as the performance then approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.
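A second-order DPLL with a proportional-plus-integral loop filter, the discrete-time structure underlying the designs compared above, can be sketched as follows; the gains and update count are illustrative, not taken from the article:

```python
import math

# Minimal second-order DPLL sketch: phase detector, proportional-plus-
# integral loop filter, and NCO update.  Gains are illustrative.
K1, K2 = 0.3, 0.02          # proportional and integrator gains

def wrap(phi):              # wrap phase error to (-pi, pi]
    return math.atan2(math.sin(phi), math.cos(phi))

theta_hat = 0.0             # NCO phase estimate
acc = 0.0                   # integrator state
freq = 0.05                 # input frequency offset, rad/update
errors = []
for n in range(400):
    theta_in = freq * n              # incoming phase ramp
    e = wrap(theta_in - theta_hat)   # phase detector output
    acc += e                         # integrator tracks the offset
    u = K1 * e + K2 * acc            # loop filter output
    theta_hat += u                   # NCO update
    errors.append(e)
# Being type II, the loop drives the steady-state phase error of a
# frequency offset to zero; the integrator settles at freq / K2.
```

Lowering the update rate relative to the loop bandwidth (i.e., raising the gains) degrades this discrete loop relative to its continuous-time counterpart, which is the regime where the abstract says the choice of design method starts to matter.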

  1. Methods for Estimating Payload/Vehicle Design Loads

    NASA Technical Reports Server (NTRS)

    Chen, J. C.; Garba, J. A.; Salama, M. A.; Trubert, M. R.

    1983-01-01

    Several methods compared with respect to accuracy, design conservatism, and cost. Objective of survey: reduce time and expense of load calculation by selecting approximate method having sufficient accuracy for problem at hand. Methods generally applicable to dynamic load analysis in other aerospace and other vehicle/payload systems.

  2. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe in detail GATool, the evolutionary algorithm and software package used in this work. We then use a set of test problems to assess its performance in terms of computational resources, the quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design a model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe in detail REPA, a new constrained optimization method based on repairing, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained

  3. Design of large Francis turbine using optimal methods

    NASA Astrophysics Data System (ADS)

    Flores, E.; Bornard, L.; Tomas, L.; Liu, J.; Couston, M.

    2012-11-01

    Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has been especially involved in the development and equipment of the largest power plants in the world: Three Gorges (China - 32×767 MW - 61 to 113 m), Itaipu (Brazil - 20×750 MW - 98.7 to 127 m) and Xiangjiaba (China - 8×812 MW - 82.5 to 113.6 m - in erection). Many new projects are under study to equip new power plants with Francis turbines in order to meet an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using the state of the art in computation methods and the latest technologies in model testing, as well as maximum feedback from Jumbo plants already in operation. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used thanks to the growing computation capacity of HPC clusters: the intensive use of such optimization methods at the turbine design stage makes it possible to reach very high levels of performance, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.

  4. Innovative design method of automobile profile based on Fourier descriptor

    NASA Astrophysics Data System (ADS)

    Gao, Shuyong; Fu, Chaoxing; Xia, Fan; Shen, Wei

    2017-10-01

    Aiming at the innovative design of automobile side contours, this paper presents a design method for the vehicle side profile based on the Fourier descriptor. The design flow of the method is: pre-processing, coordinate extraction, standardization, discrete Fourier transform, simplification of the Fourier descriptors, descriptor exchange for innovation, and inverse Fourier transform to obtain the innovative design contour. Two innovation concepts are presented, descriptor ("gene") exchange within the same species of profile and exchange between different species, and the contours of the innovative designs are obtained for each. A three-dimensional model of a car is built by referring to the profile curve obtained by exchanging xenogeneic genes. The feasibility of the proposed method is verified in various respects.
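    The pipeline in this abstract (contour coordinates → complex encoding → DFT → descriptor exchange → inverse DFT) can be sketched in a few lines. This is a minimal illustration, assuming closed contours sampled at 64 points; the circle and ellipse below stand in for two "parent" side profiles.

    ```python
    import numpy as np

    def fourier_descriptors(contour_xy):
        """DFT of a closed contour with points encoded as x + iy."""
        z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
        return np.fft.fft(z)

    def exchange_descriptors(desc_a, desc_b, k):
        """'Gene exchange': copy the k lowest-frequency descriptors of
        contour B (which carry its coarse shape) into contour A."""
        mixed = desc_a.copy()
        mixed[:k + 1] = desc_b[:k + 1]   # DC term and positive frequencies
        mixed[-k:] = desc_b[-k:]         # matching negative frequencies
        return mixed

    def reconstruct(desc):
        """Inverse DFT back to an (n, 2) array of contour points."""
        z = np.fft.ifft(desc)
        return np.column_stack([z.real, z.imag])

    t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    circle = np.column_stack([np.cos(t), np.sin(t)])
    ellipse = np.column_stack([2.0 * np.cos(t), 0.5 * np.sin(t)])
    hybrid = reconstruct(exchange_descriptors(
        fourier_descriptors(circle), fourier_descriptors(ellipse), k=3))
    ```

    Truncating to the low-order descriptors before the exchange corresponds to the "simplified Fourier descriptor" step: it keeps the coarse shape and discards fine detail.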

  5. New reversing design method for LED uniform illumination.

    PubMed

    Wang, Kai; Wu, Dan; Qin, Zong; Chen, Fei; Luo, Xiaobing; Liu, Sheng

    2011-07-04

    In light-emitting diode (LED) applications, a key issue is how to optimize the light intensity distribution curve (LIDC) and design the corresponding optical component to achieve uniform illumination when the distance-height ratio (DHR) is given. A new reversing design method is proposed to solve this problem, comprising the design and optimization of an LIDC that achieves highly uniform illumination and a new freeform-lens algorithm that generates the required LIDC from the LED light source. Using this method, two new LED modules integrated with freeform lenses were successfully designed for slim direct-lit LED backlighting with a thickness of 10 mm; the uniformity of illuminance increases from 0.446 to 0.915 and from 0.155 to 0.887 for DHRs of 2 and 3, respectively. Moreover, the number of new LED modules dramatically decreases to 1/9 of the traditional LED modules while achieving similar illumination uniformity in backlighting. This new method therefore provides a practical and simple way to perform optical design for uniform LED illumination when the DHR is much larger than 1.
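    The role of the DHR can be illustrated with a bare-bones irradiance calculation for an unlensed Lambertian LED row. This is only a toy model (our assumption, with arbitrary units) of why a large DHR makes direct-lit uniformity hard; the paper's contribution is the freeform lens that reshapes the LIDC, which is not modeled here.

    ```python
    import math

    def illuminance(x, leds_x, h, m=1.0):
        """Illuminance at point x on the target plane from LEDs at height h
        with generalized Lambertian intensity I(theta) ∝ cos^m(theta):
        E = sum of I(theta) * cos(theta) / r^2 over the sources."""
        total = 0.0
        for lx in leds_x:
            r2 = (x - lx) ** 2 + h ** 2
            cos_t = h / math.sqrt(r2)
            total += (cos_t ** m) * cos_t / r2
        return total

    def uniformity(leds_x, h, span, n=101):
        """Min/max illuminance ratio over the target span (1 = perfectly flat)."""
        xs = [span * i / (n - 1) - span / 2.0 for i in range(n)]
        samples = [illuminance(x, leds_x, h) for x in xs]
        return min(samples) / max(samples)

    leds = [-1.5, -0.5, 0.5, 1.5]                    # pitch 1, arbitrary units
    u_low_dhr = uniformity(leds, h=2.0, span=3.0)    # DHR = 0.5
    u_high_dhr = uniformity(leds, h=0.5, span=3.0)   # DHR = 2
    ```

    For the same array, shrinking the mixing height (raising the DHR) produces hot spots above the LEDs and degrades the min/max uniformity, which is the regime the reversing design method targets.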

  6. The synthesis method for design of electron flow sources

    NASA Astrophysics Data System (ADS)

    Alexahin, Yu I.; Molodozhenzev, A. Yu

    1997-01-01

    The synthesis method for designing a relativistic magnetically focused beam source is described in this paper. It allows finding the shape of the electrodes necessary to produce laminar space-charge flows. Electron guns with shielded cathodes designed with this method were analyzed using the EGUN code. The results show agreement between the synthesis and analysis calculations [1]. This method of electron gun calculation may also be applied to immersed electron flows, which are of interest for the EBIS electron gun design.

  7. Standardized Radiation Shield Design Methods: 2005 HZETRN

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.

    2006-01-01

    Research conducted by the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented to capture a well-defined algorithm for the engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code available for these early design processes. In this paper, we review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.

  8. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
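    The parameter-selection idea can be sketched with a toy model. Everything numeric below (the cost coefficients, the 4C battery-discharge assumption, the candidate grids) is hypothetical and invented for illustration; the point is only the structure: search over battery size, engine rating and power split subject to a power-demand constraint.

    ```python
    import itertools

    def life_cycle_cost(battery_kwh, engine_kw, split):
        """Toy life-cycle cost: battery and engine acquisition cost plus a
        petroleum cost that falls as more power is drawn from the battery.
        All coefficients are hypothetical."""
        acquisition = 150.0 * battery_kwh + 40.0 * engine_kw
        petroleum = 2000.0 * (1.0 - split) + 500.0 * split * (20.0 / battery_kwh)
        return acquisition + petroleum

    def optimize(peak_demand_kw=80.0):
        """Exhaustive search over a coarse design grid; returns
        (cost, battery_kwh, engine_kw, split) of the cheapest feasible design."""
        best = None
        for b, e, s in itertools.product(
                [10, 20, 30, 40],        # battery capacity, kWh
                [20, 40, 60, 80],        # heat engine rating, kW
                [0.2, 0.4, 0.6, 0.8]):   # fraction of power from the battery
            # constraint: combined sources must meet the peak power demand
            # (battery power taken as a 4C discharge of its capacity)
            if s * 4.0 * b + e < peak_demand_kw:
                continue
            cost = life_cycle_cost(b, e, s)
            if best is None or cost < best[0]:
                best = (cost, b, e, s)
        return best
    ```

    A real study would replace the grid with a nonlinear programming solver and the toy cost with a vehicle simulation, which is exactly why the abstract stresses smooth cost and constraint expressions.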

  9. Exploration of Advanced Probabilistic and Stochastic Design Methods

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.

    2003-01-01

    The primary objective of the three year research effort was to explore advanced, non-deterministic aerospace system design methods that may have relevance to designers and analysts. The research pursued emerging areas in design methodology and leveraged current fundamental research in the areas of design decision-making, probabilistic modeling, and optimization. The specific focus of the three year investigation was oriented toward methods to identify and analyze emerging aircraft technologies in a consistent and complete manner, and to explore means to make optimal decisions based on this knowledge in a probabilistic environment. The research efforts were classified into two main areas. First, Task A of the grant had the objective of conducting research into the relative merits of possible approaches that account for both multiple criteria and uncertainty in design decision-making. In particular, in the final year of research, the focus was on the comparison and contrasting between the three methods researched. Specifically, these three are the Joint Probabilistic Decision-Making (JPDM) technique, Physical Programming, and Dempster-Shafer (D-S) theory. The next element of the research, as contained in Task B, was focused upon exploration of the Technology Identification, Evaluation, and Selection (TIES) methodology developed at ASDL, especially with regard to identification of research needs in the baseline method through implementation exercises. The end result of Task B was the documentation of the evolution of the method with time and a technology transfer to the sponsor regarding the method, such that an initial capability for execution could be obtained by the sponsor. Specifically, the results of year 3 efforts were the creation of a detailed tutorial for implementing the TIES method. Within the tutorial package, templates and detailed examples were created for learning and understanding the details of each step. For both research tasks, sample files and

  10. Designs of Empirical Evaluations of Nonexperimental Methods in Field Settings.

    PubMed

    Wong, Vivian C; Steiner, Peter M

    2018-01-01

    Over the last three decades, a research design has emerged to evaluate the performance of nonexperimental (NE) designs and design features in field settings. It is called the within-study comparison (WSC) approach or the design replication study. In the traditional WSC design, treatment effects from a randomized experiment are compared to those produced by an NE approach that shares the same target population. The nonexperiment may be a quasi-experimental design, such as a regression-discontinuity or an interrupted time-series design, or an observational study approach that includes matching methods, standard regression adjustments, and difference-in-differences methods. The goals of the WSC are to determine whether the nonexperiment can replicate results from a randomized experiment (which provides the causal benchmark estimate), and the contexts and conditions under which these methods work in practice. This article presents a coherent theory of the design and implementation of WSCs for evaluating NE methods. It introduces and identifies the multiple purposes of WSCs, required design components, common threats to validity, design variants, and causal estimands of interest in WSCs. It highlights two general approaches for empirical evaluations of methods in field settings, WSC designs with independent and dependent benchmark and NE arms. This article highlights advantages and disadvantages for each approach, and conditions and contexts under which each approach is optimal for addressing methodological questions.

  11. Mixed Methods Research Designs in Counseling Psychology

    ERIC Educational Resources Information Center

    Hanson, William E.; Creswell, John W.; Clark, Vicki L. Plano; Petska, Kelly S.; Creswell, David J.

    2005-01-01

    With the increased popularity of qualitative research, researchers in counseling psychology are expanding their methodologies to include mixed methods designs. These designs involve the collection, analysis, and integration of quantitative and qualitative data in a single or multiphase study. This article presents an overview of mixed methods…

  12. Demystifying Mixed Methods Research Design: A Review of the Literature

    ERIC Educational Resources Information Center

    Caruth, Gail D.

    2013-01-01

    Mixed methods research evolved in response to the observed limitations of both quantitative and qualitative designs and is a more complex method. The purpose of this paper was to examine mixed methods research in an attempt to demystify the design thereby allowing those less familiar with its design an opportunity to utilize it in future research.…

  13. A novel method for multifactorial bio-chemical experiments design based on combinational design theory.

    PubMed

    Wang, Xun; Sun, Beibei; Liu, Boyang; Fu, Yaping; Zheng, Pan

    2017-01-01

    Experimental design focuses on describing or explaining the multifactorial interactions that are hypothesized to reflect the variation. The design introduces conditions that may directly affect the variation, where particular conditions are purposely selected for observation. Combinatorial design theory deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. In this work, borrowing the concept of "balance" from combinatorial design theory, a novel method for designing multifactorial bio-chemical experiments is proposed, in which balanced templates from combinatorial design are used to select the conditions for observation. Balanced experimental data that cover all the influencing factors of the experiments can be obtained for further processing, for example as a training set for machine-learning models. Finally, software based on the proposed method is developed for designing experiments that cover the influencing factors a specified number of times.
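    A minimal example of a balanced template is a Latin square: each treatment appears exactly once in every row and every column, so all three factors (row, column, treatment) are covered equally often. The sketch below (ours, not the authors' software) generates such a template as a run list.

    ```python
    from itertools import product

    def latin_square(n):
        """n x n Latin square: cell (i, j) gets treatment (i + j) mod n, so
        each treatment appears exactly once in every row and every column."""
        return [[(i + j) % n for j in range(n)] for i in range(n)]

    def runs(n):
        """Experimental runs as (factor_A_level, factor_B_level, treatment)
        triples covering every (A, B) combination exactly once."""
        sq = latin_square(n)
        return [(i, j, sq[i][j]) for i, j in product(range(n), range(n))]
    ```

    For more factors or mixed level counts one would use richer balanced structures (orthogonal arrays, balanced incomplete block designs), but the selection principle is the same: every influencing factor is observed a fixed number of times at each level.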

  14. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...

  15. Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale

    1997-01-01

    The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.

  16. A new method for designing dual foil electron beam forming systems. I. Introduction, concept of the method

    NASA Astrophysics Data System (ADS)

    Adrich, Przemysław

    2016-05-01

    In Part I of this work, existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor-intensive task, as corrections accounting for the effects not included in the analytical models have to be calculated separately and applied in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of the system performance as a function of the foil parameters. The new method, while computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
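    A drastically simplified version of such a scan can be sketched as follows. This toy Monte Carlo (not the authors' tools) replaces the full physics with a Gaussian angular kick per foil whose RMS scales as the square root of the foil thickness (a multiple-scattering scaling), then scores the fluence flatness at the window and scans the secondary-foil thickness; all constants and geometry are hypothetical.

    ```python
    import math
    import random

    def window_radii(t1, t2, drift1=0.3, drift2=1.0, n=20000, seed=1):
        """Radial electron positions at the exit window after a kick at the
        primary foil (thickness t1, mm), a drift, a kick at the secondary
        foil (t2), and a final drift. Distances in metres; k is an
        invented scattering constant (rad per sqrt(mm))."""
        rng = random.Random(seed)
        k = 0.05
        radii = []
        for _ in range(n):
            tx = rng.gauss(0.0, k * math.sqrt(t1))
            ty = rng.gauss(0.0, k * math.sqrt(t1))
            x, y = drift1 * tx, drift1 * ty
            tx += rng.gauss(0.0, k * math.sqrt(t2))
            ty += rng.gauss(0.0, k * math.sqrt(t2))
            x, y = x + drift2 * tx, y + drift2 * ty
            radii.append(math.hypot(x, y))
        return radii

    def flatness(radii, r_use=0.08, nbins=8):
        """Edge-to-centre fluence ratio over the usable radius (1 = flat)."""
        counts = [0] * nbins
        for r in radii:
            b = int(r / r_use * nbins)
            if b < nbins:
                counts[b] += 1
        # fluence per unit area: annulus i has area proportional to 2i + 1
        dens = [counts[i] / (2 * i + 1) for i in range(nbins)]
        return dens[-1] / dens[0]

    # systematic scan over the secondary-foil thickness, primary foil fixed
    scan = {t2: flatness(window_radii(1.0, t2)) for t2 in (0.5, 1.0, 2.0, 4.0)}
    best_t2 = min(scan, key=lambda t: abs(scan[t] - 1.0))
    ```

    The real design problem adds energy loss, foil geometry and dose-rate constraints, which is why the authors rely on full Monte Carlo transport rather than a closed-form model.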

  17. 14 CFR 161.9 - Designation of noise description methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Designation of noise description methods... TRANSPORTATION (CONTINUED) AIRPORTS NOTICE AND APPROVAL OF AIRPORT NOISE AND ACCESS RESTRICTIONS General Provisions § 161.9 Designation of noise description methods. For purposes of this part, the following...

  18. 14 CFR 161.9 - Designation of noise description methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Designation of noise description methods... TRANSPORTATION (CONTINUED) AIRPORTS NOTICE AND APPROVAL OF AIRPORT NOISE AND ACCESS RESTRICTIONS General Provisions § 161.9 Designation of noise description methods. For purposes of this part, the following...

  19. 77 FR 55832 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of a New Equivalent Method

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-11

    ... Methods: Designation of a New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method for monitoring ambient air quality. SUMMARY: Notice is... part 53, a new equivalent method for measuring concentrations of PM 2.5 in the ambient air. FOR FURTHER...

  20. Simplified Design Method for Tension Fasteners

    NASA Astrophysics Data System (ADS)

    Olmstead, Jim; Barker, Paul; Vandersluis, Jonathan

    2012-07-01

    The design of tension-fastened joints has traditionally been an iterative tradeoff between separation and strength requirements. This paper presents equations for the maximum external load that a fastened joint can support and the optimal preload needed to achieve this load. The equations, based on linear joint theory, account for separation and strength safety factors and for variations in joint geometry, materials, preload, load-plane factor and thermal loading. The strength-normalized versions of the equations are applicable to any fastener and can be plotted to create a "Fastener Design Space" (FDS). Any combination of preload and tension that falls within the FDS represents a safe joint design. The equation for the FDS apex gives the optimal preload and load capacity of a set of joints. The method can be used for preliminary design or to evaluate multiple pre-existing joints.
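    The structure of the method can be sketched with the standard linear-joint relations: the bolt carries F_i + C·P of the external load P (C is the joint stiffness ratio), and the joint separates when (1 − C)·P exceeds the preload F_i. The safety-factor placement below is our assumption for illustration, not necessarily the paper's exact formulation.

    ```python
    def joint_capacity(preload, proof_load, c, sf_sep=1.2, sf_str=1.25):
        """Maximum external tension a joint supports under linear joint
        theory: the lesser of the separation-limited and strength-limited
        loads, each derated by its safety factor."""
        p_sep = preload / ((1.0 - c) * sf_sep)            # separation limit
        p_str = (proof_load - preload) / (c * sf_str)     # bolt strength limit
        return min(p_sep, p_str)

    def optimal_preload(proof_load, c, sf_sep=1.2, sf_str=1.25):
        """Preload at the 'apex' where the two limits intersect, which
        maximizes the usable external load."""
        return proof_load / (1.0 + c * sf_str / ((1.0 - c) * sf_sep))
    ```

    Plotting joint_capacity against preload traces the two sides of the "Fastener Design Space": the separation line rises with preload, the strength line falls, and the apex is their crossing point.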

  1. Control system design method

    DOEpatents

    Wilson, David G [Tijeras, NM; Robinett, III, Rush D.

    2012-02-21

    A control system design method and concomitant control system comprising representing a physical apparatus to be controlled as a Hamiltonian system, determining elements of the Hamiltonian system representation which are power generators, power dissipators, and power storage devices, analyzing stability and performance of the Hamiltonian system based on the results of the determining step and determining necessary and sufficient conditions for stability of the Hamiltonian system, creating a stable control system based on the results of the analyzing step, and employing the resulting control system to control the physical apparatus.
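    The bookkeeping of power generators, dissipators and storage devices can be illustrated on the simplest Hamiltonian system, a mass-spring-damper. This toy example is ours, not the patent's procedure: the stored energy H = ½mv² + ½kx² changes exactly by the generator input power u·v minus the dissipated power c·v², and with no generator the energy can only fall, which is the essence of the stability argument.

    ```python
    def simulate(m=1.0, k=4.0, c=0.5, u=lambda t, x, v: 0.0,
                 dt=1e-4, steps=100000):
        """Semi-implicit Euler integration of m*x'' + c*x' + k*x = u.
        Returns (change in stored energy H, integral of u*v - c*v**2),
        which the power-balance identity says should be equal."""
        x, v, t = 1.0, 0.0, 0.0
        h0 = 0.5 * m * v * v + 0.5 * k * x * x   # storage: mass + spring
        net_power_integral = 0.0
        for _ in range(steps):
            f = u(t, x, v)
            v += dt * (f - c * v - k * x) / m
            x += dt * v
            net_power_integral += (f * v - c * v * v) * dt  # generator - dissipator
            t += dt
        h = 0.5 * m * v * v + 0.5 * k * x * x
        return h - h0, net_power_integral

    dH, net = simulate()  # unforced: only the dissipator acts, so H must fall
    ```

    Classifying each term of a plant model this way, and then shaping the controller so that the closed loop remains a net dissipator, is the kind of necessary-and-sufficient stability accounting the abstract describes.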

  2. Prevalence of Mixed-Methods Sampling Designs in Social Science Research

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.

    2006-01-01

    The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…

  3. Design component method for sensitivity analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Seong, Hwai G.

    1986-01-01

    A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.

  4. Simple design of slanted grating with simplified modal method.

    PubMed

    Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun

    2014-02-15

    A simplified modal method (SMM) is presented that offers a clear physical image for subwavelength slanted grating. The diffraction characteristic of the slanted grating under Littrow configuration is revealed by the SMM as an equivalent rectangular grating, which is in good agreement with rigorous coupled-wave analysis. Based on the equivalence, we obtained an effective analytic solution for simplifying the design and optimization of a slanted grating. It offers a new approach for design of the slanted grating, e.g., a 1×2 beam splitter can be easily designed. This method should be helpful for designing various new slanted grating devices.

  5. Addressing Research Design Problem in Mixed Methods Research

    NASA Astrophysics Data System (ADS)

    Alavi, Hamed; Hąbek, Patrycja

    2016-03-01

    Alongside other disciplines in the social sciences, management researchers increasingly use mixed methods research in their scientific investigations. The mixed methods approach can also be used in the field of production engineering. Compared with traditional quantitative and qualitative research methods, the growing popularity of mixed methods research in management science can be traced to several factors. First, any particular discipline in management can be theoretically related to it. Second, the concurrent approach of the mixed method to inductive and deductive research logic gives researchers the opportunity to generate theory and test hypotheses in one study simultaneously. In addition, it provides better justification for the chosen method of investigation and higher validity for the answers obtained to the research questions. Despite the increasing popularity of mixed research methods among management scholars, there is still a need for a comprehensive approach to research design typology and process in mixed methods research from the perspective of management science. In this paper the authors explain the fundamental principles of the mixed methods research, its typology and the steps of its design process.

  6. An analytical method for designing low noise helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.

    1978-01-01

    The development and experimental validation of a method for analytically modeling the noise mechanism in the helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise reducing potential of alternative transmission design details. Examples are discussed.

  7. Waterflooding injectate design systems and methods

    DOEpatents

    Brady, Patrick V.; Krumhansl, James L.

    2014-08-19

    A method of designing an injectate to be used in a waterflooding operation is disclosed. One aspect includes specifying data representative of chemical characteristics of a liquid hydrocarbon, a connate, and a reservoir rock, of a subterranean reservoir. Charged species at an interface of the liquid hydrocarbon are determined based on the specified data by evaluating at least one chemical reaction. Charged species at an interface of the reservoir rock are determined based on the specified data by evaluating at least one chemical reaction. An extent of surface complexation between the charged species at the interfaces of the liquid hydrocarbon and the reservoir rock is determined by evaluating at least one surface complexation reaction. The injectate is designed and is operable to decrease the extent of surface complexation between the charged species at interfaces of the liquid hydrocarbon and the reservoir rock. Other methods, apparatus, and systems are disclosed.

  8. MAST Propellant and Delivery System Design Methods

    NASA Technical Reports Server (NTRS)

    Nadeem, Uzair; Mc Cleskey, Carey M.

    2015-01-01

    A Mars Aerospace Taxi (MAST) concept and a propellant storage and delivery case study are under investigation by NASA's Element Design and Architectural Impact (EDAI) design and analysis forum. The MAST lander concept envisions landing with its ascent propellant storage tanks empty and supplying these reusable Mars landers with propellant that is generated and transferred while on the Mars surface. The report provides an overview of the data derived from modeling different methods of propellant line routing (or "lining") and differentiates the resulting design and operations complexity of fluid and gaseous paths for a given set of fluid sources and destinations. The EDAI team desires a rough-order-of-magnitude algorithm for estimating the lining characteristics (i.e., the plumbing mass and complexity) associated with different numbers of vehicle propellant sources and destinations. This paper explores the feasibility of preparing a mathematically sound algorithm for this purpose and offers a method for the EDAI team to implement.
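    As a purely hypothetical sketch of what such a rough-order-of-magnitude lining estimate might look like (the EDAI team's actual algorithm is not given in the abstract), one can compare dedicated point-to-point routing, which scales as sources × destinations, with a common-manifold scheme, which scales as sources + destinations; the per-line run length and mass figures below are invented.

    ```python
    def lining_rom(n_sources, n_destinations, avg_run_m=5.0, kg_per_m=1.2):
        """Rough-order-of-magnitude line count and plumbing mass for two
        routing schemes: dedicated point-to-point lines versus a common
        manifold. All coefficients are hypothetical placeholders."""
        dedicated = n_sources * n_destinations   # one line per (source, dest) pair
        manifold = n_sources + n_destinations    # each endpoint taps one manifold
        return {
            "dedicated_lines": dedicated,
            "dedicated_mass_kg": dedicated * avg_run_m * kg_per_m,
            "manifold_lines": manifold,
            "manifold_mass_kg": manifold * avg_run_m * kg_per_m,
        }
    ```

    Even this crude scaling shows why the source/destination count drives lining complexity: the dedicated scheme grows multiplicatively while the manifold scheme grows additively.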

  9. Creative design inspired by biological knowledge: Technologies and methods

    NASA Astrophysics Data System (ADS)

    Tan, Runhua; Liu, Wei; Cao, Guozhong; Shi, Yuan

    2018-05-01

    Biological knowledge is becoming an important source of inspiration for developing creative solutions to engineering design problems and even has a huge potential in formulating ideas that can help firms compete successfully in a dynamic market. To identify the technologies and methods that can facilitate the development of biologically inspired creative designs, this research briefly reviews the existing biological-knowledge-based theories and methods and examines the application of biological-knowledge-inspired designs in various fields. Afterward, this research thoroughly examines the four dimensions of key technologies that underlie the biologically inspired design (BID) process. This research then discusses the future development trends of the BID process before presenting the conclusions.

  10. Design of Aspirated Compressor Blades Using Three-dimensional Inverse Method

    NASA Technical Reports Server (NTRS)

    Dang, T. Q.; Rooij, M. Van; Larosiliere, L. M.

    2003-01-01

    A three-dimensional viscous inverse method is extended to allow blading design with full interaction between the prescribed pressure-loading distribution and a specified transpiration scheme. Transpiration on blade surfaces and endwalls is implemented as inflow/outflow boundary conditions, and the basic modifications to the method are outlined. This paper focuses on a discussion concerning an application of the method to the design and analysis of a supersonic rotor with aspiration. Results show that an optimum combination of pressure-loading tailoring with surface aspiration can lead to a minimization of the amount of sucked flow required for a net performance improvement at design and off-design operations.

  11. Interactive design optimization of magnetorheological-brake actuators using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Erol, Ozan; Gurocak, Hakan

    2011-10-01

    This research explored an optimization method that would automate the process of designing a magnetorheological (MR) brake while still keeping the designer in the loop. MR brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is quite complex and time consuming due to the many design parameters. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and choose to investigate only their interactions with the design output. The new method was applied to re-designing MR brakes. It reduced the design time from a week or two down to a few minutes, and usability experiments indicated significantly better brake designs by novice users.
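
    The screening step that makes this possible, estimating each factor's main effect from an orthogonal array, can be sketched in a few lines. Everything below (factor names, levels, and the torque model) is invented for illustration and is not taken from the MR-brake study:

```python
import numpy as np

# Hypothetical sketch of a Taguchi screening step: factor names, levels,
# and the torque model are invented, not taken from the MR-brake paper.

# L4(2^3) orthogonal array: 4 runs covering 3 two-level factors.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])
factors = ["coil_turns", "gap_width", "disk_radius"]

def brake_torque(run):
    # Stand-in response model: torque dominated by disk_radius.
    turns, gap, radius = run
    return 10.0 + 1.0 * turns - 0.5 * gap + 6.0 * radius

responses = np.array([brake_torque(run) for run in L4])

# Main effect of each factor: mean response at level 1 minus level 0.
# Orthogonality of the array makes these estimates independent.
effects = {name: responses[L4[:, j] == 1].mean() - responses[L4[:, j] == 0].mean()
           for j, name in enumerate(factors)}

dominant = max(effects, key=lambda k: abs(effects[k]))
print(effects)
print("dominant factor:", dominant)
```

    Because the array is orthogonal, four runs suffice to rank all three factors; the designer can then restrict further search to the dominant ones, which is the search-space reduction the abstract describes.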

  12. Mixed methods research design for pragmatic psychoanalytic studies.

    PubMed

    Tillman, Jane G; Clemence, A Jill; Stevens, Jennifer L

    2011-10-01

    Calls for more rigorous psychoanalytic studies have increased over the past decade. The field has been divided by those who assert that psychoanalysis is properly a hermeneutic endeavor and those who see it as a science. A comparable debate is found in research methodology, where qualitative and quantitative methods have often been seen as occupying orthogonal positions. Recently, Mixed Methods Research (MMR) has emerged as a viable "third community" of research, pursuing a pragmatic approach to research endeavors through integrating qualitative and quantitative procedures in a single study design. Mixed Methods Research designs and the terminology associated with this emerging approach are explained, after which the methodology is explored as a potential integrative approach to a psychoanalytic human science. Both qualitative and quantitative research methods are reviewed, as well as how they may be used in Mixed Methods Research to study complex human phenomena.

  13. Artificial Intelligence Methods: Challenge in Computer Based Polymer Design

    NASA Astrophysics Data System (ADS)

    Rusu, Teodora; Pinteala, Mariana; Cartwright, Hugh

    2009-08-01

    This paper deals with the use of Artificial Intelligence Methods (AI) in the design of new molecules possessing desired physical, chemical and biological properties. This is an important and difficult problem in the chemical, material and pharmaceutical industries. Traditional methods involve a laborious and expensive trial-and-error procedure, but computer-assisted approaches offer many advantages in the automation of molecular design.

  14. System and method of designing models in a feedback loop

    DOEpatents

    Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.

    2017-02-14

    A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
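
    A minimal sketch of the loop, with invented predictions and an assumed reweighting rule (the patent abstract does not specify one):

```python
import numpy as np

# Illustrative sketch of the feedback idea: several models predict the same
# event, the results are aggregated, and each model's deviation from the
# aggregate (the comparative information) is fed back as a weight.
# The predictions and the reweighting rule are assumptions for the demo.

predictions = np.array([2.0, 2.2, 1.9, 3.5])     # four models, one event

plain_mean = predictions.mean()                   # naive aggregate
deviation = np.abs(predictions - plain_mean)      # comparative information

# Feed the comparison back: models closer to the consensus get more weight.
weights = 1.0 / (1e-6 + deviation)
weights /= weights.sum()
refined = float(weights @ predictions)

print(plain_mean, refined)   # the outlier model (3.5) is down-weighted
```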

  15. Applications of numerical optimization methods to helicopter design problems: A survey

    NASA Technical Reports Server (NTRS)

    Miura, H.

    1984-01-01

    Applications of mathematical programming methods to improve the design of helicopters and their components are surveyed, with emphasis on multivariable search techniques in finite-dimensional space. Five categories of helicopter design problems are considered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.

  16. Applications of numerical optimization methods to helicopter design problems - A survey

    NASA Technical Reports Server (NTRS)

    Miura, H.

    1985-01-01

    Applications of mathematical programming methods to improve the design of helicopters and their components are surveyed, with emphasis on multivariable search techniques in finite-dimensional space. Five categories of helicopter design problems are considered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.

  18. [Review of research design and statistical methods in Chinese Journal of Cardiology].

    PubMed

    Zhang, Li-jun; Yu, Jin-ming

    2009-07-01

    To evaluate the research designs and the use of statistical methods in the Chinese Journal of Cardiology, we reviewed the research design and statistical methods of all original papers published in the journal from December 2007 to November 2008. The most frequently used research designs were cross-sectional (34%), prospective (21%), and experimental (25%) designs. Of all the articles, 49 (25%) used incorrect statistical methods, 29 (15%) lacked some form of statistical analysis, and 23 (12%) contained inconsistencies in the description of methods. There were significant differences between the different statistical methods (P < 0.001). The rate of correct use of multifactor analysis was low, and repeated-measures data were not analyzed with repeated-measures methods. Many problems exist in the Chinese Journal of Cardiology; better research design and correct use of statistical methods are still needed, and stricter review by statisticians and epidemiologists is also required to improve the quality of the literature.

  19. Methods for structural design at elevated temperatures

    NASA Technical Reports Server (NTRS)

    Ellison, A. M.; Jones, W. E., Jr.; Leimbach, K. R.

    1973-01-01

    A procedure which can be used to design elevated temperature structures is discussed. The desired goal is to have the same confidence in the structural integrity at elevated temperature as the factor of safety gives on mechanical loads at room temperature. Methods of design and analysis for creep, creep rupture, and creep buckling are presented. Example problems are included to illustrate the analytical methods. Creep data for some common structural materials are presented. Appendix B is description, user's manual, and listing for the creep analysis program. The program predicts time to a given creep or to creep rupture for a material subjected to a specified stress-temperature-time spectrum. Fatigue at elevated temperature is discussed. Methods of analysis for high stress-low cycle fatigue, fatigue below the creep range, and fatigue in the creep range are included. The interaction of thermal fatigue and mechanical loads is considered, and a detailed approach to fatigue analysis is given for structures operating below the creep range.

  20. Starting geometry creation and design method for freeform optics.

    PubMed

    Bauer, Aaron; Schiesser, Eric M; Rolland, Jannick P

    2018-05-01

    We describe a method for designing freeform optics based on the aberration theory of freeform surfaces that guides the development of a taxonomy of starting-point geometries with an emphasis on manufacturability. An unconventional approach to the optimization of these starting designs wherein the rotationally invariant 3rd-order aberrations are left uncorrected prior to unobscuring the system is shown to be effective. The optimal starting-point geometry is created for an F/3, 200 mm aperture-class three-mirror imager and is fully optimized using a novel step-by-step method over a 4 × 4 degree field-of-view to exemplify the design method. We then optimize an alternative starting-point geometry that is common in the literature but was quantified here as a sub-optimal candidate for optimization with freeform surfaces. A comparison of the optimized geometries shows the performance of the optimal geometry is at least 16× better, which underscores the importance of the geometry when designing freeform optics.

  1. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
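
    The core idea, differentiating the governing equation rather than the discrete solver, can be illustrated on a toy ODE instead of a CFD problem. The state equation, objective, and parameter below are assumptions chosen for clarity:

```python
# Toy illustration of the sensitivity-equation idea (not a CFD solver):
# state u' = -a*u, u(0) = 1, objective J(a) = u(T)^2. Differentiating the
# state equation in the design parameter a gives the sensitivity equation
# s' = -a*s - u, s(0) = 0, which is marched alongside the state.

def solve(a, T=1.0, n=2000):
    dt = T / n
    u, s = 1.0, 0.0
    for _ in range(n):
        u_next = u + dt * (-a * u)
        s = s + dt * (-a * s - u)   # uses the old u: explicit Euler in both
        u = u_next
    return u, s

a = 0.7
u, s = solve(a)
grad = 2.0 * u * s                  # dJ/da from the sensitivity equation

# Compare with a black-box finite-difference gradient of the same solver.
h = 1e-6
fd = (solve(a + h)[0] ** 2 - solve(a - h)[0] ** 2) / (2 * h)
print(grad, fd)                     # the two gradients agree
```

    Here the same Euler scheme discretizes both the state and the sensitivity equation, the "proper combination of discretization schemes" the abstract alludes to, so the computed gradient matches the finite-difference gradient of the discrete objective.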

  2. Overlay design method based on visual pavement distress.

    DOT National Transportation Integrated Search

    1978-01-01

    A method for designing the thickness of overlays for bituminous concrete pavements in Virginia is described. In this method the thickness is calculated by rating the amount and severity of observed pavement distress and determining the total accumula...

  3. Using mixed methods effectively in prevention science: designs, procedures, and examples.

    PubMed

    Zhang, Wanqing; Watanabe-Galloway, Shinobu

    2014-10-01

    There is growing interest in using a combination of quantitative and qualitative methods to generate evidence about the effectiveness of health prevention, services, and intervention programs. With the emerging importance of mixed methods research across the social and health sciences, there has been an increased recognition of the value of using mixed methods for addressing research questions in different disciplines. We illustrate the mixed methods approach in prevention research, showing design procedures used in several published research articles. In this paper, we focused on two commonly used mixed methods designs: concurrent and sequential mixed methods designs. We discuss the types of mixed methods designs, the reasons for, and advantages of using a particular type of design, and the procedures of qualitative and quantitative data collection and integration. The studies reviewed in this paper show that the essence of qualitative research is to explore complex dynamic phenomena in prevention science, and the advantage of using mixed methods is that quantitative data can yield generalizable results and qualitative data can provide extensive insights. However, the emphasis of methodological rigor in a mixed methods application also requires considerable expertise in both qualitative and quantitative methods. Besides the necessary skills and effective interdisciplinary collaboration, this combined approach also requires an open-mindedness and reflection from the involved researchers.

  4. Computer method for design of acoustic liners for turbofan engines

    NASA Technical Reports Server (NTRS)

    Minner, G. L.; Rice, E. J.

    1976-01-01

    A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables. The method does not predict multiple pure tones. A target attenuation spectrum was calculated which was the difference between the estimated generation spectrum and a flat annoyance-weighted goal attenuated spectrum. The target spectrum was combined with a knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method at present is limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (which is a result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation was carried through in order to demonstrate the design procedure, and experimental results presented show good agreement with the calculated results of the method.

  5. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
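
    The iterative-reduction idea can be sketched generically: start from a dense sampling and greedily drop the point whose removal least degrades a parameter-covariance criterion. The two-parameter exponential model and all numbers below are stand-ins, not the actual magnetization transfer signal equation:

```python
import numpy as np

# Generic sketch of iterative design reduction: greedily drop the sample
# whose removal least inflates an A-optimality criterion (trace of the
# parameter covariance). The model y = a*exp(-b*t) is an invented stand-in
# for the quantitative magnetization transfer signal model.

def jacobian(ts, a, b):
    e = np.exp(-b * ts)
    return np.column_stack([e, -a * ts * e])   # d/da, d/db at each sample

def cost(ts, a=1.0, b=0.5):
    J = jacobian(ts, a, b)
    return np.trace(np.linalg.inv(J.T @ J))    # parameter-variance proxy

ts = np.linspace(0.1, 5.0, 20)                 # dense initial sampling
target = 6                                     # measurement budget
while len(ts) > target:
    scores = [cost(np.delete(ts, i)) for i in range(len(ts))]
    ts = np.delete(ts, int(np.argmin(scores))) # least-informative point goes
print("retained samples:", ts)
```

    Because each step works on the current discrete sampling, pragmatic constraints (no clustering, no repeated measures) are easy to enforce, which is the advantage over free-form optimal design the abstract highlights.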

  6. A systematic composite service design modeling method using graph-based theory.

    PubMed

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using graph-based theory to facilitate the design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate the complexity and reusability of ComSDM. This also guides the future research towards design quality measurement, such as using the ComSDM method to measure the quality of composite service design in service-oriented software systems.

  7. A Systematic Composite Service Design Modeling Method Using Graph-Based Theory

    PubMed Central

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using graph-based theory to facilitate the design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate the complexity and reusability of ComSDM. This also guides the future research towards design quality measurement, such as using the ComSDM method to measure the quality of composite service design in service-oriented software systems. PMID:25928358

  8. Design of A Cyclone Separator Using Approximation Method

    NASA Astrophysics Data System (ADS)

    Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee

    2017-12-01

    A separator is a device installed in industrial applications to separate mixed objects. The separator of interest in this research is a cyclone type, which is used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is the collection efficiency, which in this study is predicted by performing CFD (Computational Fluid Dynamics) analysis. This research defines six shape design variables to maximize the collection efficiency; thus, the collection efficiency is set up as the objective function in the optimization process. Since the CFD analysis requires a lot of calculation time, it is impossible to obtain the optimal solution by directly coupling it to a gradient-based optimization algorithm. Thus, two approximation methods are introduced to obtain an optimum design. In this process, an L18 orthogonal array is adopted as the DOE method, and the kriging interpolation method is adopted to generate the metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted using the commercial CFD software ANSYS-CFX.
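
    The kriging metamodel step can be sketched as Gaussian-kernel interpolation of a handful of "expensive" samples; the data and kernel width below are invented, standing in for the L18 CFD results:

```python
import numpy as np

# Invented data standing in for a handful of expensive CFD runs; a real
# study would build this metamodel from the L18 orthogonal-array results.
X = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # design-variable samples
y = np.sin(X) + 0.5                            # "collection efficiency"

def kernel(x1, x2, theta=1.0):
    d = x1[:, None] - x2[None, :]
    return np.exp(-theta * d ** 2)             # Gaussian correlation model

K = kernel(X, X) + 1e-10 * np.eye(len(X))      # tiny nugget for conditioning
alpha = np.linalg.solve(K, y)

def predict(x_new):
    return kernel(np.asarray(x_new, dtype=float), X) @ alpha

# Kriging interpolates: at the sample points it reproduces the data exactly,
# while between them it gives a cheap surrogate for the CFD response.
print(predict(X) - y)      # residuals ~ 0
print(predict([0.75]))
```

    Once fitted, the surrogate costs microseconds per evaluation, so an optimizer can search it freely instead of re-running the CFD solver.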

  9. Layer-by-layer design method for soft-X-ray multilayers

    NASA Technical Reports Server (NTRS)

    Yamamoto, Masaki; Namioka, Takeshi

    1992-01-01

    A new design method effective for a nontransparent system has been developed for soft-X-ray multilayers with the aid of graphic representation of the complex amplitude reflectance in a Gaussian plane. The method provides an effective means of attaining the absolute maximum reflectance on a layer-by-layer basis and also gives clear insight into the evolution of the amplitude reflectance on a multilayer as it builds up. An optical criterion is derived for the selection of a proper pair of materials needed for designing a high-reflectance multilayer. Some examples are given to illustrate the usefulness of this design method.

  10. New method for designing serial resonant power converters

    NASA Astrophysics Data System (ADS)

    Hinov, Nikolay

    2017-12-01

    This work presents a comprehensive method for the design of series resonant power converters. The method is based on a new simplified approach to the analysis of this kind of power electronic device: a resonant mode of operation is assumed when finding the relation between input and output voltage, regardless of the actual operating mode (switching frequency below or above the resonant frequency). This approach is named the `quasiresonant method of analysis' because all operating modes are treated as `sort of' resonant modes. The error introduced by this assumption is estimated and compared to the classic analysis. The quasiresonant method offers two main advantages, speed and ease of design of the presented power circuits, and is therefore useful both in practice and in teaching power electronics. Its applicability is proven with mathematical modelling and computer simulation.
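
    The tank quantities any series-resonant design starts from, the resonant frequency f0 = 1/(2*pi*sqrt(L*C)) and the characteristic impedance Z0 = sqrt(L/C), are easy to compute; the component values below are arbitrary examples, not taken from the paper:

```python
import math

# Back-of-envelope sketch of the series-resonant tank quantities such a
# design starts from; the component values are arbitrary examples.

L = 100e-6   # resonant inductance, henries
C = 47e-9    # resonant capacitance, farads

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))   # resonant frequency, Hz
Z0 = math.sqrt(L / C)                            # characteristic impedance, ohms

# Near f0 the quasiresonant assumption treats the tank as resonant, so the
# voltage transfer ratio is taken close to unity for any operating mode.
print(f"f0 = {f0 / 1e3:.1f} kHz, Z0 = {Z0:.1f} ohm")
```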

  11. Two-Method Planned Missing Designs for Longitudinal Research

    ERIC Educational Resources Information Center

    Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D.

    2014-01-01

    We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…

  12. Free-form surface design method for a collimator TIR lens.

    PubMed

    Tsai, Chung-Yu

    2016-04-01

    A free-form (FF) surface design method is proposed for a general axial-symmetrical collimator system consisting of a light source and a total internal reflection lens with two coupled FF boundary surfaces. The profiles of the boundary surfaces are designed using a FF surface construction method such that each incident ray is directed (refracted and reflected) in such a way as to form a specified image pattern on the target plane. The light ray paths within the system are analyzed using an exact analytical model and a skew-ray tracing approach. In addition, the validity of the proposed FF design method is demonstrated by means of ZEMAX simulations. It is shown that the illumination distribution formed on the target plane is in good agreement with that specified by the user. The proposed surface construction method is mathematically straightforward and easily implemented in computer code. As such, it provides a useful tool for the design and analysis of general axial-symmetrical optical systems.
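
    The exact skew-ray tracing such a design relies on reduces, at each boundary surface, to the vector form of Snell's law; a minimal refraction step (with arbitrary geometry and indices, not the paper's lens) might look like:

```python
import numpy as np

# Vector form of Snell's law, the per-surface step an exact skew-ray trace
# through the lens boundaries relies on. Geometry and indices are arbitrary.

def refract(d, n_hat, n1, n2):
    """Refract unit direction d at a surface with unit normal n_hat."""
    d = d / np.linalg.norm(d)
    n_hat = n_hat / np.linalg.norm(n_hat)
    cos_i = -np.dot(n_hat, d)
    if cos_i < 0:                       # flip normal to oppose the ray
        n_hat, cos_i = -n_hat, -cos_i
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i ** 2)
    if k < 0:
        return None                     # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n_hat

# A ray 30 degrees off the surface normal entering n = 1.5 material.
d_in = np.array([0.0, np.sin(np.radians(30.0)), -np.cos(np.radians(30.0))])
d_out = refract(d_in, np.array([0.0, 0.0, 1.0]), 1.0, 1.5)
sin_t = np.hypot(d_out[0], d_out[1])    # transverse component = sin(theta_t)
print(np.degrees(np.arcsin(sin_t)))     # Snell: 1.5*sin_t == 1.0*sin(30 deg)
```

    The same function reports total internal reflection when k < 0, which is exactly the condition the TIR collimator surfaces are shaped to exploit.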

  13. Application of the CSCM method to the design of wedge cavities. [Conservative Supra Characteristic Method

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj; Nystrom, G. A.; Bardina, J.; Lombard, C. K.

    1987-01-01

    This paper describes the application of the conservative supra characteristic method (CSCM) to predict the flow around two-dimensional slot injection cooled cavities in hypersonic flow. Seven different numerical solutions are presented that model three different experimental designs. The calculations manifest outer flow conditions including the effects of nozzle/lip geometry, angle of attack, nozzle inlet conditions, boundary and shear layer growth, and turbulence on the surrounding flow. The calculations were performed for analysis prior to wind tunnel testing and for sensitivity studies early in the design process. Qualitative and quantitative understanding of the flows for each of the cavity designs and design recommendations are provided. The present paper demonstrates the ability of numerical schemes, such as the CSCM method, to play a significant role in the design process.

  14. A decentralized linear quadratic control design method for flexible structures

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Craig, Roy R., Jr.

    1990-01-01

    A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the substructural controller synthesis method is that it relieves the computational burden associated with dimensionality. Besides that, the SCS design scheme is also a highly adaptable controller synthesis method for structures with varying configuration, or varying mass

  15. A novel method for inverse fiber Bragg grating structure design

    NASA Astrophysics Data System (ADS)

    Yin, Yu-zhe; Chen, Xiang-fei; Dai, Yi-tang; Xie, Shi-zhong

    2003-12-01

    A novel grating inverse design method is proposed in this paper that is physically intuitive and easy to implement. The key point of the method is to design and implement the desired spectral response in the grating-strength modulation domain rather than in the grating-period chirp domain. Simulated results are in good agreement with the design target. By transforming grating-period chirp into grating-strength modulation, a novel grating with opposite dispersion characteristics is proposed.

  16. Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2017-12-01

    The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
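
    The FORM machinery referenced above can be sketched with the standard HL-RF fixed-point iteration. The limit-state function below (a capacity-minus-load expression in standard-normal space) is a made-up stand-in for a real rainfall-runoff exceedance criterion:

```python
import numpy as np

# Standard HL-RF iteration for FORM. The limit-state function g (capacity
# minus load in standard-normal u-space) is an invented stand-in for a
# real rainfall-runoff exceedance criterion.

def g(u):
    return 3.0 - u[0] - 0.5 * u[1] - 0.1 * u[0] ** 2

def grad_g(u, h=1e-6):
    return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                     for e in np.eye(len(u))])

u = np.zeros(2)
for _ in range(50):                     # HL-RF fixed-point updates
    val, dg = g(u), grad_g(u)
    u_new = (dg @ u - val) * dg / (dg @ dg)
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)                # reliability index
print("design point:", u)               # the representative 'design storm'
print("beta:", beta, " g:", g(u))       # g ~ 0: on the limit-state surface
```

    The converged design point is the most probable failure combination of rainfall intensity and the other random parameters, which is what the method's design charts summarize for each return period.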

  17. Application of Taguchi methods to infrared window design

    NASA Astrophysics Data System (ADS)

    Osmer, Kurt A.; Pruszynski, Charles J.

    1990-10-01

    Dr. Genichi Taguchi, a prominent quality consultant, reduced a branch of statistics known as "Design of Experiments" to a cookbook methodology that can be employed by any competent engineer. This technique has been extensively employed by Japanese manufacturers and is widely credited with helping them attain their current level of success in low cost, high quality product design and fabrication. Although this technique was originally put forth as a tool to streamline the determination of improved production processes, it can also be applied to a wide range of engineering problems. As part of an internal research project, this method of experimental design has been adapted to window trade studies and materials research. Two of these analyses are presented herein, chosen to illustrate the breadth of applications to which the Taguchi method can be applied.

  18. Financial methods for waterflooding injectate design

    DOEpatents

    Heneman, Helmuth J.; Brady, Patrick V.

    2017-08-08

    A method of selecting an injectate for recovering liquid hydrocarbons from a reservoir includes designing a plurality of injectates, calculating a net present value of each injectate, and selecting a candidate injectate based on the net present value. For example, the candidate injectate may be selected to maximize the net present value of a waterflooding operation.

  19. A Comparison of Fatigue Design Methods

    DTIC Science & Technology

    2001-04-05

    Boiler and Pressure Vessel Code does not...Engineers, "ASME Boiler and Pressure Vessel Code ," ASME, 3 Park Ave., New York, NY 10016-5990. [4] Langer, B. F., "Design of Pressure Vessels Involving... and Pressure Vessel Code [3] presents these methods and has expanded the procedures to other pressure vessels besides nuclear pressure vessels. B.

  20. A method for the design of transonic flexible wings

    NASA Technical Reports Server (NTRS)

    Smith, Leigh Ann; Campbell, Richard L.

    1990-01-01

    Methodology was developed for designing airfoils and wings at transonic speeds which includes a technique that can account for static aeroelastic deflections. This procedure is capable of designing either supercritical or more conventional airfoil sections. Methods for including viscous effects are also illustrated and are shown to give accurate results. The methodology developed is an interactive system containing three major parts. A design module was developed which modifies airfoil sections to achieve a desired pressure distribution. This design module works in conjunction with an aerodynamic analysis module, which for this study is a small perturbation transonic flow code. Additionally, an aeroelastic module is included which determines the wing deformation due to the calculated aerodynamic loads. Because of the modular nature of the method, it can be easily coupled with any aerodynamic analysis code.

  1. Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.

    PubMed

    Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo

    2016-07-01

    During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On one hand, producing a visual artefact has a number of advantages: it helps designers to externalise their thoughts and acts as a common language between different stakeholders. On the other hand, if an inappropriate visualisation method is employed, it can hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify a) current HCI design methods and b) approaches to selecting these methods. The resulting design methods are filtered to create a list of just visualisation methods. These are then categorised using the approaches identified in (b). As a result, 23 HCI visualisation methods are identified and categorised into 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Function combined method for design innovation of children's bike

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoli; Qiu, Tingting; Chen, Huijuan

    2013-03-01

    As children grow, bike products for children develop with them, and market conditions are frequently updated. Certain problems occur in use, such as overlapping life cycles, duplicated functions, and short service lives, which run against the principles of energy conservation and the environmentally protective, intensive design concept. In this paper, a rational multi-function design method based on functional superposition, transformation, and technical implementation is proposed. An organic combination of a frog-style scooter and a children's tricycle is developed using the multi-function method. From an ergonomic perspective, the paper elaborates on the body sizes of children aged 5 to 12 and effectively extracts data for a multi-function children's bike that can be used for both gliding and riding. By inverting the body, the handles and the pedals of the bike can be interchanged. Finally, the paper provides a detailed analysis of the components and structural design, body material, and processing technology of the bike. This study of industrial product innovation design provides an effective design method that solves the identified bicycle problems, extends product function, improves the product's market situation, and enhances energy saving while implementing intensive product development.

  3. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
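    As a concrete illustration of the fractional factorial idea described above, the smallest Taguchi orthogonal array, L4(2^3), studies three two-level factors in four trials instead of the full factorial's eight. The factor names and response values below are invented for demonstration; only the array itself is standard.

    ```python
    # L4(2^3) orthogonal array: 4 trials, three two-level factors A, B, C.
    # Each pair of columns contains every level combination equally often.
    L4 = [
        (0, 0, 0),
        (0, 1, 1),
        (1, 0, 1),
        (1, 1, 0),
    ]
    responses = [20.0, 24.0, 30.0, 26.0]  # hypothetical measurements

    def main_effect(col):
        """Mean response at level 1 minus mean response at level 0."""
        hi = [r for row, r in zip(L4, responses) if row[col] == 1]
        lo = [r for row, r in zip(L4, responses) if row[col] == 0]
        return sum(hi) / len(hi) - sum(lo) / len(lo)

    effects = {name: main_effect(i) for i, name in enumerate("ABC")}
    ```

    With these toy numbers factor A dominates (effect 6.0) and B is inert (0.0), the kind of screening conclusion the method is designed to reach cheaply; note that, as the abstract warns, interactions are confounded with main effects in such a design.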

  4. A Method for the Constrained Design of Natural Laminar Flow Airfoils

    NASA Technical Reports Server (NTRS)

    Green, Bradford E.; Whitesides, John L.; Campbell, Richard L.; Mineck, Raymond E.

    1996-01-01

    A fully automated iterative design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed while maintaining other aerodynamic and geometric constraints. Drag reductions have been realized using the design method over a range of Mach numbers, Reynolds numbers and airfoil thicknesses. The strengths of the method are its ability to calculate a target N-Factor distribution that forces the flow to undergo transition at the desired location; the target-pressure-N-Factor relationship that is used to reduce the N-Factors in order to prolong transition; and its ability to design airfoils to meet lift, pitching moment, thickness and leading-edge radius constraints while also meeting the natural laminar flow constraint. The method uses several existing CFD codes and can design a new airfoil in only a few days on a Silicon Graphics IRIS workstation.

  5. Why does Japan use the probability method to set design flood?

    NASA Astrophysics Data System (ADS)

    Nakamura, S.; Oki, T.

    2015-12-01

    Design flood is a hypothetical flood used to make flood prevention plans. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: for the Tone River, the biggest river in Japan, it is 1 in 200 years; for the Shinano River, 1 in 150 years; and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The method used to set the design flood varies among countries. Although the probability method is also used in the Netherlands, the base data there are water levels or discharge data, and the probability is 1 in 1250 years (in the freshwater section). In contrast, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases lead to the questions: "Why does the method vary among countries?" and "Why does Japan use the probability method?" The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were imported by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set by the historical maximum method. The historical maximum method had been used until World War 2, but it was changed to the probability method after the war because of its limitations under the specific socio-economic situation: (1) the budget limitation due to the war and the GHQ occupation; (2) historical floods, such as the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, and the Ione typhoon in 1948, broke the records of historical maximum discharge in main rivers, and the flood disasters made the flood prevention projects difficult to complete. Japanese hydrologists then imported hydrological probability statistics from the West to take account of

  6. Simulation methods to estimate design power: an overview for applied research

    PubMed Central

    2011-01-01

    Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447
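    The simulation approach the article reviews can be sketched for the simplest case, a two-arm individual-randomized design with a continuous outcome. Effect size, sample size, alpha and the use of a normal-approximation z-test are all illustrative assumptions here, not the article's examples.

    ```python
    # Monte Carlo power estimate for a two-arm randomized design (sketch).
    import random, statistics
    from math import sqrt

    def simulate_power(n_per_arm=50, effect=0.5, sd=1.0, reps=1000, seed=1):
        random.seed(seed)
        crit = 1.96  # two-sided z critical value at alpha = 0.05 (approximation)
        hits = 0
        for _ in range(reps):
            a = [random.gauss(0.0, sd) for _ in range(n_per_arm)]
            b = [random.gauss(effect, sd) for _ in range(n_per_arm)]
            se = sqrt(statistics.variance(a) / n_per_arm +
                      statistics.variance(b) / n_per_arm)
            z = (statistics.mean(b) - statistics.mean(a)) / se
            hits += abs(z) > crit   # trial detects the effect
        return hits / reps          # estimated power

    power = simulate_power()
    ```

    For these settings the analytic power is about 0.70 and the simulated estimate lands nearby; more complex designs (clustering, attrition, non-normal outcomes) are handled by changing only the data-generating step, which is the flexibility the article emphasizes.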

  7. Kinoform design with an optimal-rotation-angle method.

    PubMed

    Bengtsson, J

    1994-10-10

    Kinoforms (i.e., computer-generated phase holograms) are designed with a new algorithm, the optimal-rotation-angle method, in the paraxial domain. This is a direct Fourier method (i.e., no inverse transform is performed) in which the height of the kinoform relief at each discrete point is chosen so that the diffraction efficiency is increased. The optimal-rotation-angle algorithm has a straightforward geometrical interpretation. It yields excellent results close to, or better than, those obtained with other state-of-the-art methods. The optimal-rotation-angle algorithm can easily be modified to take different constraints into account; as an example, phase-swing-restricted kinoforms, which distribute the light into a number of equally bright spots (so-called fan-outs), were designed. The phase-swing restriction lowers the efficiency, but the uniformity can still be made almost perfect.

  8. The research progress on Hodograph Method of aerodynamic design at Tsinghua University

    NASA Technical Reports Server (NTRS)

    Chen, Zuoyi; Guo, Jingrong

    1991-01-01

    Progress in the use of the Hodograph method of aerodynamic design is discussed. Some restricted conditions were found in the application of Hodograph design to transonic turbine and compressor cascades. The Hodograph method is suitable not only for the transonic turbine cascade but also for the transonic compressor cascade. The three-dimensional Hodograph method will be developed once the basic equation for it has been obtained. As examples, the use of the method to design a transonic turbine cascade and a transonic compressor cascade is discussed.

  9. A new statistical method for design and analyses of component tolerance

    NASA Astrophysics Data System (ADS)

    Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam

    2017-03-01

    Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often assume known distributions, including the normal distribution. If the statistical distribution of the given variable is unknown, however, a new statistical method must be employed to design the tolerance. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerances. We use the percentile method (PM) to estimate the distribution parameters. The findings indicate that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerances. Moreover, in the case of assembled sets, a more extensive tolerance for each component with the same target performance can be utilized.
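    The percentile idea behind the abstract above can be illustrated without the generalized lambda machinery: when the distribution is unknown, tolerance limits can be read directly from empirical percentiles of measured data. The sample below is synthetic and the coverage level arbitrary; fitting a generalized lambda distribution by the percentile method would replace this raw-percentile lookup with a smooth fitted quantile function.

    ```python
    # Distribution-free tolerance limits from empirical percentiles (sketch).
    import random

    random.seed(7)
    # Hypothetical measured part dimension (nominal 10.0), distribution unknown:
    data = sorted(random.uniform(9.8, 10.2) for _ in range(500))

    def percentile(sorted_x, p):
        """Nearest-rank percentile for 0 <= p <= 1 on pre-sorted data."""
        k = max(0, min(len(sorted_x) - 1, round(p * (len(sorted_x) - 1))))
        return sorted_x[k]

    lower = percentile(data, 0.01)   # ~98% coverage interval, arbitrary choice
    upper = percentile(data, 0.99)
    ```

    The resulting (lower, upper) pair is a data-driven tolerance band that makes no normality assumption, which is the situation the paper targets.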

  10. Designs and methods used in published Australian health promotion evaluations 1992-2011.

    PubMed

    Chambers, Alana Hulme; Murphy, Kylie; Kolbe, Anthony

    2015-06-01

    To describe the designs and methods used in published Australian health promotion evaluation articles between 1992 and 2011. Using a content analysis approach, we reviewed 157 articles to analyse patterns and trends in designs and methods in Australian health promotion evaluation articles. The purpose was to provide empirical evidence about the types of designs and methods used. The most common type of evaluation conducted was impact evaluation. Quantitative designs were used exclusively in more than half of the articles analysed. Almost half the evaluations utilised only one data collection method. Surveys were the most common data collection method used. Few articles referred explicitly to an intended evaluation outcome or benefit and references to published evaluation models or frameworks were rare. This is the first time Australian-published health promotion evaluation articles have been empirically investigated in relation to designs and methods. There appears to be little change in the purposes, overall designs and methods of published evaluations since 1992. More methodologically transparent and sophisticated published evaluation articles might be instructional, and even motivational, for improving evaluation practice and result in better public health interventions and outcomes. © 2015 Public Health Association of Australia.

  11. Turbine blade profile design method based on Bezier curves

    NASA Astrophysics Data System (ADS)

    Alexeev, R. A.; Tishchenko, V. A.; Gribin, V. G.; Gavrilov, I. Yu.

    2017-11-01

    In this paper, a technique for two-dimensional parametric blade profile design is presented. Bezier curves are used to create the profile geometry. The main feature of the proposed method is an adaptive approach to fitting the curves to given geometric conditions. The profile shape is calculated by a multi-dimensional minimization method with a number of restrictions imposed on the blade geometry. The proposed method has been used to describe the parametric geometry of a known blade profile. The baseline geometry was then modified by varying some parameters of the blade, and numerical calculations of the obtained designs were carried out. The results of these calculations show the efficiency of the chosen approach.
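    The geometric building block of such a method is easy to show. The sketch below evaluates a Bezier curve by de Casteljau's algorithm; the control points are arbitrary illustrative values, not a real blade section, and the fitting/minimization layer the paper describes is omitted.

    ```python
    # De Casteljau evaluation of a Bezier curve (any degree) at parameter t.
    def bezier(points, t):
        pts = [tuple(p) for p in points]
        while len(pts) > 1:
            # Repeated linear interpolation between consecutive control points.
            pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                   for p, q in zip(pts, pts[1:])]
        return pts[0]

    # Hypothetical suction-side control polygon (chord-normalized coordinates):
    ctrl = [(0.0, 0.0), (0.3, 0.4), (0.7, 0.5), (1.0, 0.0)]
    mid = bezier(ctrl, 0.5)
    ```

    A profile designer would stack several such curves (suction side, pressure side, leading and trailing edges) and let the optimizer move the control points subject to the geometric restrictions mentioned above.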

  12. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  13. Computational methods of robust controller design for aerodynamic flutter suppression

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1981-01-01

    The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth-order random examples. A literature review of robust controller design methods follows, which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
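    The flavor of Riccati iteration can be shown on a deliberately tiny case: a scalar continuous-time algebraic Riccati equation solved by Newton/Kleinman-style iteration. The system numbers below are arbitrary, and the matrix setting of the abstract (pole placement, 16th-order models) is of course far richer.

    ```python
    # Scalar CARE: 2*a*p - (b*p)**2 / r + q = 0, solved by Kleinman iteration.
    a, b, q, r = 1.0, 1.0, 1.0, 1.0   # arbitrary system/weight values

    p = 2.0  # initial guess whose gain k = b*p/r stabilizes (a - b*k < 0)
    for _ in range(30):
        k = b * p / r                        # feedback gain for current p
        # Lyapunov step for the closed loop: 2*(a - b*k)*p + q + r*k**2 = 0
        p = (q + r * k * k) / (2.0 * (b * k - a))
    ```

    The iteration converges quadratically to p = 1 + sqrt(2), the stabilizing root of this scalar equation; in the matrix case each step solves a Lyapunov equation instead of performing a scalar division.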

  14. Research design and statistical methods in Pakistan Journal of Medical Sciences (PJMS).

    PubMed

    Akhtar, Sohail; Shah, Syed Wadood Ali; Rafiq, M; Khan, Ajmal

    2016-01-01

    This article compares the study designs and statistical methods used in the 2005, 2010 and 2015 volumes of Pakistan Journal of Medical Sciences (PJMS). Only original articles of PJMS were considered for the analysis. The articles were carefully reviewed for statistical methods and designs, and then recorded accordingly. The frequency of each statistical method and research design was estimated and compared with previous years. A total of 429 articles were evaluated (n=74 in 2005, n=179 in 2010, n=176 in 2015), of which 171 (40%) were cross-sectional and 116 (27%) were prospective study designs. A variety of statistical methods were found in the analysis. The most frequent methods include: descriptive statistics (n=315, 73.4%), chi-square/Fisher's exact tests (n=205, 47.8%) and Student's t-test (n=186, 43.4%). There was a significant increase in the use of statistical methods over the time period: t-test, chi-square/Fisher's exact test, logistic regression, epidemiological statistics, and non-parametric tests. This study shows that a diverse variety of statistical methods have been used in the research articles of PJMS and their frequency improved from 2005 to 2015. However, descriptive statistics was the most frequent method of statistical analysis and the cross-sectional study design was the most common design in the published articles.

  15. Test methods and design allowables for fibrous composites. Volume 2

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Editor)

    1989-01-01

    Topics discussed include extreme/hostile environment testing, establishing design allowables, and property/behavior specific testing. Papers are presented on environmental effects on the high strain rate properties of graphite/epoxy composite, the low-temperature performance of short-fiber reinforced thermoplastics, the abrasive wear behavior of unidirectional and woven graphite fiber/PEEK, test methods for determining design allowables for fiber reinforced composites, and statistical methods for calculating material allowables for MIL-HDBK-17. Attention is also given to a test method to measure the response of composite materials under reversed cyclic loads, a through-the-thickness strength specimen for composites, the use of torsion tubes to measure in-plane shear properties of filament-wound composites, the influence of test fixture design on the Iosipescu shear test for fiber composite materials, and a method for monitoring in-plane shear modulus in fatigue testing of composites.

  16. Design of Education Methods in a Virtual Environment

    ERIC Educational Resources Information Center

    Yavich, Roman; Starichenko, Boris

    2017-01-01

    The purpose of the presented article is to review existing approaches to modern training methods design and to create a variant of its technology in virtual educational environments in order to develop general cultural and professional students' competence in pedagogical education. The conceptual modeling of a set of methods for students' training…

  17. Research design and statistical methods in Pakistan Journal of Medical Sciences (PJMS)

    PubMed Central

    Akhtar, Sohail; Shah, Syed Wadood Ali; Rafiq, M.; Khan, Ajmal

    2016-01-01

    Objective: This article compares the study designs and statistical methods used in the 2005, 2010 and 2015 volumes of Pakistan Journal of Medical Sciences (PJMS). Methods: Only original articles of PJMS were considered for the analysis. The articles were carefully reviewed for statistical methods and designs, and then recorded accordingly. The frequency of each statistical method and research design was estimated and compared with previous years. Results: A total of 429 articles were evaluated (n=74 in 2005, n=179 in 2010, n=176 in 2015), of which 171 (40%) were cross-sectional and 116 (27%) were prospective study designs. A variety of statistical methods were found in the analysis. The most frequent methods include: descriptive statistics (n=315, 73.4%), chi-square/Fisher’s exact tests (n=205, 47.8%) and Student’s t-test (n=186, 43.4%). There was a significant increase in the use of statistical methods over the time period: t-test, chi-square/Fisher’s exact test, logistic regression, epidemiological statistics, and non-parametric tests. Conclusion: This study shows that a diverse variety of statistical methods have been used in the research articles of PJMS and their frequency improved from 2005 to 2015. However, descriptive statistics was the most frequent method of statistical analysis and the cross-sectional study design was the most common design in the published articles. PMID:27022365

  18. Mixing Qualitative and Quantitative Methods: Insights into Design and Analysis Issues

    ERIC Educational Resources Information Center

    Lieber, Eli

    2009-01-01

    This article describes and discusses issues related to research design and data analysis in the mixing of qualitative and quantitative methods. It is increasingly desirable to use multiple methods in research, but questions arise as to how best to design and analyze the data generated by mixed methods projects. I offer a conceptualization for such…

  19. Design method of ARM based embedded iris recognition system

    NASA Astrophysics Data System (ADS)

    Wang, Yuanbo; He, Yuqing; Hou, Yushi; Liu, Ting

    2008-03-01

    With the advantages of non-invasiveness, uniqueness, stability and a low false recognition rate, iris recognition has been successfully applied in many fields. Up to now, most iris recognition systems have been based on PCs. However, a PC is not portable and needs more power. In this paper, we propose an embedded iris recognition system based on ARM. Considering the requirements of iris image acquisition and the recognition algorithm, we analyzed the design of the iris image acquisition module, designed the ARM processing module and its peripherals, studied the Linux platform and the recognition algorithm based on it, and finally realized the design of an ARM-based iris imaging and recognition system. Experimental results show that the ARM platform we used is fast enough to run the iris recognition algorithm, and that the data stream flows smoothly between the camera and the ARM chip on the embedded Linux system. This is an effective way to build a portable embedded iris recognition system on ARM.

  20. A Simple Method for High-Lift Propeller Conceptual Design

    NASA Technical Reports Server (NTRS)

    Patterson, Michael; Borer, Nick; German, Brian

    2016-01-01

    In this paper, we present a simple method for designing propellers that are placed upstream of the leading edge of a wing in order to augment lift. Because the primary purpose of these "high-lift propellers" is to increase lift rather than produce thrust, these props are best viewed as a form of high-lift device; consequently, they should be designed differently than traditional propellers. We present a theory that describes how these props can be designed to provide a relatively uniform axial velocity increase, which is hypothesized to be advantageous for lift augmentation based on a literature survey. Computational modeling indicates that such propellers can generate the same average induced axial velocity while consuming less power and producing less thrust than conventional propeller designs. For an example problem based on specifications for NASA's Scalable Convergent Electric Propulsion Technology and Operations Research (SCEPTOR) flight demonstrator, a propeller designed with the new method requires approximately 15% less power and produces approximately 11% less thrust than one designed for minimum induced loss. Higher-order modeling and/or wind tunnel testing are needed to verify the predicted performance.

  1. A New Automated Design Method Based on Machine Learning for CMOS Analog Circuits

    NASA Astrophysics Data System (ADS)

    Moradi, Behzad; Mirzaei, Abdolreza

    2016-11-01

    A new simulation-based automated CMOS analog circuit design method, which applies a multi-objective non-Darwinian-type evolutionary algorithm based on the Learnable Evolution Model (LEM), is proposed in this article. The multi-objective property of this automated design of CMOS analog circuits is governed by a modified Strength Pareto Evolutionary Algorithm (SPEA) incorporated in the LEM algorithm presented here. LEM includes a machine learning method, such as decision trees, that distinguishes between high- and low-fitness areas in the design space. The learning process can detect the right directions of the evolution and lead to large steps in the evolution of the individuals. The learning phase shortens the evolution process and brings a remarkable reduction in the number of individual evaluations. The expert designer's knowledge of the circuit is applied in the design process in order to reduce the design space as well as the design time. Circuit evaluation is performed by the HSPICE simulator. In order to improve the design accuracy, the bsim3v3 CMOS transistor model is adopted in the proposed design method, which is tested on three different operational amplifier circuits. Its performance is verified by comparing it with the evolutionary strategy algorithm and other similar methods.

  2. Design method of large-diameter rock-socketed pile with steel casing

    NASA Astrophysics Data System (ADS)

    Liu, Ming-wei; Fang, Fang; Liang, Yue

    2018-02-01

    Design and calculation methods for large-diameter rock-socketed piles with steel casings are lacking. In combination with the “twelfth five-year plan” of the National Science & Technology Pillar Program of China on “Key technologies on the ports and wharfs constructions of the mountain canalization channels”, this paper puts forward structural design requirements for the concrete, steel bar distribution and steel casing, together with a checking calculation method for the bearing capacity of the normal section of the pile and the maximum crack width at the bottom of the steel casing. The design method should offer some guidance for the design of large-diameter rock-socketed piles with steel casings.

  3. Category's analysis and operational project capacity method of transformation in design

    NASA Astrophysics Data System (ADS)

    Obednina, S. V.; Bystrova, T. Y.

    2015-10-01

    The method of transformation is attracting widespread interest in fields such as contemporary design. However, in design theory little attention has been paid to the categorical status of the term "transformation". This paper presents a conceptual analysis of transformation based on the theory of form employed in the influential essays of Aristotle and Thomas Aquinas. Transformation as a method of shaping in design is explored, and the potential application of the term in design is demonstrated.

  4. General design method for three-dimensional potential flow fields. 1: Theory

    NASA Technical Reports Server (NTRS)

    Stanitz, J. D.

    1980-01-01

    A general design method was developed for steady, three dimensional, potential, incompressible or subsonic-compressible flow. In this design method, the flow field, including the shape of its boundary, was determined for arbitrarily specified, continuous distributions of velocity as a function of arc length along the boundary streamlines. The method applied to the design of both internal and external flow fields, including, in both cases, fields with planar symmetry. The analytic problems associated with stagnation points, closure of bodies in external flow fields, and prediction of turning angles in three dimensional ducts were reviewed.

  5. Expanding color design methods for architecture and allied disciplines

    NASA Astrophysics Data System (ADS)

    Linton, Harold E.

    2002-06-01

    The color design processes of visual artists, architects, designers, and theoreticians included in this presentation reflect the practical role of color in architecture. What the color design professional brings to the architectural design team is an expertise and rich sensibility made up of a broad awareness and a finely tuned visual perception. This includes a knowledge of design and its history, expertise with industrial color materials and their methods of application, an awareness of design context and cultural identity, a background in physiology and psychology as it relates to human welfare, and an ability to problem-solve and respond creatively to design concepts with innovative ideas. The broadening of the definition of the colorists's role in architectural design provides architects, artists and designers with significant opportunities for continued professional and educational development.

  6. New Methods for Design and Computation of Freeform Optics

    DTIC Science & Technology

    2015-07-09

    AFRL-OSR-VA-TR-2015-0160: New Methods for Design and Computation of Free-form Optics. Vladimir Oliker, Emory University. Final Technical Report, May 01, 2012 - April 30, 2015. Grant FA9550-12--1. [Record excerpt; cited fragments include: Springer-Verlag Berlin Heidelberg, 2009; R. Winston, J. C. Miñano, and P. Beńıtez, with contributions by N. Shatz and J. Bortz, Nonimaging Optics, Elsevier Academic Press, Amsterdam, 2005.]

  7. A comparison of methods to estimate future sub-daily design rainfall

    NASA Astrophysics Data System (ADS)

    Li, J.; Johnson, F.; Evans, J.; Sharma, A.

    2017-12-01

    Warmer temperatures are expected to increase extreme short-duration rainfall due to the increased moisture-holding capacity of the atmosphere. While attention has been paid to the impacts of climate change on future design rainfalls at daily or longer time scales, the potential changes in short-duration design rainfalls have often been overlooked due to the limited availability of sub-daily projections and observations. This study uses a high-resolution regional climate model (RCM) to predict the changes in sub-daily design rainfalls for the Greater Sydney region in Australia. Sixteen methods for predicting changes to sub-daily future extremes are assessed based on different options for bias correction, disaggregation and frequency analysis. A Monte Carlo cross-validation procedure is employed to evaluate the skill of each method in estimating the design rainfall for the current climate. It is found that bias correction significantly improves the accuracy of the design rainfall estimated for the current climate. For 1 h events, bias correcting the hourly annual maximum rainfall simulated by the RCM produces design rainfall closest to observations, whereas for multi-hour events, disaggregating the daily rainfall total is recommended. This suggests that the RCM fails to simulate the observed multi-duration rainfall persistence, which is a common issue for most climate models. Despite the significant differences in the estimated design rainfalls between different methods, all methods lead to an increase in design rainfalls across the majority of the study region.

  8. Methods for designing interventions to change healthcare professionals' behaviour: a systematic review.

    PubMed

    Colquhoun, Heather L; Squires, Janet E; Kolehmainen, Niina; Fraser, Cynthia; Grimshaw, Jeremy M

    2017-03-04

    Systematic reviews consistently indicate that interventions to change healthcare professional (HCP) behaviour are haphazardly designed and poorly specified. Clarity about methods for designing and specifying interventions is needed. The objective of this review was to identify published methods for designing interventions to change HCP behaviour. A search of MEDLINE, Embase, and PsycINFO was conducted from 1996 to April 2015. Using inclusion/exclusion criteria, a broad screen of abstracts by one rater was followed by a strict screen of full text for all potentially relevant papers by three raters. An inductive approach was first applied to the included studies to identify commonalities and differences between the descriptions of methods across the papers. Based on this process and knowledge of related literatures, we developed a data extraction framework that included, e.g. level of change (e.g. individual versus organization); context of development; a brief description of the method; tasks included in the method (e.g. barrier identification, component selection, use of theory). 3966 titles and abstracts and 64 full-text papers were screened to yield 15 papers included in the review, each outlining one design method. All of the papers reported methods developed within a specific context. Thirteen papers included barrier identification and 13 included linking barriers to intervention components; although not the same 13 papers. Thirteen papers targeted individual HCPs with only one paper targeting change across individual, organization, and system levels. The use of theory and user engagement were included in 13/15 and 13/15 papers, respectively. There is an agreement across methods of four tasks that need to be completed when designing individual-level interventions: identifying barriers, selecting intervention components, using theory, and engaging end-users. Methods also consist of further additional tasks. 
Examples of methods for designing the organisation and…

  9. Development of probabilistic design method for annular fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozawa, Takayuki

    2007-07-01

    The increase of linear power and burn-up during reactor operation is considered as one measure to ensure the utility of fast reactors in the future; for this, the application of annular oxide fuels is under consideration. The annular fuel design code CEPTAR was developed in the Japan Atomic Energy Agency (JAEA) and verified by using many irradiation experiences with oxide fuels. In addition, the probabilistic fuel design code BORNFREE was also developed to provide a safe and reasonable fuel design and to evaluate the design margins quantitatively. This study aimed at the development of a probabilistic design method for annular oxide fuels; this was implemented in the developed BORNFREE-CEPTAR code, and the code was used to make a probabilistic evaluation with regard to the permissive linear power. (author)

  10. Fast correlation method for passive-solar design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wray, W.O.; Biehl, F.A.; Kosiewicz, C.E.

    1982-01-01

    A passive-solar design manual for single-family detached residences and dormitory-type buildings is being developed. The design procedure employed in the manual is a simplification of the original monthly solar load ratio (SLR) method. The new SLR correlations involve a single constant for each system. The correlation constant appears as a scale factor permitting the use of a universal performance curve for all passive systems. Furthermore, by providing location-dependent correlations between the annual solar heating fraction (SHF) and the minimum monthly SHF, we have eliminated the need to perform an SLR calculation for each month of the heating season.

  11. Simulation methods to estimate design power: an overview for applied research.

    PubMed

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
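The Monte Carlo approach described above can be illustrated with a minimal sketch (not the authors' code): estimate power for a simple two-arm, individually randomized design by repeatedly simulating trial data and counting rejections. The Gaussian outcome model, effect size, and large-sample z-test are illustrative assumptions.

```python
import random
import statistics

def simulate_power(n_per_arm, effect, sd, n_sim=2000, seed=1):
    """Estimate power of a two-arm randomized design by Monte Carlo:
    simulate trials, apply a two-sample z-test, count rejections."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        control = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
        se = (statistics.variance(control) / n_per_arm +
              statistics.variance(treated) / n_per_arm) ** 0.5
        z = (statistics.mean(treated) - statistics.mean(control)) / se
        if abs(z) > 1.96:  # two-sided test at alpha = 0.05
            rejections += 1
    return rejections / n_sim

# A 0.5-SD effect with 64 subjects per arm: analytic power is about 0.80,
# and the simulation should reproduce that figure.
power = simulate_power(n_per_arm=64, effect=0.5, sd=1.0)
```

Extending the sketch to cluster-randomized designs amounts to adding a cluster-level random effect inside the simulation loop, exactly the flexibility the abstract emphasizes.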

  12. System Synthesis in Preliminary Aircraft Design using Statistical Methods

    NASA Technical Reports Server (NTRS)

    DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.

    1996-01-01

    This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions that are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high speed civil transport. Fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses to the conceptual aircraft synthesis and provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).

  13. Assessing Adaptive Instructional Design Tools and Methods in ADAPT[IT].

    ERIC Educational Resources Information Center

    Eseryel, Deniz; Spector, J. Michael

    ADAPT[IT] (Advanced Design Approach for Personalized Training - Interactive Tools) is a European project within the Information Society Technologies program that is providing design methods and tools to guide a training designer according to the latest cognitive science and standardization principles. ADAPT[IT] addresses users in two significantly…

  14. Methods for combining payload parameter variations with input environment. [calculating design limit loads compatible with probabilistic structural design criteria

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.

    1976-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
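As a hedged illustration of the extreme-value idea (not the report's actual procedure), one can simulate per-mission load maxima from a time-domain model, fit a Gumbel (extreme-value type I) distribution, and read off a high-percentile design limit load. The Gaussian load model, the method-of-moments fit, and the 99% level are all assumptions introduced for this sketch.

```python
import math
import random
import statistics

def mission_maxima(n_missions, loads_per_mission, seed=0):
    """Time-domain simulation: draw a load history per mission (Gaussian here,
    an assumption) and keep each mission's largest load."""
    rng = random.Random(seed)
    return [max(rng.gauss(0.0, 1.0) for _ in range(loads_per_mission))
            for _ in range(n_missions)]

def gumbel_design_load(maxima, probability):
    """Fit a Gumbel distribution to the mission maxima by the method of
    moments, then return the load at the given non-exceedance probability --
    the 'particular value' used as the design limit load."""
    beta = statistics.stdev(maxima) * math.sqrt(6.0) / math.pi   # scale
    mu = statistics.mean(maxima) - 0.5772156649 * beta           # location
    return mu - beta * math.log(-math.log(probability))

maxima = mission_maxima(n_missions=5000, loads_per_mission=1000)
design_load = gumbel_design_load(maxima, probability=0.99)
```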

  15. Mixed methods research: a design for emergency care research?

    PubMed

    Cooper, Simon; Porter, Jo; Endacott, Ruth

    2011-08-01

    This paper follows previous publications on generic qualitative approaches, qualitative designs and action research in emergency care by this group of authors. Contemporary views on mixed methods approaches are considered, with a particular focus on the design choice and the amalgamation of qualitative and quantitative data emphasising the timing of data collection for each approach, their relative 'weight' and how they will be mixed. Mixed methods studies in emergency care are reviewed before the variety of methodological approaches and best practice considerations are presented. The use of mixed methods in clinical studies is increasing, aiming to answer questions such as 'how many' and 'why' in the same study, and as such are an important and useful approach to many key questions in emergency care.

  16. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov-metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criterion based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
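A minimal sketch of the underlying FIM computation, assuming a two-parameter (r, K) Verhulst-Pearl logistic model with known initial value x0 and illustrative parameter values; this is not the authors' Prohorov-metric framework, only the D-optimality comparison of two candidate sampling schedules that such frameworks build on.

```python
import math

def logistic(t, r, K, x0=10.0):
    """Verhulst-Pearl logistic solution x(t) with growth rate r, capacity K."""
    return K * x0 / (x0 + (K - x0) * math.exp(-r * t))

def fim_det(times, r=0.7, K=100.0, sigma=1.0, h=1e-6):
    """Determinant of the 2x2 Fisher Information Matrix for (r, K), with model
    sensitivities computed by central finite differences."""
    a = b = c = 0.0  # FIM = [[a, b], [b, c]]
    for t in times:
        dr = (logistic(t, r + h, K) - logistic(t, r - h, K)) / (2 * h)
        dK = (logistic(t, r, K + h) - logistic(t, r, K - h)) / (2 * h)
        a += dr * dr / sigma ** 2
        b += dr * dK / sigma ** 2
        c += dK * dK / sigma ** 2
    return a * c - b * b

# D-optimality compares designs via det(FIM): a larger determinant means a
# smaller asymptotic confidence region for the parameters.
spread = [2.0 * i for i in range(1, 11)]   # t = 2, 4, ..., 20: growth and plateau
early = [0.5 * i for i in range(1, 11)]    # t = 0.5, ..., 5: growth phase only
```

Sampling both the growth phase (informative about r) and the plateau (informative about K) should yield the larger determinant.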

  17. Study of Fuze Structure and Reliability Design Based on the Direct Search Method

    NASA Astrophysics Data System (ADS)

    Lin, Zhang; Ning, Wang

    2017-03-01

    Redundant design is one of the important methods to improve the reliability of a system, but mutual coupling of multiple factors is often involved in the design. In this study, the Direct Search Method is introduced into the optimum redundancy configuration for design optimization, in which the reliability, cost, structural weight and other factors can be taken into account simultaneously, and the redundant allocation and reliability design of an aircraft-critical system are computed. The results show that this method is convenient and workable, and applicable to the redundancy configurations and optimization of various designs upon appropriate modifications. This method has good practical value.

  18. Launch Vehicle Design and Optimization Methods and Priority for the Advanced Engineering Environment

    NASA Technical Reports Server (NTRS)

    Rowell, Lawrence F.; Korte, John J.

    2003-01-01

    NASA's Advanced Engineering Environment (AEE) is a research and development program that will improve collaboration among design engineers for launch vehicle conceptual design and provide the infrastructure (methods and framework) necessary to enable that environment. In this paper, three major technical challenges facing the AEE program are identified, and three specific design problems are selected to demonstrate how advanced methods can improve current design activities. References are made to studies that demonstrate these design problems and methods, and these studies will provide the detailed information and check cases to support incorporation of these methods into the AEE. This paper provides background and terminology for discussing the launch vehicle conceptual design problem so that the diverse AEE user community can participate in prioritizing the AEE development effort.

  19. Invisibility problem in acoustics, electromagnetism and heat transfer. Inverse design method

    NASA Astrophysics Data System (ADS)

    Alekseev, G.; Tokhtina, A.; Soboleva, O.

    2017-10-01

    Two approaches (direct design and inverse design methods) for solving problems of designing devices providing invisibility of material bodies to detection using different physical fields (electromagnetic, acoustic and static) are discussed. The second method is applied for solving problems of designing cloaking devices for the 3D stationary thermal scattering model. Based on this method the design problems under study are reduced to respective control problems. The material parameters (radial and tangential heat conductivities) of the inhomogeneous anisotropic medium filling the thermal cloak and the density of auxiliary heat sources play the role of controls. A unique solvability of the direct thermal scattering problem in the Sobolev space is proved and new estimates of solutions are established. Using these results, the solvability of the control problem is proved and the optimality system is derived. Based on analysis of the optimality system, stability estimates of optimal solutions are established and numerical algorithms for solving a particular thermal cloaking problem are proposed.

  20. Novel TMS coils designed using an inverse boundary element method

    NASA Astrophysics Data System (ADS)

    Cobos Sánchez, Clemente; María Guerrero Rodriguez, Jose; Quirós Olozábal, Ángel; Blanco-Navarro, David

    2017-01-01

    In this work, a new method to design TMS coils is presented. It is based on the inclusion of the concept of the stream function of a quasi-static electric current into a boundary element method. The proposed TMS coil design approach is a powerful technique to produce stimulators of arbitrary shape, and remarkably versatile as it permits the prototyping of many different performance requirements and constraints. To illustrate the power of this approach, it has been used for the design of TMS coils wound on rectangular flat, spherical and hemispherical surfaces, subjected to different constraints, such as minimum stored magnetic energy or power dissipation. The performances of such coils have been additionally described, and the torque experienced by each stimulator in the presence of a main static magnetic field has been theoretically determined in order to study the prospect of using them to perform TMS and fMRI concurrently. The obtained results show that the described method is an efficient tool for the design of TMS stimulators, which can be applied to a wide range of coil geometries and performance requirements.

  1. Design method of high-efficient LED headlamp lens.

    PubMed

    Chen, Fei; Wang, Kai; Qin, Zong; Wu, Dan; Luo, Xiaobing; Liu, Sheng

    2010-09-27

    Low optical efficiency of light-emitting diode (LED) based headlamps is one of the most important issues obstructing the application of LEDs in headlamps. An effective design method for high-efficiency LED headlamp freeform lenses is introduced in this paper. A low-beam lens and a high-beam lens for an LED headlamp are designed according to this method. Monte Carlo ray-tracing simulation results demonstrate that the LED headlamp with these two lenses can fully comply with the ECE regulation without any other lens or reflector. Moreover, the optical efficiencies of both lenses exceed 88% in theory.

  2. Optimization Design of Minimum Total Resistance Hull Form Based on CFD Method

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Zhang, Sheng-long; Zhang, Hui

    2018-06-01

    In order to reduce the resistance and improve the hydrodynamic performance of a ship, two hull form design methods are proposed based on the potential flow theory and viscous flow theory. The flow fields are meshed using body-fitted mesh and structured grids. The parameters of the hull modification function are the design variables. A three-dimensional modeling method is used to alter the geometry. The Non-Linear Programming (NLP) method is utilized to optimize a David Taylor Model Basin (DTMB) model 5415 ship under the constraints, including the displacement constraint. The optimization results show an effective reduction of the resistance. The two hull form design methods developed in this study can provide technical support and theoretical basis for designing green ships.

  3. Intrinsic Brightness Temperatures of AGN Jets

    NASA Astrophysics Data System (ADS)

    Homan, D. C.; Kovalev, Y. Y.; Lister, M. L.; Ros, E.; Kellermann, K. I.; Cohen, M. H.; Vermeulen, R. C.; Zensus, J. A.; Kadler, M.

    2006-05-01

    We present a new method for studying the intrinsic brightness temperatures of the parsec-scale jet cores of active galactic nuclei (AGNs). Our method uses observed superluminal motions and observed brightness temperatures for a large sample of AGNs to constrain the characteristic intrinsic brightness temperature of the sample as a whole. To study changes in intrinsic brightness temperature, we assume that the Doppler factors of individual jets are constant in time, as justified by their relatively small changes in observed flux density. We find that in their median-low brightness temperature state, the sources in our sample have a narrow range of intrinsic brightness temperatures centered on a characteristic temperature, T_int ≈ 3×10^10 K, which is close to the value expected for equipartition, when the energy in the radiating particles equals the energy stored in the magnetic fields. However, in their maximum brightness state, we find that sources in our sample have a characteristic intrinsic brightness temperature greater than 2×10^11 K, which is well in excess of the equipartition temperature. In this state, we estimate that the energy in radiating particles exceeds the energy in the magnetic field by a factor of ~10^5. We suggest that the excess of particle energy when sources are in their maximum brightness state is due to injection or acceleration of particles at the base of the jet. Our results suggest that the common method of estimating jet Doppler factors by using a single measurement of observed brightness temperature, the assumption of equipartition, or both may lead to large scatter or systematic errors in the derived values.

  4. Core shifts, magnetic fields and magnetization of extragalactic jets

    NASA Astrophysics Data System (ADS)

    Zdziarski, Andrzej A.; Sikora, Marek; Pjanka, Patryk; Tchekhovskoy, Alexander

    2015-07-01

    We study the effect of radio-jet core shift, which is a dependence of the position of the jet radio core on the observational frequency. We derive a new method of measuring the jet magnetic field based on both the value of the shift and the observed radio flux, which complements the standard method that assumes equipartition. Using both methods, we re-analyse the blazar sample of Zamaninasab et al. We find that equipartition is satisfied only if the jet opening angle in the radio core region is close to the values found observationally, ≃0.1-0.2 divided by the bulk Lorentz factor, Γj. Larger values, e.g. 1/Γj, would imply magnetic fields much above equipartition. A small jet opening angle implies in turn a magnetization parameter of ≪1. We determine the jet magnetic flux taking into account this effect. We find that the transverse-averaged jet magnetic flux is fully compatible with the model of jet formation due to black hole (BH) spin-energy extraction and the accretion being a magnetically arrested disc (MAD). We calculate the jet average mass-flow rate corresponding to this model and find it amounts to a substantial fraction of the mass accretion rate. This suggests a jet composition with a large fraction of baryons. We also calculate the average jet power, and find it moderately exceeds the accretion power, Ṁc², reflecting BH spin energy extraction. We find our results for radio galaxies at low Eddington ratios are compatible with MADs but require a low radiative efficiency, as predicted by standard accretion models.

  5. Railroad Classification Yard Technology Manual. Volume I : Yard Design Methods

    DOT National Transportation Integrated Search

    1981-02-01

    This volume documents the procedures and methods associated with the design of railroad classification yards. Subjects include: site location, economic analysis, yard capacity analysis, design of flat yards, overall configuration of hump yards, hump ...

  6. Achieving integration in mixed methods designs-principles and practices.

    PubMed

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-12-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.

  7. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
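A toy sketch of the clustering step, under assumptions not in the abstract (two clusters, a balance-penalized cut objective, and an invented six-adjective weight matrix): a small genetic algorithm partitions adjectives so that strongly linked ones share a cluster. The NDSM cell values follow the four-point scale the abstract describes.

```python
import random

def partition_cost(assign, weights, penalty=10):
    """Cost of a two-cluster partition: total link weight cut between clusters,
    plus a penalty for unbalanced cluster sizes. Lower is better."""
    n = len(assign)
    cut = sum(weights[i][j] for i in range(n) for j in range(i + 1, n)
              if assign[i] != assign[j])
    return cut + penalty * abs(assign.count(0) - assign.count(1))

def ga_cluster(weights, pop_size=40, generations=120, seed=3):
    """Tiny genetic algorithm: a chromosome assigns each adjective to cluster
    0 or 1; elitist selection, one-point crossover, per-gene mutation."""
    rng = random.Random(seed)
    n = len(weights)
    pop = [[rng.randrange(2) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: partition_cost(c, weights))
        pop = pop[:pop_size // 2]                     # keep the cheapest half
        survivors = list(pop)
        while len(pop) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut_pt = rng.randrange(1, n)
            child = p1[:cut_pt] + p2[cut_pt:]         # one-point crossover
            child = [1 - g if rng.random() < 0.3 else g  # heavy mutation keeps
                     for g in child]                     # the toy search exploring
            pop.append(child)
    return min(pop, key=lambda c: partition_cost(c, weights))

# Hypothetical 4-point-scale link weights for six adjectives (0 = unrelated,
# 3 = strongly related); adjectives 0-2 and 3-5 form two natural groups.
W = [[0, 3, 3, 0, 0, 1],
     [3, 0, 3, 1, 0, 0],
     [3, 3, 0, 0, 1, 0],
     [0, 1, 0, 0, 3, 3],
     [0, 0, 1, 3, 0, 3],
     [1, 0, 0, 3, 3, 0]]
best = ga_cluster(W)
```

On this matrix the algorithm should recover the two planted groups; real Kansei studies would of course use far more adjectives and a cluster count chosen by the designers.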

  8. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709

  9. The equivalent magnetizing method applied to the design of gradient coils for MRI.

    PubMed

    Lopez, Hector Sanchez; Liu, Feng; Crozier, Stuart

    2008-01-01

    This paper presents a new method for the design of gradient coils for Magnetic Resonance Imaging systems. The method is based on the equivalence between a magnetized volume surrounded by a conducting surface and its equivalent representation in surface current/charge density. We demonstrate that the curl of the vertical magnetization induces a surface current density whose stream lines define the coil current pattern. This method can be applied to coils wound on arbitrarily shaped surfaces. A single-layer unshielded transverse gradient coil is designed and compared with the designs obtained using two conventional methods. Through the presented example we demonstrate that the unconventional current patterns generated by the magnetizing current method produce superior gradient coil performance compared with coils designed using conventional methods.

  10. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  11. Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods

    PubMed Central

    Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.

    2012-01-01

    Rationale and Objectives Multi-reader imaging trials often use a factorial design, where study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of the design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper we compare three methods of analysis for the split-plot design. Materials and Methods Three statistical methods are presented: Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power and confidence interval coverage of the three test statistics. Results The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs fall close to the nominal coverage for small and large sample sizes. Conclusions The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570

  12. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  13. Predictive Array Design. A method for sampling combinatorial chemistry library space.

    PubMed

    Lipkin, M J; Rose, V S; Wood, J

    2002-01-01

    A method, Predictive Array Design, is presented for sampling combinatorial chemistry space and selecting a sub-array for synthesis based on the experimental design method of Latin Squares. The method is appropriate for libraries with three sites of variation; libraries with four sites of variation can be designed using the Graeco-Latin Square. Simulated annealing is used to optimise the physicochemical property profile of the sub-array. The sub-array can be used to predict the activity of compounds in the all-combinations array, provided we assume that each monomer makes a relatively constant contribution to activity and that a compound's activity is the sum of the activities of its constituent monomers.
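
    To make the sampling idea concrete, here is a small hypothetical sketch (not the authors' software): a three-site library is sampled with a Latin-square sub-array of n^2 compounds out of the full n^3, per-monomer contributions are fitted by least squares under the additivity assumption, and the activity of an unsynthesised compound is then predicted.

```python
import numpy as np

# Hypothetical 3-site library, n monomers per site; the contributions are
# synthetic stand-ins for real monomer activities.
rng = np.random.default_rng(1)
n = 5
contrib = rng.normal(size=(3, n))

def activity(i, j, k):
    # Additivity assumption: activity is the sum of monomer contributions
    return contrib[0, i] + contrib[1, j] + contrib[2, k]

# Latin-square sub-array: for each pair (i, j) synthesise k = (i + j) mod n,
# giving n^2 compounds instead of the full n^3 array
subarray = [(i, j, (i + j) % n) for i in range(n) for j in range(n)]

# Fit per-monomer contributions from the sub-array by least squares
A = np.zeros((len(subarray), 3 * n))
y = np.zeros(len(subarray))
for row, (i, j, k) in enumerate(subarray):
    A[row, i] = A[row, n + j] = A[row, 2 * n + k] = 1.0
    y[row] = activity(i, j, k)
x, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict a compound that is NOT in the sub-array
i, j, k = 0, 1, 3            # (0 + 1) % 5 == 1 != 3, so not synthesised
pred = x[i] + x[n + j] + x[2 * n + k]
print(abs(pred - activity(i, j, k)) < 1e-6)
```

    Because the Latin square lets all three main effects be estimated, the additive model recovers every compound's activity from only n^2 syntheses; real activities include interaction terms, which is why the additivity assumption is stated explicitly in the abstract.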

  14. Needs and Opportunities for Uncertainty-Based Multidisciplinary Design Methods for Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Hemsch, Michael J.; Hilburger, Mark W.; Kenny, Sean P; Luckring, James M.; Maghami, Peiman; Padula, Sharon L.; Stroud, W. Jefferson

    2002-01-01

    This report consists of a survey of the state of the art in uncertainty-based design together with recommendations for a base research activity in this area for the NASA Langley Research Center. This report identifies the needs and opportunities for computational and experimental methods that provide accurate, efficient solutions to nondeterministic multidisciplinary aerospace vehicle design problems. Barriers to the adoption of uncertainty-based design methods are identified, and the benefits of the use of such methods are explained. Particular research needs are listed.

  15. Acoustic Treatment Design Scaling Methods. Volume 1; Overview, Results, and Recommendations

    NASA Technical Reports Server (NTRS)

    Kraft, R. E.; Yu, J.

    1999-01-01

    Scale model fan rigs that simulate new generation ultra-high-bypass engines at about 1/5-scale are achieving increased importance as development vehicles for the design of low-noise aircraft engines. Testing at small scale allows the tests to be performed in existing anechoic wind tunnels, which provides an accurate simulation of the important effects of aircraft forward motion on the noise generation. The ability to design, build, and test miniaturized acoustic treatment panels on scale model fan rigs representative of the full-scale engine provides not only cost savings but also an opportunity to optimize the treatment by allowing tests of different designs. The primary objective of this study was to develop methods that will allow scale model fan rigs to be successfully used as acoustic treatment design tools. The study focuses on finding methods to extend the upper limit of the frequency range of impedance prediction models and acoustic impedance measurement methods for subscale treatment liner designs, and on confirming the predictions by correlation with measured data. This phase of the program had as a goal doubling the upper limit of impedance measurement from 6 kHz to 12 kHz. The program utilizes combined analytical and experimental methods to achieve the objectives.

  16. A Mixed Methods Investigation of Mixed Methods Sampling Designs in Social and Health Science Research

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.

    2007-01-01

    A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…

  17. Numerical methods for the design of gradient-index optical coatings.

    PubMed

    Anzengruber, Stephan W; Klann, Esther; Ramlau, Ronny; Tonova, Diana

    2012-12-01

    We formulate the problem of designing gradient-index optical coatings as the task of solving a system of operator equations. We use iterative numerical procedures known from the theory of inverse problems to solve it with respect to the coating refractive index profile and thickness. The mathematical derivations necessary for the application of the procedures are presented, and different numerical methods (Landweber, Newton, and Gauss-Newton methods, Tikhonov minimization with surrogate functionals) are implemented. Procedures for the transformation of the gradient coating designs into quasi-gradient ones (i.e., multilayer stacks of homogeneous layers with different refractive indices) are also developed. The design algorithms work with physically available coating materials that could be produced with the modern coating technologies.
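
    Of the iterative procedures named, Landweber iteration is the simplest to sketch. The toy problem below is a generic linear inverse problem standing in for the coating-design operator equations, not the authors' actual system:

```python
import numpy as np

# Toy linear inverse problem A x = y, a stand-in for the (nonlinear)
# operator equations relating refractive-index profile to spectral response
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
x_true = rng.normal(size=10)
y = A @ x_true

# Landweber iteration: x_{k+1} = x_k + w A^T (y - A x_k),
# convergent for 0 < w < 2 / ||A||_2^2
w = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(10)
for _ in range(5000):
    x = x + w * A.T @ (y - A @ x)

print(np.linalg.norm(x - x_true) < 1e-3)
```

    For noisy data, the iteration count acts as a regularization parameter (early stopping), which is one reason such methods suit ill-posed design problems.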

  18. Kinematic Methods of Designing Free Form Shells

    NASA Astrophysics Data System (ADS)

    Korotkiy, V. A.; Khmarova, L. I.

    2017-11-01

    The geometric shell model is formed to meet a set of requirements expressed through surface parameters. The shell is modelled using the kinematic method, according to which the shell is formed as a continuous one-parameter set of curves. The authors offer a kinematic method based on the use of second-order curves with variable eccentricity as the form-making element. Additional guiding ruled surfaces are used to control the form of the designed surface. The authors developed a software application that plots a second-order curve specified by an arbitrary set of five coplanar points and tangents.
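
    The abstract does not describe the plotting algorithm; one standard way to obtain the second-order (conic) curve through five coplanar points is to take the nullspace of the design matrix of the general conic equation, e.g.:

```python
import numpy as np

def conic_through_points(pts):
    """Coefficients (a, b, c, d, e, f) of the conic
    a x^2 + b x y + c y^2 + d x + e y + f = 0 through five points."""
    M = np.array([[x * x, x * y, y * y, x, y, 1.0] for x, y in pts])
    # For five points in general position the nullspace is one-dimensional:
    # take the last right singular vector
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]

# Five points on the ellipse x^2/4 + y^2 = 1
pts = [(2.0, 0.0), (-2.0, 0.0), (0.0, 1.0), (0.0, -1.0),
       (np.sqrt(2.0), np.sqrt(0.5))]
a, b, c, d, e, f = conic_through_points(pts)
# Every input point satisfies the fitted equation
print(all(abs(a*x*x + b*x*y + c*y*y + d*x + e*y + f) < 1e-9 for x, y in pts))
```

    Tangency conditions enter the same linear system as additional rows, so the mixed point-and-tangent specification in the abstract fits the same framework.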

  19. Preliminary demonstration of a robust controller design method

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1980-01-01

    Alternative computational procedures for obtaining a feedback control law which yields a control signal based on measurable quantities are evaluated. The three methods evaluated are: (1) the standard linear quadratic regulator design model; (2) minimization of the norm of the feedback matrix K via nonlinear programming, subject to the constraint that the closed-loop eigenvalues lie in a specified domain of the complex plane; and (3) maximization of the angles between the closed-loop eigenvectors combined with minimization of the norm of K, also via constrained nonlinear programming. The third, or robust, design method was chosen to yield a closed-loop system whose eigenvalues are insensitive to small changes in the A and B matrices. The relationship between the orthogonality of closed-loop eigenvectors and the sensitivity of closed-loop eigenvalues is described. Computer programs are described.
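
    Method (1), the linear quadratic regulator, can be sketched compactly. The plant below is an illustrative double integrator, not the system studied in the report; the algebraic Riccati equation is solved by the Hamiltonian eigenvector method so that only numpy is needed:

```python
import numpy as np

# Double-integrator plant x1' = x2, x2' = u, with quadratic weights
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

# Solve the algebraic Riccati equation via the stable invariant subspace
# of the Hamiltonian matrix, then form the LQR gain K = R^-1 B^T P
H = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
              [-Q, -A.T]])
vals, vecs = np.linalg.eig(H)
stable = vecs[:, vals.real < 0]          # basis of the stable subspace
n = A.shape[0]
P = np.real(stable[n:] @ np.linalg.inv(stable[:n]))
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - B K is Hurwitz: all eigenvalues in the LHP
eigs = np.linalg.eigvals(A - B @ K)
print(all(e.real < 0 for e in eigs))
```

    For this plant the gain works out to K = [1, sqrt(3)], placing the closed-loop poles at -sqrt(3)/2 +/- i/2; methods (2) and (3) in the abstract instead shape K and the eigenvectors directly through constrained optimization.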

  20. Breaking from binaries - using a sequential mixed methods design.

    PubMed

    Larkin, Patricia Mary; Begley, Cecily Marion; Devane, Declan

    2014-03-01

    To outline the traditional worldviews of healthcare research and discuss the benefits and challenges of using mixed methods approaches in contributing to the development of nursing and midwifery knowledge. There has been much debate about the contribution of mixed methods research to nursing and midwifery knowledge in recent years. A sequential exploratory design is used as an exemplar of a mixed methods approach. The study discussed used a combination of focus-group interviews and a quantitative instrument to obtain a fuller understanding of women's experiences of childbirth. In the mixed methods study example, qualitative data were analysed using thematic analysis and quantitative data using regression analysis. Polarised debates about the veracity, philosophical integrity and motivation for conducting mixed methods research have largely abated. A mixed methods approach can contribute to a deeper, more contextual understanding of a variety of subjects and experiences; as a result, it furthers knowledge that can be used in clinical practice. The purpose of the research study should be the main instigator when choosing from an array of mixed methods research designs. Mixed methods research offers a variety of models that can augment investigative capabilities and provide richer data than can a discrete method alone. This paper offers an example of an exploratory, sequential approach to investigating women's childbirth experiences. A clear framework for the conduct and integration of the different phases of the mixed methods research process is provided. This approach can be used by practitioners and policy makers to improve practice.

  1. Epidemiological designs for vaccine safety assessment: methods and pitfalls.

    PubMed

    Andrews, Nick

    2012-09-01

    Three commonly used designs for vaccine safety assessment post licensure are cohort, case-control and self-controlled case series. These methods are often used with routine health databases and immunisation registries. This paper considers the issues that may arise when designing an epidemiological study, such as understanding the vaccine safety question, case definition and finding, limitations of data sources, uncontrolled confounding, and pitfalls that apply to the individual designs. The example of MMR and autism, where all three designs have been used, is presented to help consider these issues. Copyright © 2011 The International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.

  2. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    PubMed

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  3. Design of an explosive detection system using Monte Carlo method.

    PubMed

    Hernández-Adame, Pablo Luis; Medina-Castro, Diego; Rodriguez-Ibarra, Johanna Lizbeth; Salas-Luevano, Miguel Angel; Vega-Carrillo, Hector Rene

    2016-11-01

    Regardless of the motivation, terrorism is among the most serious national security risks in many countries, and attacks with explosives are the method terrorists use most often. Several procedures are therefore employed to detect explosives, among them methods based on neutrons and photons. In this study, an explosive detection system using a 241AmBe neutron source was designed with the Monte Carlo method. Light water, paraffin, polyethylene, and graphite were considered as moderators. The explosive RDX was used, and the gamma rays induced by neutron capture in the explosive were estimated with NaI(Tl) and HPGe detectors. With light water as the moderator and HPGe as the detector, the system performs best, allowing the explosive to be distinguished from urea. For the final design, the ambient dose equivalent for neutrons and photons was estimated along the radial and axial axes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Designing A Mixed Methods Study In Primary Care

    PubMed Central

    Creswell, John W.; Fetters, Michael D.; Ivankova, Nataliya V.

    2004-01-01

    BACKGROUND Mixed methods or multimethod research holds potential for rigorous, methodologically sound investigations in primary care. The objective of this study was to use criteria from the literature to evaluate 5 mixed methods studies in primary care and to advance 3 models useful for designing such investigations. METHODS We first identified criteria from the social and behavioral sciences to analyze mixed methods studies in primary care research. We then used the criteria to evaluate 5 mixed methods investigations published in primary care research journals. RESULTS Of the 5 studies analyzed, 3 included a rationale for mixing based on the need to develop a quantitative instrument from qualitative data or to converge information to best understand the research topic. Quantitative data collection involved structured interviews, observational checklists, and chart audits that were analyzed using descriptive and inferential statistical procedures. Qualitative data consisted of semistructured interviews and field observations that were analyzed using coding to develop themes and categories. The studies showed diverse forms of priority: equal priority, qualitative priority, and quantitative priority. Data collection involved quantitative and qualitative data gathered both concurrently and sequentially. The integration of the quantitative and qualitative data in these studies occurred between data analysis from one phase and data collection from a subsequent phase, while analyzing the data, and when reporting the results. DISCUSSION We recommend instrument-building, triangulation, and data transformation models for mixed methods designs as useful frameworks to add rigor to investigations in primary care. We also discuss the limitations of our study and the need for future research. PMID:15053277

  5. Preliminary Axial Flow Turbine Design and Off-Design Performance Analysis Methods for Rotary Wing Aircraft Engines. Part 1; Validation

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng, S.

    2009-01-01

    For the preliminary design and off-design performance analysis of axial flow turbines, a pair of intermediate-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction for modern high-performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented in references 3 and 4 (for TD2-2) and in reference 5 (for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces for performing preliminary design and off-design analysis of modern aircraft engine turbines. Two validation cases of design and off-design prediction using TD2-2 and AXOD were conducted on two existing high-efficiency turbines developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program: the High Pressure Turbine (HPT; two stages, air-cooled) and the Low Pressure Turbine (LPT; five stages, uncooled). These cases are provided in support of the analysis and discussion presented in this paper.

  6. Research on Visualization Design Method in the Field of New Media Software Engineering

    NASA Astrophysics Data System (ADS)

    Deqiang, Hu

    2018-03-01

    In a period of rapidly developing science and technology, with increasingly fierce market competition and growing user demand, a new design and application method has emerged in the field of new media software engineering: the visualization design method. Applying the visualization design method to new media software engineering not only improves the operational efficiency of software engineering projects; more importantly, the quality of software development can be enhanced through appropriate media of communication and transformation, which in turn promotes the progress and development of new media software engineering in China. This article therefore analyses the application of the visualization design method in new media software engineering, beginning with an overview of visualization design methods and building on a systematic analysis of their basic technology.

  7. Design of Intelligent Hydraulic Excavator Control System Based on PID Method

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Jiao, Shengjie; Liao, Xiaoming; Yin, Penglong; Wang, Yulin; Si, Kuimao; Zhang, Yi; Gu, Hairong

    Most domestically designed hydraulic excavators adopt the constant-power design method and set 85%~90% of engine power as the power absorbed by the hydraulic system, which causes high energy loss due to the power mismatch between the engine and the pump. Because variations in engine rotational speed reflect shifts in load power, speed sensing provides a new way to match pump power to engine power. Based on a negative-flux hydraulic system, an intelligent hydraulic excavator control system using the rotational speed sensing method was designed to improve energy efficiency. The control system consisted of an engine control module, a pump power adjustment module, an engine idle module, and a system fault diagnosis module. A dedicated PLC with a CAN bus acquired the sensor signals and adjusted the pump absorption power according to load variation. Four energy-saving control strategies were employed alongside the constant-power method to improve fuel utilization. Three power modes (H, S, and L) were designed to meet different working conditions; an auto-idle function saved energy by detecting working status through two pressure switches, with 1300 rpm set as the idle speed according to the engine fuel consumption curve; and a transient overload function allowed brief deep digging without extra fuel consumption. An incremental PID method was employed to realize power matching between engine and pump: the variation in rotational speed was the PID algorithm's input, and the current of the variable-displacement pump's proportional valve was its output. The results indicated that auto idle could decrease fuel consumption by 33.33% compared with running at maximum speed in H mode, and that the PID control method made full use of the maximum engine power in each power mode while keeping the engine speed within a stable range. The rotational speed sensing method thus provides a reliable way to improve the excavator's energy efficiency.
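
    The incremental (velocity-form) PID law described above computes only the change in controller output at each step. The sketch below uses illustrative gains and a toy first-order plant, not the excavator's dynamics; 1300 rpm is taken as the target speed as in the abstract:

```python
# Velocity-form ("incremental") PID on a toy first-order plant. The gains,
# plant model, and initial speed are illustrative, not the excavator's.
Kp, Ki, Kd = 0.8, 0.3, 0.05
setpoint = 1300.0            # target engine speed, rpm
speed, u = 1000.0, 0.0       # plant output and controller output
e1 = e2 = 0.0                # errors at steps k-1 and k-2

for _ in range(200):
    e = setpoint - speed
    # Incremental PID: delta_u = Kp*(e_k - e_{k-1}) + Ki*e_k
    #                          + Kd*(e_k - 2 e_{k-1} + e_{k-2})
    du = Kp * (e - e1) + Ki * e + Kd * (e - 2.0 * e1 + e2)
    u += du
    e2, e1 = e1, e
    # First-order plant: unit gain from u on top of a 1000 rpm base speed
    speed += 0.2 * (u + 1000.0 - speed)

print(abs(speed - setpoint) < 1.0)
```

    Accumulating deltas gives bumpless transfer and simple anti-windup limiting, which is why the incremental form is common on PLC-based controllers like the one in the abstract.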

  8. AI/OR computational model for integrating qualitative and quantitative design methods

    NASA Technical Reports Server (NTRS)

    Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor

    1990-01-01

    A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.

  9. An improved design method for EPC middleware

    NASA Astrophysics Data System (ADS)

    Lou, Guohuan; Xu, Ran; Yang, Chunming

    2014-04-01

    To address the problems and difficulties that small and medium-sized enterprises currently encounter when implementing middleware according to the EPC (Electronic Product Code) ALE (Application Level Events) specification, an improved design method for EPC middleware is presented, based on an analysis of the principles of EPC middleware. The method exploits the capabilities of the MySQL database, using the database to connect reader-writers with the upper application system instead of developing the ALE application programming interface, to achieve middleware with general functionality. The structure is simple and easy to implement and maintain; different types of reader-writers can be added and configured conveniently, and the expandability of the system is improved.

  10. A procedural method for the efficient implementation of full-custom VLSI designs

    NASA Technical Reports Server (NTRS)

    Belk, P.; Hickey, N.

    1987-01-01

    An embedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that through the judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.

  11. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.

  12. An artificial viscosity method for the design of supercritical airfoils

    NASA Technical Reports Server (NTRS)

    Mcfadden, G. B.

    1979-01-01

    A numerical technique is presented for the design of two-dimensional supercritical wing sections with low wave drag. The method is a design mode of the analysis code H, which gives excellent agreement with experimental results and is widely used in the aircraft industry. Topics covered include the partial differential equations of transonic flow; the computational procedure and results; the design procedure; a convergence theorem; and a description of the code.

  13. Applications of a direct/iterative design method to complex transonic configurations

    NASA Technical Reports Server (NTRS)

    Smith, Leigh Ann; Campbell, Richard L.

    1992-01-01

    The current study explores the use of an automated direct/iterative design method for the reduction of drag in transport configurations, including configurations with engine nacelles. The method requires the user to choose a proper target-pressure distribution and then develops a corresponding airfoil section. The method can be applied to two-dimensional airfoil sections or to three-dimensional wings. The three cases presented show successful application of the method for reducing drag from various sources. The first two cases demonstrate the use of the method to reduce induced drag by designing to an elliptic span-load distribution and to reduce wave drag by decreasing the shock strength for a given lift. In the third case, a body-mounted nacelle is added, and the method is successfully used to eliminate the increase in wing drag associated with the nacelle addition by redesigning the wing, in combination with the given underwing nacelle, to clean-wing target-pressure distributions. These cases illustrate several possible uses of the method for reducing different types of drag. The magnitude of the obtainable drag reduction varies with the constraints of the problem and the configuration to be modified.

  14. Optimization of rotor shaft shrink fit method for motor using "Robust design"

    NASA Astrophysics Data System (ADS)

    Toma, Eiji

    2018-01-01

    This research is a collaborative investigation with a general-purpose motor manufacturer. To review the construction method used in the production process, we applied the parameter design method of quality engineering and sought to optimize the construction method. Conventionally, press fitting has been used to fit the rotor core and the shaft, the main components of the motor, but quality defects such as shaft deflection occurred during press fitting. In this research, the shrink-fitting method by high-frequency induction heating, devised as a new construction method, was optimized; the method proved feasible, and the optimum processing conditions were extracted.

  15. The Influence of Values and Rich Conditions on Designers' Judgments about Useful Instructional Methods

    ERIC Educational Resources Information Center

    Honebein, Peter C.

    2017-01-01

    An instructional designer's values about instructional methods can be a curse or a cure. On one hand, a designer's love affair for a method may cause them to use that method in situations that are not appropriate. On the other hand, that same love affair may inspire a designer to fight for a method when those in power are willing to settle for a…

  16. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series, and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. Comparison with Monte Carlo simulation demonstrates that the proposed methodology provides an accurate, convergent, and computationally efficient method for reliability analysis in finite-element-based engineering practice.
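
    For a linear limit state with normal variables, the moment-based reliability estimate that such methodologies build on can be checked directly against Monte Carlo simulation. The numbers below are illustrative, not the paper's CMC component data:

```python
import numpy as np
from math import erf, sqrt

# Limit state g = R - S (strength minus load), both normal.
# Illustrative parameters, not the paper's component data.
muR, sigR = 500.0, 40.0
muS, sigS = 350.0, 30.0

# Moment method: reliability index beta, failure probability Phi(-beta)
beta = (muR - muS) / sqrt(sigR ** 2 + sigS ** 2)
pf_moment = 0.5 * (1.0 - erf(beta / sqrt(2.0)))

# Monte Carlo check of the same limit state
rng = np.random.default_rng(0)
N = 1_000_000
g = rng.normal(muR, sigR, N) - rng.normal(muS, sigS, N)
pf_mc = float(np.mean(g < 0))

print(round(beta, 2), abs(pf_moment - pf_mc) < 5e-4)
```

    For nonlinear limit states and non-normal variables the moment estimate degrades, which is where the perturbation, response surface, and Edgeworth-series machinery of the paper comes in.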

  17. Artificial Instruction. A Method for Relating Learning Theory to Instructional Design.

    ERIC Educational Resources Information Center

    Ohlsson, Stellan

    Prior research on learning has been linked to instruction by the derivation of general principles of instructional design from learning theories. However, such design principles are often difficult to apply to particular instructional issues. A new method for relating research on learning to instructional design is proposed: Different ways of…

  18. Quality by Design: Multidimensional exploration of the design space in high performance liquid chromatography method development for better robustness before validation.

    PubMed

    Monks, K; Molnár, I; Rieger, H-J; Bogáti, B; Szabó, E

    2012-04-06

    Robust HPLC separations lead to fewer analysis failures and better method transfer as well as providing an assurance of quality. This work presents the systematic development of an optimal, robust, fast UHPLC method for the simultaneous assay of two APIs of an eye drop sample and their impurities, in accordance with Quality by Design principles. Chromatography software is employed to effectively generate design spaces (Method Operable Design Regions), which are subsequently employed to determine the final method conditions and to evaluate robustness prior to validation. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Designing a mixed methods study in primary care.

    PubMed

    Creswell, John W; Fetters, Michael D; Ivankova, Nataliya V

    2004-01-01

    Mixed methods or multimethod research holds potential for rigorous, methodologically sound investigations in primary care. The objective of this study was to use criteria from the literature to evaluate 5 mixed methods studies in primary care and to advance 3 models useful for designing such investigations. We first identified criteria from the social and behavioral sciences to analyze mixed methods studies in primary care research. We then used the criteria to evaluate 5 mixed methods investigations published in primary care research journals. Of the 5 studies analyzed, 3 included a rationale for mixing based on the need to develop a quantitative instrument from qualitative data or to converge information to best understand the research topic. Quantitative data collection involved structured interviews, observational checklists, and chart audits that were analyzed using descriptive and inferential statistical procedures. Qualitative data consisted of semistructured interviews and field observations that were analyzed using coding to develop themes and categories. The studies showed diverse forms of priority: equal priority, qualitative priority, and quantitative priority. Data collection involved quantitative and qualitative data gathered both concurrently and sequentially. The integration of the quantitative and qualitative data in these studies occurred between data analysis from one phase and data collection from a subsequent phase, while analyzing the data, and when reporting the results. We recommend instrument-building, triangulation, and data transformation models for mixed methods designs as useful frameworks to add rigor to investigations in primary care. We also discuss the limitations of our study and the need for future research.

  20. Helicopter flight-control design using an H(2) method

    NASA Technical Reports Server (NTRS)

    Takahashi, Marc D.

    1991-01-01

    Rate-command and attitude-command flight-control designs for a UH-60 helicopter in hover, synthesized using an H(2) method, are presented. Using weight functions, this method allows direct shaping of the singular values of the sensitivity, complementary sensitivity, and control-input transfer-function matrices to give acceptable feedback properties. The designs were implemented on the Vertical Motion Simulator, and four low-speed hover tasks were used to evaluate the control-system characteristics. Pilot comments from the accel-decel, bob-up, hovering-turn, and side-step tasks indicated good decoupling and quick response characteristics. However, an underlying roll PIO tendency was found to exist away from the hover condition, caused by a flap regressing mode with insufficient damping.

  1. A Requirements-Driven Optimization Method for Acoustic Treatment Design

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    2016-01-01

    Acoustic treatment designers have long been able to target specific noise sources inside turbofan engines. Facesheet porosity and cavity depth are key design variables of perforate-over-honeycomb liners that determine levels of noise suppression as well as the frequencies at which suppression occurs. Layers of these structures can be combined to create a robust attenuation spectrum that covers a wide range of frequencies. Looking to the future, rapidly-emerging additive manufacturing technologies are enabling new liners with multiple degrees of freedom, and new adaptive liners with variable impedance are showing promise. More than ever, there is greater flexibility and freedom in liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. The subject of this paper is an analytical method that derives the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground.

  2. A method of network topology optimization design considering application process characteristic

    NASA Astrophysics Data System (ADS)

    Wang, Chunlin; Huang, Ning; Bai, Yanan; Zhang, Shuo

    2018-03-01

    Communication networks are designed to meet users' requirements for various network applications. Previous studies of network topology optimization mainly considered network traffic, which is the result of network application operation rather than a design element of communication networks. A network application is a procedure by which users consume services subject to demanded performance requirements, and it has a clear process characteristic. In this paper, we propose a method for optimizing communication network topology that accounts for this application process characteristic. Taking minimum network delay as the objective, and the network design cost and network connectivity reliability as constraints, an optimization model of network topology design is formulated, and the optimal solution is searched by a Genetic Algorithm (GA). Furthermore, we investigate the influence of network topology parameters on network delay under multiple process-oriented applications, which can guide the generation of the initial population and thereby improve the efficiency of the GA. Numerical simulations show the effectiveness and validity of the proposed method. Network topology optimization that considers applications can improve application reliability and provide guidance for network builders in the early stage of design, which is of great significance in engineering practice.
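    The optimization loop the abstract describes (minimum-delay objective, cost and connectivity constraints, GA search) can be sketched as follows. Everything here is illustrative: the node count, unit edge costs, budget, penalty weight, and the use of mean hop count as a delay proxy are assumptions for the sketch, not the paper's model.

```python
import random

N = 6                      # number of nodes (toy size)
EDGES = [(i, j) for i in range(N) for j in range(i + 1, N)]
COST_PER_EDGE = 1.0
BUDGET = 9.0               # cost constraint on the design

def decode(bits):
    """Adjacency sets from a 0/1 chromosome over all candidate edges."""
    adj = {i: set() for i in range(N)}
    for gene, (i, j) in zip(bits, EDGES):
        if gene:
            adj[i].add(j); adj[j].add(i)
    return adj

def avg_delay(adj):
    """Mean shortest-path hop count (delay proxy); inf if disconnected."""
    total, pairs = 0, 0
    for s in range(N):
        dist = {s: 0}; frontier = [s]
        while frontier:                 # breadth-first search from s
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1; nxt.append(v)
            frontier = nxt
        if len(dist) < N:
            return float("inf")         # connectivity constraint violated
        total += sum(dist.values()); pairs += N - 1
    return total / pairs

def fitness(bits):
    """Delay plus a soft penalty for exceeding the cost budget."""
    cost = COST_PER_EDGE * sum(bits)
    penalty = 0.0 if cost <= BUDGET else 10.0 * (cost - BUDGET)
    return avg_delay(decode(bits)) + penalty

random.seed(0)
pop = [[random.randint(0, 1) for _ in EDGES] for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness)
    elite = pop[:10]                    # elitist selection
    children = []
    while len(children) < 30:
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, len(EDGES))   # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:               # bit-flip mutation
            k = random.randrange(len(EDGES)); child[k] ^= 1
        children.append(child)
    pop = elite + children

best = min(pop, key=fitness)
```

    In the paper the initial population would instead be seeded using the process-characteristic analysis; the loop structure is the same.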

  3. Design Methods for Load-bearing Elements from Crosslaminated Timber

    NASA Astrophysics Data System (ADS)

    Vilguts, A.; Serdjuks, D.; Goremikins, V.

    2015-11-01

    Cross-laminated timber is an environmentally friendly material that possesses a decreased level of anisotropy in comparison with solid and glued timber. Cross-laminated timber can be used for load-bearing walls and slabs of multi-storey timber buildings, as well as for decking structures of pedestrian and road bridges. Design methods for cross-laminated timber elements subjected to bending, and to compression combined with bending, were considered. The presented methods were experimentally validated and verified by FEM. Two cross-laminated timber slabs were tested under static load. Pine wood was chosen as the board material. The design scheme of the considered plates was a simply supported beam with a span of 1.9 m loaded by a uniformly distributed load. The width of the plates was 1 m. The considered cross-laminated timber plates were also analysed by FEM. The comparison of the stresses acting in the edge fibres of the plates and the maximum vertical displacements shows that both considered methods can be used for engineering calculations. The difference between the results obtained experimentally and analytically ranges from 2 to 31%. The difference between the results obtained by the effective strength and stiffness method and the transformed sections method was not significant.

  4. Limitations of the method of characteristics when applied to axisymmetric hypersonic nozzle design

    NASA Technical Reports Server (NTRS)

    Edwards, Anne C.; Perkins, John N.; Benton, James R.

    1990-01-01

    A design study of axisymmetric hypersonic wind tunnel nozzles was initiated by NASA Langley Research Center with the objective of improving the flow quality of their ground test facilities. Nozzles for Mach 6 air, Mach 13.5 nitrogen, and Mach 17 nitrogen were designed using the Method of Characteristics/Boundary Layer (MOC/BL) approach and were analyzed with a Navier-Stokes solver. Results of the analysis agreed well with design for the Mach 6 case, but revealed oblique shock waves of increasing strength originating from near the inflection point of the Mach 13.5 and Mach 17 nozzles. The findings indicate that the MOC/BL design method has a fundamental limitation that occurs at some Mach number between 6 and 13.5. In order to define the limitation more exactly and attempt to discover its cause, a parametric study of hypersonic ideal-air nozzles designed with the current MOC/BL method was performed. Results of this study indicate that, while stagnation conditions have a moderate effect on the upper limit of the method, the method fails at Mach numbers above 8.0.

  5. Scenario-based design: A method for connecting information system design with public health operations and emergency management

    PubMed Central

    Reeder, Blaine; Turner, Anne M

    2011-01-01

    Responding to public health emergencies requires rapid and accurate assessment of workforce availability under adverse and changing circumstances. However, public health information systems to support resource management during both routine and emergency operations are currently lacking. We applied scenario-based design as an approach to engage public health practitioners in the creation and validation of an information design to support routine and emergency public health activities. Methods: Using semi-structured interviews we identified the information needs and activities of senior public health managers of a large municipal health department during routine and emergency operations. Results: Interview analysis identified twenty-five information needs for public health operations management. The identified information needs were used in conjunction with scenario-based design to create twenty-five scenarios of use and a public health manager persona. Scenarios of use and persona were validated and modified based on follow-up surveys with study participants. Scenarios were used to test and gain feedback on a pilot information system. Conclusion: The method of scenario-based design was applied to represent the resource management needs of senior-level public health managers under routine and disaster settings. Scenario-based design can be a useful tool for engaging public health practitioners in the design process and to validate an information system design. PMID:21807120

  6. A Bright Future for Evolutionary Methods in Drug Design.

    PubMed

    Le, Tu C; Winkler, David A

    2015-08-01

    Most medicinal chemists understand that chemical space is extremely large, essentially infinite. Although high-throughput experimental methods allow exploration of drug-like space more rapidly, they are still insufficient to fully exploit the opportunities that such large chemical space offers. Evolutionary methods can synergistically blend automated synthesis and characterization methods with computational design to identify promising regions of chemical space more efficiently. We describe how evolutionary methods are implemented, and provide examples of published drug development research in which these methods have generated molecules with increased efficacy. We anticipate that evolutionary methods will play an important role in future drug discovery. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Fracture control methods for space vehicles. Volume 1: Fracture control design methods. [for space shuttle configuration planning

    NASA Technical Reports Server (NTRS)

    Liu, A. F.

    1974-01-01

    A systematic approach for applying methods for fracture control in the structural components of space vehicles consists of four major steps. The first step is to define the primary load-carrying structural elements and the type of load, environment, and design stress levels acting upon them. The second step is to identify the potential fracture-critical parts by means of a selection logic flow diagram. The third step is to evaluate the safe-life and fail-safe capabilities of the specified part. The last step in the sequence is to apply the control procedures that will prevent damage to the fracture-critical parts. The fracture control methods discussed include fatigue design and analysis methods, methods for preventing crack-like defects, fracture mechanics analysis methods, and nondestructive evaluation methods. An example problem is presented for evaluation of the safe-crack-growth capability of the space shuttle crew compartment skin structure.

  8. Rotordynamics and Design Methods of an Oil-Free Turbocharger

    NASA Technical Reports Server (NTRS)

    Howard, Samuel A.

    1999-01-01

    The feasibility of supporting a turbocharger rotor on air foil bearings is investigated based upon predicted rotordynamic stability, load accommodations, and stress considerations. It is demonstrated that foil bearings offer a plausible replacement for oil-lubricated bearings in diesel truck turbochargers. Also, two different rotor configurations are analyzed and the design is chosen which best optimizes the desired performance characteristics. The method of designing machinery for foil bearing use and the assumptions made are discussed.

  9. A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design

    ERIC Educational Resources Information Center

    Palladino, John M.

    2009-01-01

    Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…

  10. First-order design of geodetic networks using the simulated annealing method

    NASA Astrophysics Data System (ADS)

    Berné, J. L.; Baselga, S.

    2004-09-01

    The general problem of the optimal design for a geodetic network subject to any extrinsic factors, namely the first-order design problem, can be dealt with as a numeric optimization problem. The classic theory of this problem and the optimization methods are revised. Then the innovative use of the simulated annealing method, which has been successfully applied in other fields, is presented for this classical geodetic problem. This method, belonging to iterative heuristic techniques in operational research, uses a thermodynamical analogy to crystalline networks to offer a solution that converges probabilistically to the global optimum. Basic formulation and some examples are studied.
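    The annealing scheme described above, with its thermodynamic acceptance rule and cooling schedule, can be sketched generically. The criterion below (negative minimum inter-station distance for stations in a unit square) is a stand-in chosen purely for illustration; the paper's actual first-order design criterion is a function of the network's covariance structure, and the step size and cooling rate are assumptions.

```python
import math, random

random.seed(1)

def criterion(points):
    """Toy design criterion to MINIMIZE: negative minimum pairwise
    distance (stands in for the real network-quality measure)."""
    worst = min(math.dist(p, q)
                for i, p in enumerate(points) for q in points[i + 1:])
    return -worst

def perturb(points, step):
    """Move one randomly chosen station by a small random amount."""
    pts = [list(p) for p in points]
    k = random.randrange(len(pts))
    pts[k][0] = min(1.0, max(0.0, pts[k][0] + random.uniform(-step, step)))
    pts[k][1] = min(1.0, max(0.0, pts[k][1] + random.uniform(-step, step)))
    return [tuple(p) for p in pts]

points = [(random.random(), random.random()) for _ in range(6)]
energy = criterion(points)
T = 1.0
while T > 1e-3:
    for _ in range(50):
        cand = perturb(points, step=0.1)
        delta = criterion(cand) - energy
        # Metropolis rule: always accept improvements; accept worse
        # designs with probability exp(-delta/T) to escape local optima
        if delta < 0 or random.random() < math.exp(-delta / T):
            points, energy = cand, criterion(cand)
    T *= 0.9          # geometric cooling schedule
```

    The probabilistic acceptance of worse configurations at high temperature is what gives the method its claimed probabilistic convergence to the global optimum.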

  11. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by Shifted Hammersley Method (SHM) and calculated by finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated through changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.
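    The dynamically updated response surface can be illustrated with a second-order polynomial fit that is refreshed until an error tolerance is met. The test function, sample counts, and refinement rule here (adding the worst validation point, rather than Shifted Hammersley points evaluated by FEM) are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    """Stand-in for an FEM evaluation (hypothetical quadratic objective)."""
    return (1.0 + 2.0 * x[..., 0] - x[..., 1]
            + 0.5 * x[..., 0] * x[..., 1] + 0.3 * x[..., 0] ** 2)

def quad_features(x):
    """Second-order polynomial basis: [1, x1, x2, x1*x2, x1^2, x2^2]."""
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2,
                     x1 ** 2, x2 ** 2], axis=-1)

def fit_surface(X, y):
    """Least-squares regression of the quadratic response surface."""
    coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    return coef

X = rng.uniform(-1, 1, size=(8, 2))          # initial design points
tol = 1e-6
while True:
    coef = fit_surface(X, expensive_model(X))
    Xval = rng.uniform(-1, 1, size=(50, 2))  # cheap check points
    resid = np.abs(quad_features(Xval) @ coef - expensive_model(Xval))
    err = resid.max()
    if err < tol:                            # accuracy requirement met
        break
    # refine: add the worst check point to the design and re-fit
    X = np.vstack([X, Xval[np.argmax(resid)]])
```

    The converged surface (`coef`) would then be handed to a multi-objective optimizer such as MOGA for the refined design stage.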

  12. Using a mixed-methods design to examine nurse practitioner integration in British Columbia.

    PubMed

    Sangster-Gormley, Esther; Griffith, Janessa; Schreiber, Rita; Borycki, Elizabeth

    2015-07-01

    To discuss and provide examples of how mixed-methods research was used to evaluate the integration of nurse practitioners (NPs) into a Canadian province. Legislation enabling NPs to practise in British Columbia (BC) was enacted in 2005. This research evaluated the integration of NPs and their effect on the BC healthcare system. Data were collected using surveys, focus groups, participant interviews and case studies over three years. Data sources and methods were triangulated to determine how the findings addressed the research questions. The challenges and benefits of using the multiphase design are highlighted in the paper. The multiphase mixed-methods research design was selected because of its applicability to evaluation research. The design proved to be robust and flexible in answering research questions. As sub-studies within the multiphase design are often published separately, it can be difficult for researchers to find examples. This paper highlights ways that a multiphase mixed-methods design can be conducted for researchers unfamiliar with the process.

  13. 40 CFR 53.8 - Designation of reference and equivalent methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Designation of reference and equivalent methods. 53.8 Section 53.8 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.8...

  14. INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN

    EPA Science Inventory

    The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...

  15. Computerized method and system for designing an aerodynamic focusing lens stack

    DOEpatents

    Gard, Eric [San Francisco, CA; Riot, Vincent [Oakland, CA; Coffee, Keith [Diablo Grande, CA; Woods, Bruce [Livermore, CA; Tobias, Herbert [Kensington, CA; Birch, Jim [Albany, CA; Weisgraber, Todd [Brentwood, CA

    2011-11-22

    A computerized method and system for designing an aerodynamic focusing lens stack, using input from a designer related to, for example, the particle size range to be considered, the characteristics of the gas to be flowed through the system, the upstream temperature and pressure at the top of the first focusing lens, the flow rate through the aerodynamic focusing lens stack expressed as its equivalent at atmospheric pressure, and a Stokes number range. Based on these design parameters, the method and system determine the total number of focusing lenses and their respective orifice diameters required to focus the particle size range of interest, by first solving the Stokes formula for the orifice diameter of the first focusing lens and then using that value to determine, in iterative fashion, intermediate flow values which are themselves used to determine the orifice diameters of each succeeding focusing lens in the stack design, with the results being output to the designer. In addition, the Reynolds numbers associated with each focusing lens, as well as the exit nozzle size, may also be determined to enhance the stack design.
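    The iterative sizing loop might be sketched as below. Note the heavy caveats: the simplified Stokes-number expression, all parameter values, and the constant per-lens pressure-drop ratio are hypothetical stand-ins, not the patent's actual formulas or operating conditions.

```python
import math

# Hypothetical inputs (the patent's actual parameter set differs)
rho_p  = 1000.0      # particle density [kg/m^3]
d_p    = 5e-7        # particle diameter to focus [m]
mu     = 1.8e-5      # gas dynamic viscosity [Pa*s]
St     = 1.0         # target Stokes number (near-optimal focusing)
Q      = 1.0e-6      # volumetric flow at the first lens [m^3/s]
p      = 200.0       # upstream pressure at the first lens [Pa]
n_lens = 5
drop   = 0.7         # assumed pressure ratio across each lens

def orifice_diameter(Q):
    """Solve St = rho_p*d_p^2*U / (9*mu*D) with U = Q/(pi*D^2/4) for D.

    This is a simplified Stokes-number model, not the patent's exact
    formula; it gives D = (4*rho_p*d_p^2*Q / (9*pi*mu*St))**(1/3)."""
    return (4.0 * rho_p * d_p ** 2 * Q
            / (9.0 * math.pi * mu * St)) ** (1.0 / 3.0)

diameters = []
for _ in range(n_lens):
    diameters.append(orifice_diameter(Q))
    # constant mass flow: volumetric flow grows as the pressure drops,
    # so each succeeding orifice comes out larger
    p *= drop
    Q /= drop
```

    The structure (solve lens 1, propagate an intermediate flow value, solve the next lens) mirrors the patent's description even though the formulas here are placeholders.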

  16. A method of transmissibility design for dual-chamber pneumatic vibration isolator

    NASA Astrophysics Data System (ADS)

    Lee, Jeung-Hoon; Kim, Kwang-Joon

    2009-06-01

    Dual-chamber pneumatic vibration isolators have a wide range of applications in the vibration isolation of vibration-sensitive equipment. Recent advances in precision machine tools and instruments, such as medical devices and those related to nanotechnology, require better isolation performance, which can be achieved efficiently by precise modeling and design of the isolation system. This paper discusses an efficient transmissibility design method for a pneumatic vibration isolator, employing a complex stiffness model of a dual-chamber pneumatic spring developed in our previous study. Three design parameters, the volume ratio between the two pneumatic chambers, the geometry of the capillary tube connecting the two chambers, and the stiffness of the diaphragm employed to prevent air leakage, were found to be important factors in transmissibility design. Based on a design technique that maximizes damping of the dual-chamber pneumatic spring, trade-offs among the resonance frequency of transmissibility, peak transmissibility, and transmissibility in the high-frequency range were found, which had not been stated in previous research. Furthermore, this paper discusses the negative role of the diaphragm in transmissibility design. The design method proposed in this paper is illustrated through experimental measurements.
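    The trade-off the abstract reports can be reproduced qualitatively with a single-DOF isolator whose spring has complex (hysteretic) stiffness. This is a simplified stand-in for the paper's dual-chamber complex stiffness model; the mass, stiffness, and loss-factor values are assumptions.

```python
import numpy as np

m = 100.0                                    # isolated mass [kg]
k = 4.0e4                                    # spring stiffness [N/m]
w = 2 * np.pi * np.linspace(0.5, 50, 2000)   # excitation freq. [rad/s]

def transmissibility(eta):
    """|X/Y| of a single-DOF isolator with complex stiffness k*(1+i*eta),
    a simplified stand-in for the dual-chamber pneumatic spring model."""
    kc = k * (1.0 + 1j * eta)
    return np.abs(kc / (kc - m * w ** 2))

T_light = transmissibility(0.05)   # low damping
T_heavy = transmissibility(0.50)   # high damping
```

    Comparing the two curves shows the trade-off: heavier damping suppresses the resonance peak but degrades isolation at high frequency, since the complex stiffness term keeps transmitting force through the damped path.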

  17. A novel beamformer design method for medical ultrasound. Part I: Theory.

    PubMed

    Ranganathan, Karthik; Walker, William F

    2003-01-01

    The design of transmit and receive aperture weightings is a critical step in the development of ultrasound imaging systems. Current design methods are generally iterative, and consequently time consuming and inexact. We describe a new and general ultrasound beamformer design method, the minimum sum squared error (MSSE) technique. The MSSE technique enables aperture design for arbitrary beam patterns (within fundamental limitations imposed by diffraction). It uses a linear algebra formulation to describe the system point spread function (psf) as a function of the aperture weightings. The sum squared error (SSE) between the system psf and the desired or goal psf is minimized, yielding the optimal aperture weightings. We present detailed analysis for continuous wave (CW) and broadband systems. We also discuss several possible applications of the technique, such as the design of aperture weightings that improve the system depth of field, generate limited diffraction transmit beams, and improve the correlation depth of field in translated aperture system geometries. Simulation results are presented in an accompanying paper.
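    The MSSE step itself is a linear least-squares problem: once the psf is written as a linear function of the weights, p = A w, the weights minimizing the sum squared error against the goal psf follow directly. The CW array geometry and the goal pattern below are assumptions made for illustration, not the paper's examples.

```python
import numpy as np

c, f = 1540.0, 5e6                 # sound speed [m/s], CW frequency [Hz]
lam = c / f
n_el = 32
x = (np.arange(n_el) - (n_el - 1) / 2) * lam / 2   # half-wavelength pitch
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)    # observation angles

# Linear model: far-field CW beam pattern p(theta) = A @ w
k = 2 * np.pi / lam
A = np.exp(1j * k * np.outer(np.sin(theta), x))

# Goal pattern (hypothetical): unit mainlobe within +-5 deg, zero elsewhere
p_goal = (np.abs(theta) <= np.deg2rad(5)).astype(float)

# MSSE solution: aperture weights minimizing ||A w - p_goal||^2
w, *_ = np.linalg.lstsq(A, p_goal, rcond=None)
p = A @ w
sse = np.sum(np.abs(p - p_goal) ** 2)
```

    Diffraction limits how closely the goal can be met: the residual `sse` is bounded below by the component of `p_goal` outside the range space of `A`.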

  18. New Design Heaters Using Tubes Finned by Deforming Cutting Method

    NASA Astrophysics Data System (ADS)

    Zubkov, N. N.; Nikitenko, S. M.; Nikitenko, M. S.

    2017-10-01

    The article describes the results of research aimed at selecting and assigning technological processing parameters for obtaining outer fins of heat-exchange tubes by the deformational cutting method, for use in a new design of industrial water-air heaters. The thermohydraulic results of comparative engineering tests of new and standard design air-heaters are presented.

  19. New directions for Artificial Intelligence (AI) methods in optimum design

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1989-01-01

    Developments and applications of artificial intelligence (AI) methods in the design of structural systems is reviewed. Principal shortcomings in the current approach are emphasized, and the need for some degree of formalism in the development environment for such design tools is underscored. Emphasis is placed on efforts to integrate algorithmic computations in expert systems.

  20. Intermittent Fermi-Pasta-Ulam Dynamics at Equilibrium

    NASA Astrophysics Data System (ADS)

    Campbell, David; Danieli, Carlo; Flach, Sergej

    The equilibrium value of an observable defines a manifold in the phase space of an ergodic and equipartitioned many-body system. A typical trajectory pierces that manifold infinitely often as time goes to infinity. We use these piercings to measure both the relaxation time of the lowest-frequency eigenmode of the Fermi-Pasta-Ulam chain and the fluctuations of the subsequent dynamics in equilibrium. We show that previously obtained scaling laws for equipartition times are modified at low energy density due to an unexpected slowing down of the relaxation. The dynamics in equilibrium is characterized by a power-law distribution of excursion times far off equilibrium, with diverging variance. The long excursions arise from sticky dynamics close to regular orbits in phase space. Our method is generalizable to large classes of many-body systems. The authors acknowledge financial support from IBS (Project Code IBS-R024-D1).
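    The piercing-based measurement can be sketched on a toy observable: detect the sign changes of the deviation from the long-time mean (the piercings of the equilibrium manifold) and read off the excursion times between them. The two-tone signal below merely stands in for an FPU mode energy; the actual study integrates the chain's equations of motion.

```python
import numpy as np

def excursion_times(signal, dt):
    """Times between successive piercings of the equilibrium manifold,
    i.e. sign changes of (observable - its long-time mean)."""
    dev = signal - signal.mean()
    cross = np.nonzero(np.diff(np.sign(dev)) != 0)[0]   # piercing indices
    return np.diff(cross) * dt

# Toy observable: two incommensurate frequencies give an aperiodic
# sequence of piercings, like a fluctuating mode energy
dt = 0.01
t = np.arange(0.0, 200.0, dt)
obs = np.sin(t) + 0.6 * np.sin(np.sqrt(2.0) * t)

tau = excursion_times(obs, dt)
mean_tau, max_tau = tau.mean(), tau.max()
```

    For the FPU chain, the statistics of `tau` (in particular a power-law tail with diverging variance) are what diagnose the sticky excursions near regular orbits.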

  1. Method for designing gas tag compositions

    DOEpatents

    Gross, Kenny C.

    1995-01-01

    For use in the manufacture of gas tags such as employed in a nuclear reactor gas tagging failure detection system, a method for designing gas tagging compositions utilizes an analytical approach wherein the final composition of a first canister of tag gas as measured by a mass spectrometer is designated as node #1. Lattice locations of tag nodes in multi-dimensional space are then used in calculating the compositions of a node #2 and each subsequent node so as to maximize the distance of each node from any combination of tag components which might be indistinguishable from another tag composition in a reactor fuel assembly. Alternatively, the measured compositions of tag gas numbers 1 and 2 may be used to fix the locations of nodes 1 and 2, with the locations of nodes 3-N then calculated for optimum tag gas composition. A single sphere defining the lattice locations of the tag nodes may be used to define approximately 20 tag nodes, while concentric spheres can extend the number of tag nodes to several hundred.
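    The node-placement idea, maximizing the distance of each new tag composition from those already fixed, resembles greedy max-min (farthest-point) selection. The sketch below works under that assumption, in a hypothetical 3-D composition space with random candidates; the patent's lattice-on-spheres construction and indistinguishable-combination checks are not reproduced.

```python
import itertools, math, random

random.seed(2)

# Candidate tag compositions: isotope-ratio vectors in a hypothetical
# 3-D composition space (the patent works in higher dimensions)
candidates = [tuple(random.random() for _ in range(3)) for _ in range(200)]

def min_dist(p, chosen):
    """Distance from candidate p to the nearest already-chosen node."""
    return min(math.dist(p, q) for q in chosen)

# Node 1 is fixed by the measured composition of the first canister
nodes = [candidates[0]]
while len(nodes) < 20:
    # greedy max-min placement: the next node is the candidate farthest
    # from every node already chosen (maximizing tag separability)
    best = max((c for c in candidates if c not in nodes),
               key=lambda c: min_dist(c, nodes))
    nodes.append(best)

# worst-case separation between any two tags in the final set
separation = min(math.dist(p, q)
                 for p, q in itertools.combinations(nodes, 2))
```

    In the patent the distance criterion additionally guards against combinations of tag components mimicking another tag, which a plain pairwise-distance check does not capture.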

  2. Method for designing gas tag compositions

    DOEpatents

    Gross, K.C.

    1995-04-11

    For use in the manufacture of gas tags such as employed in a nuclear reactor gas tagging failure detection system, a method for designing gas tagging compositions utilizes an analytical approach wherein the final composition of a first canister of tag gas as measured by a mass spectrometer is designated as node No. 1. Lattice locations of tag nodes in multi-dimensional space are then used in calculating the compositions of a node No. 2 and each subsequent node so as to maximize the distance of each node from any combination of tag components which might be indistinguishable from another tag composition in a reactor fuel assembly. Alternatively, the measured compositions of tag gas numbers 1 and 2 may be used to fix the locations of nodes 1 and 2, with the locations of nodes 3-N then calculated for optimum tag gas composition. A single sphere defining the lattice locations of the tag nodes may be used to define approximately 20 tag nodes, while concentric spheres can extend the number of tag nodes to several hundred. 5 figures.

  3. Designing, Teaching, and Evaluating Two Complementary Mixed Methods Research Courses

    ERIC Educational Resources Information Center

    Christ, Thomas W.

    2009-01-01

    Teaching mixed methods research is difficult. This longitudinal explanatory study examined how two classes were designed, taught, and evaluated. Curriculum, Research, and Teaching (EDCS-606) and Mixed Methods Research (EDCS-780) used a research proposal generation process to highlight the importance of the purpose, research question and…

  4. PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMEASURES ON ΔB METHOD

    NASA Astrophysics Data System (ADS)

    Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao

    Landslide or slope failure is a three-dimensional movement phenomenon, so a three-dimensional treatment makes its stability easier to understand. The ΔB method (a simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and is equivalent to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. The analysis can be conducted using conventional spreadsheets or two-dimensional slope stability software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force found along survey lines, which is based on the distribution of survey-line safety factors derived from the above analysis. The paper also presents the transverse distributive method of restraining force used for planning ground stabilization, on the basis of an example analysis.

  5. An ACC Design Method for Achieving Both String Stability and Ride Comfort

    NASA Astrophysics Data System (ADS)

    Yamamura, Yoshinori; Seto, Yoji; Nishira, Hikaru; Kawabe, Taketoshi

    An investigation was made of a method for designing adaptive cruise control (ACC) so as to achieve a headway distance response that feels natural to the driver while at the same time obtaining high levels of both string stability and ride comfort. With this design method, the H∞ norm is adopted as the index of string stability. Additionally, two norms are introduced for evaluating ride comfort and natural vehicle behavior. The relationship between these three norms and headway distance response characteristics was analyzed, and an evaluation method was established for achieving high levels of the various performance characteristics required of ACC. An ACC system designed with this method was evaluated in driving tests conducted on a proving ground course, and the results confirmed that it achieved the targeted levels of string stability, ride comfort and natural vehicle behavior.

  6. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
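    Fisher-information-based criteria such as D-optimality can be sketched for the logistic example: choose sampling times that maximize det(F), where F accumulates the outer products of the parameter sensitivities. The greedy point-by-point selection and finite-difference sensitivities below are illustrative simplifications of the paper's Prohorov-metric framework, and the parameter values are assumptions.

```python
import numpy as np

def logistic(t, r, K, x0=1.0):
    """Verhulst-Pearl logistic growth curve."""
    return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))

def sensitivities(t, r, K, h=1e-6):
    """Finite-difference sensitivities dx/dr and dx/dK at times t."""
    dr = (logistic(t, r + h, K) - logistic(t, r - h, K)) / (2 * h)
    dK = (logistic(t, r, K + h) - logistic(t, r, K - h)) / (2 * h)
    return np.stack([dr, dK], axis=-1)          # shape (len(t), 2)

r, K = 0.5, 10.0
grid = np.linspace(0.1, 20.0, 100)             # candidate sampling times
S = sensitivities(grid, r, K)

# Greedy D-optimal design: repeatedly add the time point that most
# increases det(F), with F = sum_i s(t_i) s(t_i)^T (unit noise variance
# assumed); repeated picks correspond to replicate measurements
chosen = []
F = 1e-9 * np.eye(2)                           # tiny regularizer to start
for _ in range(6):
    gains = [np.linalg.det(F + np.outer(s, s)) for s in S]
    k = int(np.argmax(gains))
    chosen.append(grid[k])
    F = F + np.outer(S[k], S[k])

det_F = np.linalg.det(F)
```

    By the Cramer-Rao bound, maximizing det(F) shrinks the volume of the asymptotic confidence ellipsoid of (r, K), which is why the resulting standard errors compare favorably with naive sampling.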

  7. Design principles for elementary gene circuits: Elements, methods, and examples

    NASA Astrophysics Data System (ADS)

    Savageau, Michael A.

    2001-03-01

    The control of gene expression involves complex circuits that exhibit enormous variation in design. For years the most convenient explanation for these variations was historical accident. According to this view, evolution is a haphazard process in which many different designs are generated by chance; there are many ways to accomplish the same thing, and so no further meaning can be attached to such different but equivalent designs. In recent years a more satisfying explanation based on design principles has been found for at least certain aspects of gene circuitry. By design principle we mean a rule that characterizes some biological feature exhibited by a class of systems such that discovery of the rule allows one not only to understand known instances but also to predict new instances within the class. The central importance of gene regulation in modern molecular biology provides strong motivation to search for more of these underlying design principles. The search is in its infancy and there are undoubtedly many design principles that remain to be discovered. The focus of this three-part review will be the class of elementary gene circuits in bacteria. The first part reviews several elements of design that enter into the characterization of elementary gene circuits in prokaryotic organisms. Each of these elements exhibits a variety of realizations whose meaning is generally unclear. The second part reviews mathematical methods used to represent, analyze, and compare alternative designs. Emphasis is placed on particular methods that have been used successfully to identify design principles for elementary gene circuits. The third part reviews four design principles that make specific predictions regarding (1) two alternative modes of gene control, (2) three patterns of coupling gene expression in elementary circuits, (3) two types of switches in inducible gene circuits, and (4) the realizability of alternative gene circuits and their response to phased

  8. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
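As a minimal sketch of the core idea — once a failure probability can be evaluated at a design point, the probabilistic constraint becomes an ordinary one — the toy below estimates a failure probability by plain Monte Carlo for an invented limit state g(d, X) = d − X with X ~ N(0, 1). GSS itself is a far more efficient multi-level sampler; the limit state, threshold, and design value here are illustrative assumptions only.

```python
import random

def failure_probability(d, n=100_000, seed=0):
    """Estimate P[g(d, X) < 0] for a toy limit state g = d - X,
    X ~ N(0, 1), by plain Monte Carlo (GSS itself is more refined)."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if d - rng.gauss(0.0, 1.0) < 0.0)
    return failures / n

# RBDO-style check: the probabilistic constraint P_f(d) <= 1e-2 becomes
# an ordinary constraint once P_f can be evaluated at a design point d.
d = 2.33   # roughly the 1% upper quantile of N(0, 1)
pf = failure_probability(d)
```

With the estimate in hand, `pf <= 0.01` can be handed to any deterministic optimizer as a regular inequality constraint.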

  9. On extracting design principles from biology: I. Method-General answers to high-level design questions for bioinspired robots.

    PubMed

    Haberland, M; Kim, S

    2015-02-02

When millions of years of evolution suggest a particular design solution, we may be tempted to abandon traditional design methods and copy the biological example. However, biological solutions do not often translate directly into the engineering domain, and even when they do, copying eliminates the opportunity to improve. A better approach is to extract design principles relevant to the task of interest, incorporate them in engineering designs, and vet these candidates against others. This paper presents the first general framework for determining whether biologically inspired relationships between design input variables and output objectives and constraints are applicable to a variety of engineering systems. Using optimization and statistics to generalize the results beyond a particular system, the framework overcomes shortcomings observed in ad hoc methods, particularly those used in the challenging study of legged locomotion. The utility of the framework is demonstrated in a case study of the relative running efficiency of rotary-kneed and telescoping-legged robots.

  10. Using mixed methods research designs in health psychology: an illustrated discussion from a pragmatist perspective.

    PubMed

    Bishop, Felicity L

    2015-02-01

To outline some of the challenges of mixed methods research and illustrate how they can be addressed in health psychology research. This study critically reflects on the author's previously published mixed methods research and discusses the philosophical and technical challenges of mixed methods, grounding the discussion in a brief review of methodological literature. Mixed methods research is characterized as having philosophical and technical challenges; the former can be addressed by drawing on pragmatism, the latter by considering formal mixed methods research designs proposed in a number of design typologies. There are important differences among the design typologies which provide diverse examples of designs that health psychologists can adapt for their own mixed methods research. There are also similarities; in particular, many typologies explicitly orient to the technical challenges of deciding on the respective timing of qualitative and quantitative methods and the relative emphasis placed on each method. Characteristics, strengths, and limitations of different sequential and concurrent designs are identified by reviewing five mixed methods projects each conducted for a different purpose. Adapting formal mixed methods designs can help health psychologists address the technical challenges of mixed methods research and identify the approach that best fits the research questions and purpose. This does not obviate the need to address philosophical challenges of mixing qualitative and quantitative methods. Statement of contribution What is already known on this subject? Mixed methods research poses philosophical and technical challenges. Pragmatism is a popular approach to the philosophical challenges while diverse typologies of mixed methods designs can help address the technical challenges. Examples of mixed methods research can be hard to locate when component studies from mixed methods projects are published separately. What does this study add? 
Critical

  11. A method to design blended rolled edges for compact range reflectors

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Burnside, Walter D.

    1989-01-01

A method to design blended rolled edges for compact range reflectors of arbitrary rim shape is presented. The reflectors may be center-fed or offset-fed. The method leads to rolled edges with minimal surface discontinuities. It is shown that the reflectors designed using the prescribed method can be defined analytically using simple expressions. A procedure to obtain optimum rolled edge parameters is also presented. The procedure leads to blended rolled edges that minimize the diffracted fields emanating from the junction between the paraboloid and the rolled edge surface while satisfying certain constraints regarding the reflector size and the minimum operating frequency of the system.

  12. A method to design blended rolled edges for compact range reflectors

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Ericksen, Kurt P.; Burnside, Walter D.

    1990-01-01

A method to design blended rolled edges for compact range reflectors of arbitrary rim shape is presented. The reflectors may be center-fed or offset-fed. The method leads to rolled edges with minimal surface discontinuities. It is shown that the reflectors designed using the prescribed method can be defined analytically using simple expressions. A procedure to obtain optimum rolled edge parameters is also presented. The procedure leads to blended rolled edges that minimize the diffracted fields emanating from the junction between the paraboloid and the rolled edge surface while satisfying certain constraints regarding the reflector size and the minimum operating frequency of the system.

  13. A streamline curvature method for design of supercritical and subcritical airfoils

    NASA Technical Reports Server (NTRS)

    Barger, R. L.; Brooks, C. W., Jr.

    1974-01-01

    An airfoil design procedure, applicable to both subcritical and supercritical airfoils, is described. The method is based on the streamline curvature velocity equation. Several examples illustrating this method are presented and discussed.
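For reference, the streamline curvature velocity relation underlying such methods is, in its simplest inviscid, irrotational form (the exact formulation used by Barger and Brooks may differ), the balance between the normal pressure gradient and the centripetal acceleration on a streamline of local radius of curvature R:

```latex
% Normal-momentum balance across a streamline (n normal to the streamline):
\frac{\partial p}{\partial n} = \frac{\rho V^{2}}{R},
% which for irrotational flow implies the velocity gradient
\qquad
\frac{\partial V}{\partial n} = -\,\frac{V}{R}
```

A design procedure built on this relation adjusts the surface (and hence the streamline curvature) until the implied velocity distribution matches the target.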

  14. Calibration of resistance factors for drilled shafts for the new FHWA design method.

    DOT National Transportation Integrated Search

    2013-01-01

The Load and Resistance Factor Design (LRFD) calibration of deep foundations in Louisiana was first completed for driven piles (LTRC Final Report 449) in May 2009 and then for drilled shafts using the 1999 FHWA design method (O'Neill and Reese method) (...

  15. A structural design decomposition method utilizing substructuring

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1994-01-01

    A new method of design decomposition for structural analysis and optimization is described. For this method, the structure is divided into substructures where each substructure has its structural response described by a structural-response subproblem, and its structural sizing determined from a structural-sizing subproblem. The structural responses of substructures that have rigid body modes when separated from the remainder of the structure are further decomposed into displacements that have no rigid body components, and a set of rigid body modes. The structural-response subproblems are linked together through forces determined within a structural-sizing coordination subproblem which also determines the magnitude of any rigid body displacements. Structural-sizing subproblems having constraints local to the substructures are linked together through penalty terms that are determined by a structural-sizing coordination subproblem. All the substructure structural-response subproblems are totally decoupled from each other, as are all the substructure structural-sizing subproblems, thus there is significant potential for use of parallel solution methods for these subproblems.

  16. Applying Human-Centered Design Methods to Scientific Communication Products

    NASA Astrophysics Data System (ADS)

    Burkett, E. R.; Jayanty, N. K.; DeGroot, R. M.

    2016-12-01

    Knowing your users is a critical part of developing anything to be used or experienced by a human being. User interviews, journey maps, and personas are all techniques commonly employed in human-centered design practices because they have proven effective for informing the design of products and services that meet the needs of users. Many non-designers are unaware of the usefulness of personas and journey maps. Scientists who are interested in developing more effective products and communication can adopt and employ user-centered design approaches to better reach intended audiences. Journey mapping is a qualitative data-collection method that captures the story of a user's experience over time as related to the situation or product that requires development or improvement. Journey maps help define user expectations, where they are coming from, what they want to achieve, what questions they have, their challenges, and the gaps and opportunities that can be addressed by designing for them. A persona is a tool used to describe the goals and behavioral patterns of a subset of potential users or customers. The persona is a qualitative data model that takes the form of a character profile, built upon data about the behaviors and needs of multiple users. Gathering data directly from users avoids the risk of basing models on assumptions, which are often limited by misconceptions or gaps in understanding. Journey maps and user interviews together provide the data necessary to build the composite character that is the persona. Because a persona models the behaviors and needs of the target audience, it can then be used to make informed product design decisions. We share the methods and advantages of developing and using personas and journey maps to create more effective science communication products.

  17. A generalized sizing method for revolutionary concepts under probabilistic design constraints

    NASA Astrophysics Data System (ADS)

    Nam, Taewoo

    Internal combustion (IC) engines that consume hydrocarbon fuels have dominated the propulsion systems of air-vehicles for the first century of aviation. In recent years, however, growing concern over rapid climate changes and national energy security has galvanized the aerospace community into delving into new alternatives that could challenge the dominance of the IC engine. Nevertheless, traditional aircraft sizing methods have significant shortcomings for the design of such unconventionally powered aircraft. First, the methods are specialized for aircraft powered by IC engines, and thus are not flexible enough to assess revolutionary propulsion concepts that produce propulsive thrust through a completely different energy conversion process. Another deficiency associated with the traditional methods is that a user of these methods must rely heavily on experts' experience and advice for determining appropriate design margins. However, the introduction of revolutionary propulsion systems and energy sources is very likely to entail an unconventional aircraft configuration, which inexorably disqualifies the conjecture of such "connoisseurs" as a means of risk management. Motivated by such deficiencies, this dissertation aims at advancing two aspects of aircraft sizing: (1) to develop a generalized aircraft sizing formulation applicable to a wide range of unconventionally powered aircraft concepts and (2) to formulate a probabilistic optimization technique that is able to quantify appropriate design margins that are tailored towards the level of risk deemed acceptable to a decision maker. A more generalized aircraft sizing formulation, named the Architecture Independent Aircraft Sizing Method (AIASM), was developed for sizing revolutionary aircraft powered by alternative energy sources by modifying several assumptions of the traditional aircraft sizing method. Along with advances in deterministic aircraft sizing, a non-deterministic sizing technique, named the

  18. Towards Robust Designs Via Multiple-Objective Optimization Methods

    NASA Technical Reports Server (NTRS)

    Man Mohan, Rai

    2006-01-01

An evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil; the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
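The cited work couples an evolutionary method with multiple conflicting objectives; the sketch below shows only the basic single-objective DE/rand/1/bin operators on an invented two-variable test function, as a minimal illustration of how such an optimizer works.

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.6, CR=0.9, gens=200, seed=1):
    """Minimal single-objective DE/rand/1/bin sketch (the cited work is
    multi-objective; this only illustrates the DE mutation/crossover/selection)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # Mutation: combine three distinct randomly chosen individuals.
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            # Binomial crossover: take each mutant coordinate with prob. CR.
            trial = [pop[i][k] if rng.random() > CR
                     else pop[a][k] + F * (pop[b][k] - pop[c][k])
                     for k in range(dim)]
            fc = f(trial)
            if fc < cost[i]:            # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    best = min(range(np_), key=cost.__getitem__)
    return pop[best], cost[best]

# Toy objective: a shifted sphere with minimum at (1, -2).
x, fx = differential_evolution(lambda v: (v[0] - 1)**2 + (v[1] + 2)**2,
                               [(-5, 5), (-5, 5)])
```

A multi-objective variant replaces the greedy selection with a dominance-based comparison over the objective vector.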

  19. Design Tool Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on initial conditions and risk becoming trapped in local optima. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experience. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical results show that the performance of the method is sufficient for practical use.
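As a schematic analogue (not the authors' path-integral formulation), one can obtain a minimizer as a stochastic average: sample the design domain, weight each sample by a Boltzmann-like factor exp(−βf), and take the weighted mean. The objective and the inverse-temperature β below are invented for illustration; note the result needs no initial guess.

```python
import math
import random

def stochastic_average_minimizer(f, lo, hi, n=20_000, beta=8.0, seed=0):
    """Toy expectation-based optimizer: sample the domain uniformly,
    weight each point by exp(-beta * f(x)), and return the weighted mean.
    Only a schematic analogue of the paper's path-integral method."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    ws = [math.exp(-beta * f(x)) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

# Multimodal toy objective with its global minimum near x = 2.
est = stochastic_average_minimizer(
    lambda x: (x - 2)**2 + 0.3 * math.sin(8 * x), -4.0, 6.0)
```

Because the answer is an average over many samples rather than the endpoint of a descent path, it does not depend on where a search was started.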

  20. A new method for designing dual foil electron beam forming systems. II. Feasibility of practical implementation of the method

    NASA Astrophysics Data System (ADS)

    Adrich, Przemysław

    2016-05-01

In Part I of this work a new method for designing dual foil electron beam forming systems was introduced. In this method, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of system performance as a function of its parameters. At each point of the scan, the Monte Carlo method is used to calculate the off-axis dose profile in water, taking into account the detailed and complete geometry of the system. The new method, while computationally intensive, minimizes the involvement of the designer. In this Part II paper, the feasibility of practical implementation of the new method is demonstrated. To this end, prototype software tools were developed and applied to solve a real-life design problem. It is demonstrated that system optimization can be completed within a few hours using rather moderate computing resources. It is also demonstrated that, perhaps for the first time, the designer can gain deep insight into system behavior, such that the construction can be optimized simultaneously with respect to a number of functional characteristics besides the flatness of the off-axis dose profile. In the presented example, the system is optimized with respect to both the flatness of the off-axis dose profile and the beam transmission. A number of practical issues related to the application of the new method, as well as its possible extensions, are discussed.
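The scan-then-pick workflow can be sketched as follows. The `flatness_penalty` function below is an invented analytic surrogate standing in for the Monte Carlo dose calculation the paper performs at each grid point, with a hypothetical optimum placed at foil thicknesses (0.3, 1.2); only the structure of the scan is faithful to the described method.

```python
def flatness_penalty(t1, t2):
    """Toy surrogate for the off-axis dose flatness of a dual-foil system
    (a real design would run Monte Carlo transport here); the hypothetical
    optimum is placed at t1 = 0.3, t2 = 1.2."""
    return (t1 - 0.3)**2 + 0.5 * (t2 - 1.2)**2 + 0.1 * abs(t1 * t2 - 0.36)

def scan(t1_vals, t2_vals):
    """Systematic scan of system performance over the parameter grid,
    mirroring the automated scan-then-pick workflow."""
    results = {(t1, t2): flatness_penalty(t1, t2)
               for t1 in t1_vals for t2 in t2_vals}
    best = min(results, key=results.get)
    return best, results[best]

grid = [i / 10 for i in range(1, 21)]        # foil thicknesses 0.1 .. 2.0
(best_t1, best_t2), pen = scan(grid, grid)
```

Because the whole `results` map is retained, the designer can also inspect trade-offs (e.g. against beam transmission) rather than only the single best point.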

  1. What can formal methods offer to digital flight control systems design

    NASA Technical Reports Server (NTRS)

    Good, Donald I.

    1990-01-01

Formal methods research is beginning to produce methods that enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional, continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of the extensive mathematical modeling that is common in other parts of flight system engineering. Formal methods research shows that, by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.

  2. The Role of Probabilistic Design Analysis Methods in Safety and Affordability

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.

    2016-01-01

For the last several years, NASA and its contractors have been working together to build space launch systems to commercialize space. Developing affordable and safe commercial launch systems becomes very important and requires a paradigm shift. This paradigm shift enforces the need for an integrated systems engineering environment where cost, safety, reliability, and performance need to be considered to optimize the launch system design. In such an environment, rule-based and deterministic engineering design practices alone may not be sufficient to optimize margins and fault tolerance to reduce cost. As a result, introduction of Probabilistic Design Analysis (PDA) methods to support the current deterministic engineering design practices becomes a necessity to reduce cost without compromising reliability and safety. This paper discusses the importance of PDA methods in NASA's new commercial environment, their applications, and the key role they can play in designing reliable, safe, and affordable launch systems. More specifically, this paper discusses: 1) The involvement of NASA in PDA 2) Why PDA is needed 3) A PDA model structure 4) A PDA example application 5) PDA link to safety and affordability.

  3. Structural test of the parameterized-backbone method for protein design.

    PubMed

    Plecs, Joseph J; Harbury, Pehr B; Kim, Peter S; Alber, Tom

    2004-09-03

    Designing new protein folds requires a method for simultaneously optimizing the conformation of the backbone and the side-chains. One approach to this problem is the use of a parameterized backbone, which allows the systematic exploration of families of structures. We report the crystal structure of RH3, a right-handed, three-helix coiled coil that was designed using a parameterized backbone and detailed modeling of core packing. This crystal structure was determined using another rationally designed feature, a metal-binding site that permitted experimental phasing of the X-ray data. RH3 adopted the intended fold, which has not been observed previously in biological proteins. Unanticipated structural asymmetry in the trimer was a principal source of variation within the RH3 structure. The sequence of RH3 differs from that of a previously characterized right-handed tetramer, RH4, at only one position in each 11 amino acid sequence repeat. This close similarity indicates that the design method is sensitive to the core packing interactions that specify the protein structure. Comparison of the structures of RH3 and RH4 indicates that both steric overlap and cavity formation provide strong driving forces for oligomer specificity.

  4. Methods to enable the design of bioactive small molecules targeting RNA.

    PubMed

    Disney, Matthew D; Yildirim, Ilyas; Childs-Disney, Jessica L

    2014-02-21

    RNA is an immensely important target for small molecule therapeutics or chemical probes of function. However, methods that identify, annotate, and optimize RNA-small molecule interactions that could enable the design of compounds that modulate RNA function are in their infancies. This review describes recent approaches that have been developed to understand and optimize RNA motif-small molecule interactions, including structure-activity relationships through sequencing (StARTS), quantitative structure-activity relationships (QSAR), chemical similarity searching, structure-based design and docking, and molecular dynamics (MD) simulations. Case studies described include the design of small molecules targeting RNA expansions, the bacterial A-site, viral RNAs, and telomerase RNA. These approaches can be combined to afford a synergistic method to exploit the myriad of RNA targets in the transcriptome.

  5. Design in mind: eliciting service user and frontline staff perspectives on psychiatric ward design through participatory methods

    PubMed Central

    Csipke, Emese; Papoulias, Constantina; Vitoratou, Silia; Williams, Paul; Rose, Diana; Wykes, Til

    2016-01-01

    Abstract Background: Psychiatric ward design may make an important contribution to patient outcomes and well-being. However, research is hampered by an inability to assess its effects robustly. This paper reports on a study which deployed innovative methods to capture service user and staff perceptions of ward design. Method: User generated measures of the impact of ward design were developed and tested on four acute adult wards using participatory methodology. Additionally, inpatients took photographs to illustrate their experience of the space in two wards. Data were compared across wards. Results: Satisfactory reliability indices emerged based on both service user and staff responses. Black and minority ethnic (BME) service users and those with a psychosis spectrum diagnosis have more positive views of the ward layout and fixtures. Staff members have more positive views than service users, while priorities of staff and service users differ. Inpatient photographs prioritise hygiene, privacy and control and address symbolic aspects of the ward environment. Conclusions: Participatory and visual methodologies can provide robust tools for an evaluation of the impact of psychiatric ward design on users. PMID:26886239

  6. Fast optimization method of designing a wideband metasurface without using the Pancharatnam-Berry phase.

    PubMed

    Sui, Sai; Ma, Hua; Lv, Yueguang; Wang, Jiafu; Li, Zhiqiang; Zhang, Jieqiu; Xu, Zhuo; Qu, Shaobo

    2018-01-22

Arbitrary control of electromagnetic waves remains a significant challenge, although it promises many important applications. Here, we propose a fast optimization method for designing a wideband metasurface without using the Pancharatnam-Berry (PB) phase, whose elements are non-absorptive and provide a smooth, wideband phase shift. In our design method, the metasurface is composed of low-Q-factor resonant elements without using the PB phase, and is optimized by a genetic algorithm and a nonlinear fitting method, with the advantage that the far-field scattering patterns can be quickly synthesized from the hybrid array patterns. To validate the design method, a wideband low-radar-cross-section metasurface is demonstrated, showing good feasibility and performance of wideband RCS reduction. This work reveals the potential of metasurfaces for effective manipulation of microwaves and provides a flexible, fast optimal design method.

  7. Horizontal lifelines - review of regulations and simple design method considering anchorage rigidity.

    PubMed

    Galy, Bertrand; Lan, André

    2018-03-01

Among the many occupational risks construction workers encounter every day, falling from a height is the most dangerous. The objective of this article is to propose a simple analytical design method for horizontal lifelines (HLLs) that considers anchorage flexibility. The article presents a short review of the standards and regulations/acts/codes concerning HLLs in Canada, the USA and Europe. A static analytical approach is proposed that considers anchorage flexibility. The analytical results are compared with a series of 42 dynamic fall tests and a SAP2000 numerical model. The experimental results show that the analytical method is slightly conservative and overestimates the line tension in most cases, by a maximum of 17%. The static SAP2000 results show a maximum 2.1% difference from the analytical method. The analytical method is accurate enough to safely design HLLs, and quick design abaci are provided to allow the engineer to make quick on-site verifications if needed.
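A minimal statics sketch of the mid-span point-load model is shown below. This is textbook cable statics only: each half of the line carries half the arrest load vertically, and the sag geometry fixes the horizontal component. The article's method additionally accounts for anchorage flexibility and dynamic effects, and the numbers used here are invented for illustration.

```python
import math

def lifeline_tension(span, sag, load):
    """Static mid-span point-load model of a horizontal lifeline.
    span, sag in metres; load (fall-arrest force) and the returned
    line tension in kN. Textbook statics sketch only."""
    v = load / 2.0                 # vertical component per side
    h = v * (span / 2.0) / sag     # horizontal component from sag geometry
    return math.hypot(v, h)        # resultant line tension

# Example (hypothetical): 12 m span, 0.6 m sag, 6 kN arrest force.
t = lifeline_tension(12.0, 0.6, 6.0)
```

The h-term shows why small sags are punishing: tension grows roughly as span/(4·sag), which is the effect the design abaci capture.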

  8. Design Method For Ultra-High Resolution Linear CCD Imagers

    NASA Astrophysics Data System (ADS)

    Sheu, Larry S.; Truong, Thanh; Yuzuki, Larry; Elhatem, Abdul; Kadekodi, Narayan

    1984-11-01

This paper presents a design method to achieve ultra-high resolution linear imagers. The method utilizes advanced design rules and novel staggered bilinear photosensor arrays with quadrilinear shift registers. Design constraints in the detector arrays and shift registers are analyzed. An imager architecture to achieve ultra-high resolution is presented. The characteristics of MTF, aliasing, speed, transfer efficiency and the fine photolithography requirements associated with this architecture are also discussed. A CCD imager with an advanced 1.5 μm minimum feature size was fabricated. It is intended as a test vehicle for the next generation of small-sampling-pitch ultra-high resolution CCD imagers. Standard double-poly, two-phase shift registers were fabricated at an 8 μm pitch using the advanced design rules. A special process step that blocked the source-drain implant from the shift register area was invented. This guaranteed excellent performance of the shift registers regardless of the small poly overlaps. A charge transfer efficiency of better than 0.99995 and a maximum transfer speed of 8 MHz were achieved. The imager showed excellent performance: the dark current was less than 0.2 mV/ms, saturation 250 mV, adjacent photoresponse non-uniformity ±4%, and responsivity 0.7 V/(μJ/cm²) for the 8 μm × 6 μm photosensor size. The MTF was 0.6 at 62.5 cycles/mm. These results confirm the feasibility of the next generation of ultra-high resolution CCD imagers.

  9. A new traffic control design method for large networks with signalized intersections

    NASA Technical Reports Server (NTRS)

    Leininger, G. G.; Colony, D. C.; Seldner, K.

    1979-01-01

    The paper presents a traffic control design technique for application to large traffic networks with signalized intersections. It is shown that the design method adopts a macroscopic viewpoint to establish a new traffic modelling procedure in which vehicle platoons are subdivided into main stream queues and turning queues. Optimization of the signal splits minimizes queue lengths in the steady state condition and improves traffic flow conditions, from the viewpoint of the traveling public. Finally, an application of the design method to a traffic network with thirty-three signalized intersections is used to demonstrate the effectiveness of the proposed technique.
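A common baseline for signal-split optimization — not the macroscopic queue model of the cited paper, but a standard rule of thumb — is to allocate effective green time in proportion to each phase's critical flow ratio. The cycle length, lost time, and flow ratios below are invented for illustration.

```python
def green_splits(flow_ratios, cycle=90.0, lost_time=10.0):
    """Allocate effective green time (seconds) in proportion to each
    phase's critical flow ratio. Standard proportional rule of thumb;
    the cited paper instead minimizes steady-state queue lengths."""
    effective = cycle - lost_time          # green time available per cycle
    total = sum(flow_ratios)
    return [effective * y / total for y in flow_ratios]

# Three hypothetical phases with critical flow ratios 0.30, 0.18, 0.12.
splits = green_splits([0.30, 0.18, 0.12])
```

Queue-based methods like the one described refine this baseline by weighting phases according to predicted main-stream and turning-queue lengths rather than raw flows.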

  10. Modification of wave propagation and wave travel-time by the presence of magnetic fields in the solar network atmosphere

    NASA Astrophysics Data System (ADS)

    Nutto, C.; Steiner, O.; Schaffenberger, W.; Roth, M.

    2012-02-01

    Context. Observations of waves at frequencies above the acoustic cut-off frequency have revealed vanishing wave travel-times in the vicinity of strong magnetic fields. This detection of apparently evanescent waves, instead of the expected propagating waves, has remained a riddle. Aims: We investigate the influence of a strong magnetic field on the propagation of magneto-acoustic waves in the atmosphere of the solar network. We test whether mode conversion effects can account for the shortening in wave travel-times between different heights in the solar atmosphere. Methods: We carry out numerical simulations of the complex magneto-atmosphere representing the solar magnetic network. In the simulation domain, we artificially excite high frequency waves whose wave travel-times between different height levels we then analyze. Results: The simulations demonstrate that the wave travel-time in the solar magneto-atmosphere is strongly influenced by mode conversion. In a layer enclosing the surface sheet defined by the set of points where the Alfvén speed and the sound speed are equal, called the equipartition level, energy is partially transferred from the fast acoustic mode to the fast magnetic mode. Above the equipartition level, the fast magnetic mode is refracted due to the large gradient of the Alfvén speed. The refractive wave path and the increasing phase speed of the fast mode inside the magnetic canopy significantly reduce the wave travel-time, provided that both observing levels are above the equipartition level. Conclusions: Mode conversion and the resulting excitation and propagation of fast magneto-acoustic waves is responsible for the observation of vanishing wave travel-times in the vicinity of strong magnetic fields. In particular, the wave propagation behavior of the fast mode above the equipartition level may mimic evanescent behavior. The present wave propagation experiments provide an explanation of vanishing wave travel-times as observed with multi
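For reference, the equipartition level referred to above is the surface where the Alfvén speed equals the sound speed; with the standard MHD definitions (not specific to this simulation):

```latex
v_{\mathrm{A}} = \frac{B}{\sqrt{\mu_{0}\,\rho}},
\qquad
c_{s} = \sqrt{\frac{\gamma p}{\rho}},
\qquad
\text{equipartition level:}\quad v_{\mathrm{A}} = c_{s}
```

Fast-to-fast mode conversion is most efficient near this surface, which is why the travel-time shortening appears only when both observing heights lie above it.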

  11. Matching Learning Style Preferences with Suitable Delivery Methods on Textile Design Programmes

    ERIC Educational Resources Information Center

    Sayer, Kate; Studd, Rachel

    2006-01-01

    Textile design is a subject that encompasses both design and technology; aesthetically pleasing patterns and forms must be set within technical parameters to create successful fabrics. When considering education methods in design programmes, identifying the most relevant learning approach is key to creating future successes. Yet are the most…

  12. Full potential methods for analysis/design of complex aerospace configurations

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Szema, Kuo-Yen; Bonner, Ellwood

    1986-01-01

    The steady form of the full potential equation, in conservative form, is employed to analyze and design a wide variety of complex aerodynamic shapes. The nonlinear method is based on the theory of characteristic signal propagation coupled with novel flux biasing concepts and body-fitted mapping procedures. The resulting codes are vectorized for the CRAY XMP and the VPS-32 supercomputers. Use of the full potential nonlinear theory is demonstrated for a single-point supersonic wing design and a multipoint design for transonic maneuver/supersonic cruise/maneuver conditions. Achievement of high aerodynamic efficiency through numerical design is verified by wind tunnel tests. Other studies reported include analyses of a canard/wing/nacelle fighter geometry.
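In conservative form, the steady full potential equation the abstract refers to is, in its standard formulation (details of the cited codes may differ), a mass-conservation law for the velocity potential with the density given by the isentropic relation:

```latex
\nabla \cdot \left( \rho \, \nabla \Phi \right) = 0,
\qquad
\rho = \rho_{\infty}
\left[ 1 + \frac{\gamma - 1}{2} M_{\infty}^{2}
\left( 1 - \frac{|\nabla \Phi|^{2}}{V_{\infty}^{2}} \right)
\right]^{1/(\gamma - 1)}
```

The flux-biasing concepts mentioned in the abstract modify the density flux ρ∇Φ in supersonic zones so that the discretization respects the characteristic directions of signal propagation.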

  13. Tradeoff methods in multiobjective insensitive design of airplane control systems

    NASA Technical Reports Server (NTRS)

    Schy, A. A.; Giesy, D. P.

    1984-01-01

The latest results of an ongoing study of computer-aided design of airplane control systems are given. Constrained minimization algorithms are used, with the design objectives in the constraint vector. The concept of Pareto optimality is briefly reviewed. It is shown how an experienced designer can use it to find designs which are well balanced in all objectives. Then the problem of finding designs which are insensitive to uncertainty in system parameters is discussed, introducing a probabilistic vector definition of sensitivity which is consistent with the deterministic Pareto optimal problem. Insensitivity is important in any practical design, but it is particularly important in the design of feedback control systems, since it is considered to be the most important distinctive property of feedback control. Methods of tradeoff between deterministic and stochastic-insensitive (SI) design are described, and tradeoff design results are presented for the example of a Shuttle lateral stability augmentation system. This example is used because careful studies have been made of the uncertainty in Shuttle aerodynamics. Finally, since accurate statistics of uncertain parameters are usually not available, the effects of crude statistical models on SI designs are examined.
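Pareto optimality itself is easy to illustrate: a design is Pareto-optimal if no other design is at least as good in every objective and strictly better in at least one. The sketch below uses invented objective pairs (minimization in both coordinates); the article's constrained-minimization tradeoff machinery is of course more involved.

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples
    (minimization): p is dominated if some other point q is no worse
    in every objective and differs from p (hence strictly better
    somewhere). Brute-force O(n^2) check, fine for small sets."""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Two conflicting objectives, e.g. (tracking error, parameter sensitivity).
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 3.5), (4.0, 1.0), (2.5, 2.9)]
front = pareto_front(designs)
```

A designer then trades off along the front — e.g. accepting slightly worse tracking for lower sensitivity — rather than comparing dominated designs.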

  14. How Learning Designs, Teaching Methods and Activities Differ by Discipline in Australian Universities

    ERIC Educational Resources Information Center

    Cameron, Leanne

    2017-01-01

    This paper reports on the learning designs, teaching methods and activities most commonly employed within the disciplines in six universities in Australia. The study sought to establish if there were significant differences between the disciplines in learning designs, teaching methods and teaching activities in the current Australian context, as…

  15. Developing an Engineering Design Process Assessment using Mixed Methods.

    PubMed

    Wind, Stefanie A; Alemdar, Meltem; Lingle, Jeremy A; Gale, Jessica D; Moore, Roxanne A

    Recent reforms in science education worldwide include an emphasis on engineering design as a key component of student proficiency in the Science, Technology, Engineering, and Mathematics disciplines. However, relatively little attention has been directed to the development of psychometrically sound assessments for engineering. This study demonstrates the use of mixed methods to guide the development and revision of K-12 Engineering Design Process (EDP) assessment items. Using results from a middle-school EDP assessment, this study illustrates the combination of quantitative and qualitative techniques to inform item development and revisions. Overall conclusions suggest that the combination of quantitative and qualitative evidence provides an in-depth picture of item quality that can be used to inform the revision and development of EDP assessment items. Researchers and practitioners can use the methods illustrated here to gather validity evidence to support the interpretation and use of new and existing assessments.

  16. Local phase method for designing and optimizing metasurface devices.

    PubMed

    Hsu, Liyi; Dupré, Matthieu; Ndao, Abdoulaye; Yellowhair, Julius; Kanté, Boubacar

    2017-10-16

    Metasurfaces have attracted significant attention due to their novel designs for flat optics. However, the approach usually used to engineer metasurface devices assumes either that neighboring elements are identical, extracting the phase information from simulations with periodic boundaries, or that near-field coupling between particles is negligible, extracting the phase from single-particle simulations. Most of the time this is not the case, and the approach thus prevents the optimization of devices that operate away from their optimum. Here, we propose a versatile numerical method to obtain the phase of each element within the metasurface (meta-atoms) while accounting for near-field coupling. Quantifying the phase error of each element of the metasurface with the proposed local phase method paves the way to the design of highly efficient metasurface devices including, but not limited to, deflectors, high numerical aperture metasurface concentrators, lenses, cloaks, and modulators.

  17. An improved design method of a tuned mass damper for an in-service footbridge

    NASA Astrophysics Data System (ADS)

    Shi, Weixing; Wang, Liangkun; Lu, Zheng

    2018-03-01

    Tuned mass dampers (TMDs) have a wide range of applications in the vibration control of footbridges. However, the traditional engineering design method may lead to a mistuned TMD. In this paper, an improved TMD design method based on model updating is proposed. Firstly, the original finite element model (FEM) is studied and the natural characteristics of the in-service or newly built footbridge are identified by field test; the original FEM is then updated. The TMD is designed according to the updated FEM and optimized according to simulations of its vibration control effect. Finally, the installation and field measurement of the TMD are carried out. The improved design method can be applied to both in-service and newly built footbridges. This paper illustrates the improved design method with an engineering example. The frequency identification results of the field test and the original FEM show a relatively large difference between them. The TMD designed according to the updated FEM has a better vibration control effect than the TMD designed according to the original FEM. The site test results show that the TMD is effective in controlling human-induced vibrations.
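
    As context for the tuning step, the classical Den Hartog formulas give the kind of initial TMD parameters that such a procedure would then refine against the updated FEM. A minimal sketch follows; the modal mass, frequency, and mass ratio are hypothetical example values, not taken from the paper:

```python
import math

def den_hartog_tmd(modal_mass, freq_hz, mass_ratio=0.02):
    """Classical Den Hartog tuning of a TMD for a single target mode.

    modal_mass : modal mass of the targeted footbridge mode [kg]
    freq_hz    : natural frequency of that mode [Hz] (e.g. from the
                 updated FEM or a field test)
    mass_ratio : TMD-to-modal mass ratio (typically 1-5 %)
    """
    mu = mass_ratio
    m_t = mu * modal_mass                          # TMD mass
    f_opt = 1.0 / (1.0 + mu)                       # optimal frequency ratio
    omega_t = 2.0 * math.pi * freq_hz * f_opt      # TMD circular frequency
    k_t = m_t * omega_t ** 2                       # TMD spring stiffness
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    c_t = 2.0 * zeta_opt * m_t * omega_t           # TMD damping coefficient
    return m_t, k_t, c_t

# Hypothetical 2.0 Hz mode with 40 t modal mass and a 2 % mass ratio
m_t, k_t, c_t = den_hartog_tmd(modal_mass=40e3, freq_hz=2.0, mass_ratio=0.02)
```

    The point of the paper is precisely that `freq_hz` should come from the updated FEM rather than the original one, since a mistuned frequency degrades the control effect.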

  18. Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods

    NASA Technical Reports Server (NTRS)

    Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark

    2002-01-01

    Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.

  19. An inverse method for the aerodynamic design of three-dimensional aircraft engine nacelles

    NASA Technical Reports Server (NTRS)

    Bell, R. A.; Cedar, R. D.

    1991-01-01

    A fast, efficient and user friendly inverse design system for 3-D nacelles was developed. The system is a product of a 2-D inverse design method originally developed at NASA-Langley and the CFL3D analysis code which was also developed at NASA-Langley and modified for nacelle analysis. The design system uses a predictor/corrector design approach in which an analysis code is used to calculate the flow field for an initial geometry, the geometry is then modified based on the difference between the calculated and target pressures. A detailed discussion of the design method, the process of linking it to the modified CFL3D solver and its extension to 3-D is presented. This is followed by a number of examples of the use of the design system for the design of both axisymmetric and 3-D nacelles.
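
    The predictor/corrector loop described above can be sketched with a toy one-dimensional surrogate in place of the CFL3D analysis. The `analyze` function below is a made-up algebraic stand-in, not a flow solver; only the structure of the iteration (analyze, compare with target pressures, correct the geometry) reflects the abstract:

```python
# Toy predictor/corrector inverse-design loop (illustrative only; the
# real system drives the CFL3D solver, not this algebraic surrogate).

def analyze(thickness):
    """Hypothetical 'flow solver': maps a thickness distribution to a
    surface pressure distribution (stand-in for a CFD analysis)."""
    return [-2.0 * t for t in thickness]   # thicker section -> lower Cp

def inverse_design(cp_target, thickness, relax=0.4, tol=1e-6, max_iter=200):
    for _ in range(max_iter):
        cp = analyze(thickness)            # predictor: analysis pass
        residual = max(abs(ct - c) for ct, c in zip(cp_target, cp))
        if residual < tol:
            break
        # corrector: nudge geometry toward the target pressures
        thickness = [t - relax * (ct - c)
                     for t, c, ct in zip(thickness, cp, cp_target)]
    return thickness

target = [-0.2, -0.4, -0.3]                # desired Cp distribution
shape = inverse_design(target, [0.0, 0.0, 0.0])
```

    With an under-relaxation factor below the stability limit of the surrogate, the loop converges to the geometry whose computed pressures match the target, which is the essence of the predictor/corrector approach.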

  20. A Proposal for the use of the Consortium Method in the Design-build system

    NASA Astrophysics Data System (ADS)

    Miyatake, Ichiro; Kudo, Masataka; Kawamata, Hiroyuki; Fueta, Toshiharu

    In view of the need for efficient implementation of public works projects, the advanced technical skills of private firms are expected to be utilized to reduce project costs, improve the performance and functions of constructed objects, and shorten work periods. The design-build system is a method of ordering design and construction as a single contract, including the design of structural forms and the main specifications of the construction object. It is a system in which the high level of technique of private firms can be utilized to ensure quality of design and construction, rational design, and project efficiency. The objective of this study is to examine the use of a method of forming a consortium of civil engineering consultants and construction companies, an issue in the implementation of the design-build method. Furthermore, by studying the various forms of consortium that may be introduced in the future, it proposes the procedural items required to utilize this method during bidding and after signing a contract, such as the submission of estimates by the civil engineering consultants.

  1. A Typology of Mixed Methods Sampling Designs in Social Science Research

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.

    2007-01-01

    This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…

  2. Rationale, Design, and Methods of the Preschool ADHD Treatment Study (PATS)

    ERIC Educational Resources Information Center

    Kollins, Scott; Greenhill, Laurence; Swanson, James; Wigal, Sharon; Abikoff, Howard; McCracken, James; Riddle, Mark; McGough, James; Vitiello, Benedetto; Wigal, Tim; Skrobala, Anne; Posner, Kelly; Ghuman, Jaswinder; Davies, Mark; Cunningham, Charles; Bauzo, Audrey

    2006-01-01

    Objective: To describe the rationale and design of the Preschool ADHD Treatment Study (PATS). Method: PATS was a National Institutes of Mental Health-funded, multicenter, randomized, efficacy trial designed to evaluate the short-term (5 weeks) efficacy and long-term (40 weeks) safety of methylphenidate (MPH) in preschoolers with…

  3. Methods to enable the design of bioactive small molecules targeting RNA

    PubMed Central

    Disney, Matthew D.; Yildirim, Ilyas; Childs-Disney, Jessica L.

    2014-01-01

    RNA is an immensely important target for small molecule therapeutics or chemical probes of function. However, methods that identify, annotate, and optimize RNA-small molecule interactions that could enable the design of compounds that modulate RNA function are in their infancy. This review describes recent approaches that have been developed to understand and optimize RNA motif-small molecule interactions, including Structure-Activity Relationships Through Sequencing (StARTS), quantitative structure-activity relationships (QSAR), chemical similarity searching, structure-based design and docking, and molecular dynamics (MD) simulations. Case studies described include the design of small molecules targeting RNA expansions, the bacterial A-site, viral RNAs, and telomerase RNA. These approaches can be combined to afford a synergistic method to exploit the myriad of RNA targets in the transcriptome. PMID:24357181

  4. Trends in study design and the statistical methods employed in a leading general medicine journal.

    PubMed

    Gosho, M; Sato, Y; Nagashima, K; Takahashi, S

    2018-02-01

    Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. A study of the comprehensive details and current trends of study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information about evidence-based medicine. Our purpose was to illustrate the study designs and statistical methods employed in recent medical literature. This was an extension of the study by Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Owing to the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (eg, the Kaplan-Meier estimator and the Cox regression model) were most frequently applied, the Gray test and the Fine-Gray proportional hazards model for considering competing risks were sometimes used for more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. These methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel designs, such as adaptive dose selection and sample size re-estimation, were sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analysis in light of the information found in some publications. Use of adaptive design with interim analyses is increasing.

  5. Methods in Enzymology: “Flexible backbone sampling methods to model and design protein alternative conformations”

    PubMed Central

    Ollikainen, Noah; Smith, Colin A.; Fraser, James S.; Kortemme, Tanja

    2013-01-01

    Sampling alternative conformations is key to understanding how proteins work and engineering them for new functions. However, accurately characterizing and modeling protein conformational ensembles remains experimentally and computationally challenging. These challenges must be met before protein conformational heterogeneity can be exploited in protein engineering and design. Here, as a stepping stone, we describe methods to detect alternative conformations in proteins and strategies to model these near-native conformational changes based on backrub-type Monte Carlo moves in Rosetta. We illustrate how Rosetta simulations that apply backrub moves improve modeling of point mutant side chain conformations, native side chain conformational heterogeneity, functional conformational changes, tolerated sequence space, protein interaction specificity, and amino acid co-variation across protein-protein interfaces. We include relevant Rosetta command lines and RosettaScripts to encourage the application of these types of simulations to other systems. Our work highlights that critical scoring and sampling improvements will be necessary to approximate conformational landscapes. Challenges for the future development of these methods include modeling conformational changes that propagate away from designed mutation sites and modulating backbone flexibility to predictively design functionally important conformational heterogeneity. PMID:23422426

  6. Research and Design of Rootkit Detection Method

    NASA Astrophysics Data System (ADS)

    Liu, Leian; Yin, Zuanxing; Shen, Yuli; Lin, Haitao; Wang, Hongjiang

    Rootkits are one of the most important issues in network communication systems, related to the security and privacy of Internet users. Because of back doors in the operating system, a hacker can use a rootkit to attack and invade other people's computers and can thus easily capture passwords and message traffic to and from these computers. With the development of rootkit technology, its applications are more and more extensive, and it becomes increasingly difficult to detect. In addition, for various reasons, such as trade secrets and the difficulty of development, information on rootkit detection technology and effective tools remain relatively scarce. In this paper, based on an in-depth analysis of rootkit detection technology, a new rootkit detection structure is designed and a new method (software), X-Anti, is proposed. Test results show that software designed based on the proposed structure is much more efficient than other rootkit detection software.

  7. Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1989-01-01

    An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.

  8. 40 CFR 53.11 - Cancellation of reference or equivalent method designation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40, Protection of Environment, revised as of 2010-07-01. § 53.11: Cancellation of reference or equivalent method designation. Environmental Protection Agency (continued), Air Programs (continued), Ambient Air Monitoring Reference and Equivalent Methods, General...

  9. Application of computational aerodynamics methods to the design and analysis of transport aircraft

    NASA Technical Reports Server (NTRS)

    Da Costa, A. L.

    1978-01-01

    The application and validation of several computational aerodynamic methods in the design and analysis of transport aircraft is established. An assessment is made concerning more recently developed methods that solve three-dimensional transonic flow and boundary layers on wings. Capabilities of subsonic aerodynamic methods are demonstrated by several design and analysis efforts. Among the examples cited are the B747 Space Shuttle Carrier Aircraft analysis, nacelle integration for transport aircraft, and winglet optimization. The accuracy and applicability of a new three-dimensional viscous transonic method is demonstrated by comparison of computed results to experimental data.

  10. General design method of ultra-broadband perfect absorbers based on magnetic polaritons.

    PubMed

    Liu, Yuanbin; Qiu, Jun; Zhao, Junming; Liu, Linhua

    2017-10-02

    Starting from one-dimensional gratings and the theory of magnetic polaritons (MPs), we propose a general design method of ultra-broadband perfect absorbers. Based on the proposed design method, the obtained absorber can keep the spectrum-average absorptance over 99% at normal incidence in a wide range of wavelengths; this work simultaneously reveals the robustness of the absorber to incident angles and polarization angles of incident light. Furthermore, this work shows that the spectral band of perfect absorption can be flexibly extended to near the infrared regime by adjusting the structure dimension. The findings of this work may facilitate the active design of ultra-broadband absorbers based on plasmonic nanostructures.

  11. Application of Function-Failure Similarity Method to Rotorcraft Component Design

    NASA Technical Reports Server (NTRS)

    Roberts, Rory A.; Stone, Robert E.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Performance and safety are the top concerns of high-risk aerospace applications at NASA. Eliminating or reducing performance and safety problems can be achieved with a thorough understanding of the potential failure modes in the designs that lead to these problems. The majority of techniques use prior knowledge and experience, as well as Failure Modes and Effects Analysis, to determine the potential failure modes of aircraft. During the design of aircraft, a general technique is needed to ensure that every potential failure mode is considered, while avoiding spending time on improbable failure modes. In this work, this is accomplished by mapping failure modes to specific components, which are described by their functionality. The failure modes are then linked to the basic functions that are carried out within the components of the aircraft. Using this technique, designers can examine the basic functions and select appropriate analyses to eliminate or design out the potential failure modes. The fundamentals of this method were previously introduced for a simple rotating machine test rig with basic functions that are common to a rotorcraft. In this paper, the technique is applied to the engine and power train of a rotorcraft, using failures and functions obtained from accident reports and engineering drawings.

  12. The value of value of information: best informing research design and prioritization using current methods.

    PubMed

    Eckermann, Simon; Karnon, Jon; Willan, Andrew R

    2010-01-01

    Value of information (VOI) methods have been proposed as a systematic approach to inform optimal research design and prioritization. Four related questions arise that VOI methods could address. (i) Is further research for a health technology assessment (HTA) potentially worthwhile? (ii) Is the cost of a given research design less than its expected value? (iii) What is the optimal research design for an HTA? (iv) How can research funding be best prioritized across alternative HTAs? Following Occam's razor, we consider the usefulness of VOI methods in informing questions 1-4 relative to their simplicity of use. Expected value of perfect information (EVPI) with current information, while simple to calculate, is shown to provide neither a necessary nor a sufficient condition to address question 1, given that what EVPI needs to exceed varies with the cost of research design, which can vary from very large down to negligible. Hence, for any given HTA, EVPI does not discriminate, as it can be large and further research not worthwhile or small and further research worthwhile. In contrast, each of questions 1-4 are shown to be fully addressed (necessary and sufficient) where VOI methods are applied to maximize expected value of sample information (EVSI) minus expected costs across designs. In comparing complexity in use of VOI methods, applying the central limit theorem (CLT) simplifies analysis to enable easy estimation of EVSI and optimal overall research design, and has been shown to outperform bootstrapping, particularly with small samples. Consequently, VOI methods applying the CLT to inform optimal overall research design satisfy Occam's razor in both improving decision making and reducing complexity. Furthermore, they enable consideration of relevant decision contexts, including option value and opportunity cost of delay, time, imperfect implementation and optimal design across jurisdictions. More complex VOI methods such as bootstrapping of the expected value of
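
    As a minimal illustration of why EVPI with current information alone does not settle question 1, a Monte Carlo EVPI calculation for a two-option decision can be sketched. The normal prior on incremental net benefit below is a hypothetical example, and this is not the authors' CLT-based EVSI machinery:

```python
import random

# Monte Carlo EVPI for a two-option decision (sketch under assumed
# priors). Incremental net benefit (INB) of treatment B over A is
# uncertain: INB ~ Normal(mean_inb, sd_inb), values chosen arbitrarily.
random.seed(1)
mean_inb, sd_inb = 500.0, 2000.0
draws = [random.gauss(mean_inb, sd_inb) for _ in range(100_000)]

# With current information we adopt B iff its expected INB is positive;
# with perfect information we would decide per realisation.
value_current = max(0.0, sum(draws) / len(draws))
value_perfect = sum(max(0.0, d) for d in draws) / len(draws)
evpi_per_patient = value_perfect - value_current
```

    Even when this EVPI is large, further research may not be worthwhile if the research design is costly, and even a small EVPI can justify cheap research, which is why the abstract argues for maximizing EVSI minus expected cost across designs instead.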

  13. A global parallel model based design of experiments method to minimize model output uncertainty.

    PubMed

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
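
    A crude flavor of parallel, initial-estimate-free design over a bounded parameter space can be sketched by scoring candidate measurement times by the spread of model predictions across that space. This is only a toy proxy; the paper's actual method uses sparse grids and scenario trees, which this sketch does not implement, and the decay model is a made-up example:

```python
import math

# Toy model-based design over a bounded uncertain parameter space:
# score each candidate measurement time by how widely the bounded
# parameter range spreads the predicted output there.

def model(k, t):
    return math.exp(-k * t)            # first-order decay, uncertain rate k

k_grid = [0.5 + 0.05 * i for i in range(31)]     # bounded parameter space
times = [0.1 * j for j in range(1, 51)]          # candidate design points

def spread(t):
    ys = [model(k, t) for k in k_grid]
    return max(ys) - min(ys)                     # prediction spread at t

# pick the 3 highest-spread times as a (naive) parallel design
design = sorted(times, key=spread, reverse=True)[:3]
```

    Measurements at high-spread times discriminate most strongly among parameter values, so constraining the output there shrinks the feasible parameter set fastest; a real implementation would also enforce separation between design points and propagate the chosen scenarios forward.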

  14. Connecting Generations: Developing Co-Design Methods for Older Adults and Children

    ERIC Educational Resources Information Center

    Xie, Bo; Druin, Allison; Fails, Jerry; Massey, Sheri; Golub, Evan; Franckel, Sonia; Schneider, Kiki

    2012-01-01

    As new technologies emerge that can bring older adults together with children, little has been discussed by researchers concerning the design methods used to create these new technologies. Giving both children and older adults a voice in a shared design process comes with many challenges. This paper details an exploratory study focusing on…

  15. Tuning Parameters in Heuristics by Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Arin, Arif; Rabadi, Ghaith; Unal, Resit

    2010-01-01

    With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions by exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into finding the best parameter setting for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models, including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design, and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter settings, the preliminary results for optimal solutions of multiple instances were found efficiently.
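
    A 2-level full factorial screening of GA parameters can be sketched as follows. The three parameters, their levels, and the tardiness surrogate are hypothetical illustrations; in practice each treatment would be an actual GA run on benchmark instances rather than a closed-form response:

```python
from itertools import product

# 2-level full factorial (2^k) screening of GA parameters.
factors = {                       # (low, high) level for each parameter
    "pop_size":   (50, 200),
    "cross_rate": (0.6, 0.9),
    "mut_rate":   (0.01, 0.10),
}

def response(pop, cx, mut):
    """Hypothetical tardiness surrogate (lower is better); stands in
    for averaging real GA runs at this parameter setting."""
    return 1000 - 0.5 * pop - 400 * cx + 2000 * mut

runs = list(product(*factors.values()))       # all 2^3 = 8 treatments
results = [response(*r) for r in runs]

# main effect of each factor = mean(high-level runs) - mean(low-level runs)
effects = {}
for i, name in enumerate(factors):
    lo, hi = factors[name]
    high = [y for r, y in zip(runs, results) if r[i] == hi]
    low  = [y for r, y in zip(runs, results) if r[i] == lo]
    effects[name] = sum(high) / len(high) - sum(low) / len(low)

best = runs[results.index(min(results))]      # best treatment observed
```

    Unlike OFAT, every main effect here is averaged over both levels of the other factors, and the same 8 runs can also estimate the interaction terms that OFAT cannot see.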

  16. Oxygen-resistant hydrogenases and methods for designing and making same

    DOEpatents

    King, Paul [Golden, CO; Ghirardi, Maria L [Lakewood, CO; Seibert, Michael [Lakewood, CO

    2009-03-10

    The invention provides oxygen-resistant iron-hydrogenases ([Fe]-hydrogenases) for use in the production of H2. Methods used in the design and engineering of the oxygen-resistant [Fe]-hydrogenases are disclosed, as are the methods of transforming and culturing appropriate host cells with the oxygen-resistant [Fe]-hydrogenases. Finally, the invention provides methods for utilizing the transformed, oxygen-insensitive host cells in the bulk production of H2 in a light-catalyzed reaction having water as the reactant.

  17. Oxygen-resistant hydrogenases and methods for designing and making same

    DOEpatents

    King, Paul; Ghirardi, Maria Lucia; Seibert, Michael

    2014-03-04

    The invention provides oxygen-resistant iron-hydrogenases ([Fe]-hydrogenases) for use in the production of H2. Methods used in the design and engineering of the oxygen-resistant [Fe]-hydrogenases are disclosed, as are the methods of transforming and culturing appropriate host cells with the oxygen-resistant [Fe]-hydrogenases. Finally, the invention provides methods for utilizing the transformed, oxygen-insensitive host cells in the bulk production of H2 in a light-catalyzed reaction having water as the reactant.

  18. Analytical methods in the high conversion reactor core design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeggel, W.; Oldekop, W.; Axmann, J.K.

    High conversion reactor (HCR) design methods have been used at the Technical University of Braunschweig (TUBS) with the technological support of Kraftwerk Union (KWU). The present state and objectives of this cooperation between KWU and TUBS in the field of HCRs are described using existing design models and current activities aimed at further development and validation of the codes. The hard physical and thermal-hydraulic boundary conditions of pressurized water reactor (PWR) cores with a high degree of fuel utilization result from the tight packing of the HCR fuel rods and the high fissionable plutonium content of the fuel. In terms of design, the problem will be solved with rod bundles whose fuel rods are adjusted by helical spacers to the proposed small rod pitches. These HCR properties require novel computational models for neutron physics, thermal hydraulics, and fuel rod design. By means of a survey of the codes, the analytical procedure for present-day HCR core design is presented. The design programs are currently under intensive development, as design tools with a solid scientific foundation and with essential parameters that are widely valid and are required for a promising optimization of the HCR core. Design results and a survey of future HCR development are given. In this connection, the reoptimization of the PWR core in the direction of an HCR is considered a fascinating scientific task, with respect to both economic and safety aspects.

  19. Fully three-dimensional and viscous semi-inverse method for axial/radial turbomachine blade design

    NASA Astrophysics Data System (ADS)

    Ji, Min

    2008-10-01

    A fully three-dimensional viscous semi-inverse method for the design of turbomachine blades is presented in this work. Built on a time-marching Reynolds-Averaged Navier-Stokes solver, the inverse scheme is capable of designing axial/radial turbomachinery blades in flow regimes ranging from very low Mach number to transonic/supersonic flows. In order to solve flow at all-speed conditions, the preconditioning technique is incorporated into the basic JST time-marching scheme. The accuracy of the resulting flow solver is verified with documented experimental data and commercial CFD codes. The level of accuracy of the flow solver exhibited in those verification cases is typical of CFD analysis employed in the design process in industry. The inverse method described in the present work takes pressure loading and blade thickness as prescribed quantities and computes the corresponding three-dimensional blade camber surface. In order to have the option of imposing geometrical constraints on the designed blade shapes, a new inverse algorithm is developed to solve the camber surface at specified spanwise pseudo stream-tubes (i.e. along grid lines), while the blade geometry is constructed through ruling (e.g. straight-line elements) at the remaining spanwise stations. The new inverse algorithm involves re-formulating the boundary condition on the blade surfaces as a hybrid inverse/analysis boundary condition, preserving the full three-dimensional nature of the flow. The new design procedure can be interpreted as a fully three-dimensional viscous semi-inverse method. The ruled surface design ensures the blade surface smoothness and mechanical integrity as well as achieves cost reduction for the manufacturing process. A numerical target-shooting experiment for a mixed flow impeller shows that the semi-inverse method is able to accurately recover the target blade composed of straight-line elements from a different initial blade. The semi-inverse method is proved to work well with

  20. Synthesis of calculational methods for design and analysis of radiation shields for nuclear rocket systems

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.; Jordan, T. A.; Soltesz, R. G.; Woodsum, H. C.

    1969-01-01

    Eight computer programs make up a nine-volume synthesis containing two design methods for nuclear rocket radiation shields. The first design method is appropriate for parametric and preliminary studies, while the second accomplishes the verification of a final nuclear rocket reactor design.

  1. Method of transition from 3D model to its ontological representation in aircraft design process

    NASA Astrophysics Data System (ADS)

    Govorkov, A. S.; Zhilyaev, A. S.; Fokin, I. V.

    2018-05-01

    This paper proposes a method of transition from a 3D model to its ontological representation and describes its usage in the aircraft design process. The problems of design for manufacturability and design automation are also discussed. The introduced method aims to ease data exchange between important aircraft design phases, namely engineering and design control, and to increase design speed and 3D model customizability. This requires careful selection of the complex systems (CAD/CAM/CAE/PDM) that provide the basis for integrating design with the technological preparation of production and that more fully take into account the characteristics of products and the processes for their manufacture. Solving this problem is important, as investment in automation defines a company's competitiveness in the years ahead.

  2. Mixed-Methods Design in Biology Education Research: Approach and Uses

    PubMed Central

    Warfa, Abdi-Rizak M.

    2016-01-01

    Educational research often requires mixing different research methodologies to strengthen findings, better contextualize or explain results, or minimize the weaknesses of a single method. This article provides practical guidelines on how to conduct such research in biology education, with a focus on mixed-methods research (MMR) that uses both quantitative and qualitative inquiries. Specifically, the paper provides an overview of mixed-methods design typologies most relevant in biology education research. It also discusses common methodological issues that may arise in mixed-methods studies and ways to address them. The paper concludes with recommendations on how to report and write about MMR. PMID:27856556

  3. Development of a turbomachinery design optimization procedure using a multiple-parameter nonlinear perturbation method

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.

    1984-01-01

    An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to demonstrate a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple-parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.

  4. Process optimization using combinatorial design principles: parallel synthesis and design of experiment methods.

    PubMed

    Gooding, Owen W

    2004-06-01

    The use of parallel synthesis techniques with statistical design of experiment (DoE) methods is a powerful combination for the optimization of chemical processes. Advances in parallel synthesis equipment and easy to use software for statistical DoE have fueled a growing acceptance of these techniques in the pharmaceutical industry. As drug candidate structures become more complex at the same time that development timelines are compressed, these enabling technologies promise to become more important in the future.
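
The pairing of parallel synthesis hardware with statistical DoE lends itself to a small illustration. As a hedged sketch (the factor names and levels below are invented for the example; the abstract describes no specific software), a two-level full factorial design, the simplest statistical DoE screen, can be generated with the Python standard library:

```python
from itertools import product

def full_factorial(levels):
    """Return all factor-level combinations of a two-level DoE screen.

    `levels` maps a factor name to its (low, high) settings; the names
    used below are purely illustrative process parameters.
    """
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in product(*(levels[n] for n in names))]

# Hypothetical factors for a parallel process-optimization screen
design = full_factorial({
    "temperature_C": (25, 60),
    "reagent_equiv": (1.0, 2.0),
    "time_h": (2, 12),
})
print(len(design))  # 2 levels ^ 3 factors -> 8 runs
```

Each of the 8 runs maps onto one reactor position in a parallel synthesis block; factor effects are then estimated by regression on the coded levels.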

  5. Brown-York quasilocal energy in Lanczos-Lovelock gravity and black hole horizons

    NASA Astrophysics Data System (ADS)

    Chakraborty, Sumanta; Dadhich, Naresh

    2015-12-01

    A standard candidate for quasilocal energy in general relativity is the Brown-York energy, which is essentially a two-dimensional surface integral of the extrinsic curvature on the two-boundary of a spacelike hypersurface referenced to flat spacetime. Several years back, one of us conjectured that the black hole horizon is defined by equipartition of gravitational and non-gravitational energy. By employing the above definition of quasilocal Brown-York energy, we have verified the equipartition conjecture for static charged and charged axisymmetric black holes in general relativity. We have further generalized the Brown-York formalism to all orders in Lanczos-Lovelock theories of gravity and have verified the conjecture for pure Lovelock charged black holes in all even d = 2m + 2 dimensions, where m is the degree of the Lovelock action. It turns out that the equipartition conjecture works only for pure Lovelock, and not for Einstein-Lovelock black holes.

  6. Efficient numerical method of freeform lens design for arbitrary irradiance shaping

    NASA Astrophysics Data System (ADS)

    Wojtanowski, Jacek

    2018-05-01

    A computational method to design a lens with a flat entrance surface and a freeform exit surface that can transform a collimated, generally non-uniform input beam into a beam with a desired irradiance distribution of arbitrary shape is presented. The methodology is based on non-linear elliptic partial differential equations, known as Monge-Ampère PDEs. This paper describes an original numerical algorithm to solve this problem by applying the Gauss-Seidel method with simplified boundary conditions. A joint MATLAB-ZEMAX environment is used to implement and verify the method. To prove the efficiency of the proposed approach, an exemplary study in which the designed lens is faced with a challenging illumination task is shown. An analysis of solution stability, iteration-to-iteration ray mapping evolution (attached in video format), depth of focus and non-zero étendue efficiency is performed.
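
The Monge-Ampère update itself is nonlinear and beyond an abstract, but the Gauss-Seidel relaxation at the core of such a solver can be illustrated on the simpler linear Poisson problem. This is a generic textbook sketch, not the paper's algorithm:

```python
def gauss_seidel_poisson(f, n, sweeps=400):
    """Solve u_xx + u_yy = f on the unit square with zero Dirichlet
    boundary values, using Gauss-Seidel sweeps on an n x n interior grid.

    Each pass updates points in place with the latest neighbor values,
    which is exactly the sweep pattern a Monge-Ampère solver reuses
    (with a more involved nonlinear per-point update).
    """
    h = 1.0 / (n + 1)
    u = [[0.0] * (n + 2) for _ in range(n + 2)]
    for _ in range(sweeps):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                  u[i][j - 1] + u[i][j + 1] -
                                  h * h * f(i * h, j * h))
    return u
```

In the lens-design setting the per-point update comes from discretizing the Monge-Ampère operator rather than the Laplacian, but the in-place sweep structure is the same.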

  7. Efficient method to design RF pulses for parallel excitation MRI using gridding and conjugate gradient.

    PubMed

    Feng, Shuo; Ji, Jim

    2014-04-01

    Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desirable excitation patterns.
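
Setting the gridding step aside, the conjugate gradient engine that such pulse-design methods rely on can be sketched generically. This is a plain CG solver for a small symmetric positive-definite system, not the authors' implementation (which operates on a Fourier-domain excitation model):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for a symmetric positive-definite matrix A
    (given as a list of row lists) by the conjugate gradient method."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                  # residual b - A x for x = 0
    p = r[:]                  # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter or n):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations; in pulse design the speedup reported above comes from cutting the cost of each matrix-vector product via gridding, not from changing this iteration.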

  8. Virginia method for the design of dense-graded emulsion mixes.

    DOT National Transportation Integrated Search

    1982-01-01

    An investigation into the Illinois method for the design of dense-graded emulsion base mixes had resulted in a report offering several modifications to that procedure. The Bituminous Research Advisory Committee then recommended that the Illinois meth...

  9. Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship-Quasi-Experimental Designs.

    PubMed

    Schweizer, Marin L; Braun, Barbara I; Milstone, Aaron M

    2016-10-01

    Quasi-experimental studies evaluate the association between an intervention and an outcome using experiments in which the intervention is not randomly assigned. Quasi-experimental studies are often used to evaluate rapid responses to outbreaks or other patient safety problems requiring prompt, nonrandomized interventions. Quasi-experimental studies can be categorized into 3 major types: interrupted time-series designs, designs with control groups, and designs without control groups. This methods paper highlights key considerations for quasi-experimental studies in healthcare epidemiology and antimicrobial stewardship, including study design and analytic approaches to avoid selection bias and other common pitfalls of quasi-experimental studies. Infect Control Hosp Epidemiol 2016;1-6.

  10. A Design Method for Topologically Insulating Metamaterials

    NASA Astrophysics Data System (ADS)

    Matlack, Kathryn; Serra-Garcia, Marc; Palermo, Antonio; Huber, Sebastian; Daraio, Chiara

    Topological insulators are a unique class of electronic materials that are insulating in the bulk but exhibit protected edge states immune to back-scattering and defects. Discrete models, such as mass-spring systems, provide a means to translate these properties, based on the quantum spin Hall effect, to the mechanical domain. This talk will present how to engineer a 2D mechanical metamaterial that supports topologically-protected and defect-immune edge states, directly from the mass-spring model of a topological insulator. The design method uses combinatorial searches plus gradient-based optimizations to determine the configuration of the metamaterial's building blocks that leads to the global behavior specified by the target mass-spring model. We use metamaterials with weakly coupled unit cells to isolate the dynamics within our frequency range of interest and to enable a systematic design process. This approach can generally be applied to implement behaviors of a discrete model directly in mechanical, acoustic, or photonic metamaterials within the weak-coupling regime. This work was partially supported by the ETH Postdoctoral Fellowship, and by the Swiss National Science Foundation.

  11. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    NASA Astrophysics Data System (ADS)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods in the modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods, and it offers the visual clarity inherent in both topological models and structural matrices, as well as the robustness of linear algebra underlying the matrix-based analysis. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and exploitation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations that considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min-1, and improve machining accuracy.

  12. Design optimization of hydraulic turbine draft tube based on CFD and DOE method

    NASA Astrophysics Data System (ADS)

    Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin

    2018-03-01

    In order to improve the performance of a hydraulic turbine draft tube during its design process, the draft tube is optimized in this paper on a multi-disciplinary collaborative design optimization platform that combines computational fluid dynamics (CFD) with design of experiments (DOE). The geometrical design variables are the median section of the draft tube and the cross section of its exit diffuser, and the objective is to maximize the pressure recovery factor (Cp). Sample matrices required for the shape optimization of the draft tube are generated by the optimal Latin hypercube (OLH) method of the DOE technique, and their performance is evaluated through CFD numerical simulation. Subsequently, the main effect analysis and the sensitivity analysis of the geometrical parameters of the draft tube are accomplished. Then, the optimal values of the geometrical design variables are determined using the response surface method. The optimized draft tube shows a marked performance improvement over the original.
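
As a rough sketch of the sampling step, a plain (non-optimal) Latin hypercube sampler conveys the stratification idea; the paper's optimal Latin hypercube additionally optimizes a space-filling criterion over such designs, which this sketch omits:

```python
import random

def latin_hypercube(n_samples, n_factors, seed=0):
    """Basic Latin hypercube sample in the unit hypercube.

    Each factor's range is cut into n_samples equal bins and every bin
    is used exactly once per factor; an *optimal* LH design would
    further optimize how the bins are paired across factors.
    """
    rng = random.Random(seed)
    columns = []
    for _ in range(n_factors):
        bins = list(range(n_samples))
        rng.shuffle(bins)
        # place one point, uniformly at random, inside each bin
        columns.append([(b + rng.random()) / n_samples for b in bins])
    return [[col[i] for col in columns] for i in range(n_samples)]
```

Each row is one candidate geometry (here, a point in a normalized design space) to be evaluated by the CFD solver; rescaling each coordinate to the physical variable range is left to the caller.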

  13. A PC-based inverse design method for radial and mixed flow turbomachinery

    NASA Technical Reports Server (NTRS)

    Skoe, Ivar Helge

    1991-01-01

    An inverse design method suitable for radial and mixed flow turbomachinery is presented. The codes are based on the streamline curvature concept; therefore, the method is applicable to personal computers of the 286/287 range. In addition to the imposed aerodynamic constraints, mechanical constraints are imposed during the design process to ensure that the resulting geometry satisfies production considerations and that structural requirements are taken into account. Through the use of Bezier curves in the geometric modeling, the same subroutine is used to prepare input for both the aero and structural files, since it is important to ensure that the geometric data are identical for structural analysis and production. To illustrate the method, a mixed flow turbine design is shown.

  14. Fast Numerical Methods for the Design of Layered Photonic Structures with Rough Interfaces

    NASA Technical Reports Server (NTRS)

    Komarevskiy, Nikolay; Braginsky, Leonid; Shklover, Valery; Hafner, Christian; Lawson, John

    2011-01-01

    Modified boundary conditions (MBC) and a multilayer approach (MA) are proposed as fast and efficient numerical methods for the design of 1D photonic structures with rough interfaces. These methods are applicable to structures composed of materials with an arbitrary permittivity tensor. MBC and MA are numerically validated on different types of interface roughness and permittivities of the constituent materials. The proposed methods can be combined with the 4x4 scattering matrix method as a field solver and an evolutionary strategy as an optimizer. The resulting optimization procedure is fast, accurate, numerically stable and can be used to design structures for various applications.

  15. Designation and verification of road markings detection and guidance method

    NASA Astrophysics Data System (ADS)

    Wang, Runze; Jian, Yabin; Li, Xiyuan; Shang, Yonghong; Wang, Jing; Zhang, JingChuan

    2018-01-01

    With the rapid development of China's space industry, digitization and intelligent operation are the trend of the future. This report presents foundational research on a guidance system based on the HSV color space. This research will help in designing an automatic navigation and parking system for the frock transport car and the infrared-lamp-homogeneity intelligent test equipment. The drive mode, steering mode and navigation method were selected. In consideration of practicability, a front-wheel-steering chassis was chosen. The steering mechanism is controlled by stepping motors and guided by machine vision. The steering mechanism was optimized and calibrated: a mathematical model was built and objective functions were constructed for it. The extraction method for the steering line was studied, and the motion controller was designed and optimized. The theory of the HSV and RGB color spaces and an analysis of the test results are discussed. Camera calibration is performed with the OpenCV library on a Linux system, and the guidance algorithm is designed on the basis of the HSV color space.
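
The report's guidance algorithm is not reproduced in the abstract. As a hedged sketch of the HSV idea (using the standard library's colorsys on individual pixels instead of the report's OpenCV pipeline, with an invented hue band), a color mask for a painted guidance line might look like:

```python
import colorsys

def hue_mask(pixels, hue_lo, hue_hi, s_min=0.3, v_min=0.2):
    """Flag pixels whose HSV hue falls inside [hue_lo, hue_hi].

    `pixels` is a list of (r, g, b) tuples with components in [0, 1];
    hue is colorsys's convention in [0, 1). The saturation and value
    floors reject washed-out and dark pixels. A real system would run
    an equivalent threshold per video frame with OpenCV.
    """
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        mask.append(hue_lo <= h <= hue_hi and s >= s_min and v >= v_min)
    return mask
```

A hue band is preferred over raw RGB thresholds because hue is comparatively stable under the illumination changes a transport car encounters indoors.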

  16. Different methods to analyze stepped wedge trial designs revealed different aspects of intervention effects.

    PubMed

    Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R

    2016-04-01

    Within epidemiology, a stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is a huge variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models were used to analyze data from a stepped wedge trial design on two example data sets. The four methods were chosen because they have been (frequently) used in practice. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different numbers of intervention measurements, whereas in method 4, the changes in the outcome variable between subsequent measurements are analyzed. Regarding the results in the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time. Method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite. Both methods 2 and 4 showed significant intervention effects, whereas the other two methods did not. For method 4, the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of a method partly depends on the type of the intervention and the possible time-dependent effect of the intervention. Furthermore, it is advised to combine the results of the different methods to obtain an interpretable overall result. Copyright © 2016 Elsevier Inc. All rights reserved.
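
The one-way crossover layout that all four methods model can be made concrete with a small schematic. This sketches only the 0/1 exposure indicator per arm and period, not the authors' mixed-model analysis code:

```python
def stepped_wedge_layout(n_arms, n_periods):
    """0/1 intervention indicator per (arm, period) for a classic
    stepped wedge: arm k crosses over to the intervention at period
    k + 1 and never reverts (one-way crossover)."""
    assert n_periods >= n_arms + 1, "need a baseline period plus one step per arm"
    return [[1 if t > k else 0 for t in range(n_periods)]
            for k in range(n_arms)]

layout = stepped_wedge_layout(3, 4)
# arm 0: [0, 1, 1, 1]; arm 1: [0, 0, 1, 1]; arm 2: [0, 0, 0, 1]
```

Method 1 collapses this matrix into intervention-versus-control cells, whereas methods 2-4 exploit its arm and time structure; the confounding of exposure with calendar time visible in the staircase is why adjusting for time can change the conclusions.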

  17. A new decentralised controller design method for a class of strongly interconnected systems

    NASA Astrophysics Data System (ADS)

    Duan, Zhisheng; Jiang, Zhong-Ping; Huang, Lin

    2017-02-01

    In this paper, two interconnected structures are first discussed, under which some closed-loop subsystems must be unstable to make the whole interconnected system stable, which can be viewed as a kind of strongly interconnected systems. Then, comparisons with small gain theorem are discussed and large gain interconnected characteristics are shown. A new approach for the design of decentralised controllers is presented by determining the Lyapunov function structure previously, which allows the existence of unstable subsystems. By fully utilising the orthogonal space information of input matrix, some new understandings are presented for the construction of Lyapunov matrix. This new method can deal with decentralised state feedback, static output feedback and dynamic output feedback controllers in a unified framework. Furthermore, in order to reduce the design conservativeness and deal with robustness, a new robust decentralised controller design method is given by combining with the parameter-dependent Lyapunov function method. Some basic rules are provided for the choice of initial variables in Lyapunov matrix or new introduced slack matrices. As byproducts, some linear matrix inequality based sufficient conditions are established for centralised static output feedback stabilisation. Effects of unstable subsystems in nonlinear Lur'e systems are further discussed. The corresponding decentralised controller design method is presented for absolute stability. The examples illustrate that the new method is significantly effective.

  18. Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods

    PubMed Central

    Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.

    2013-01-01

    Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822

  19. An efficient temporal database design method based on EER

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Huang, Jiping; Miao, Hua

    2007-12-01

    Many existing methods of modeling temporal information are based on the logical model, which makes relational schema optimization more difficult and complicated. In this paper, based on the conventional EER model, the authors attempt to analyse and abstract temporal information in the conceptual modelling phase according to the concrete requirements for historical information. A temporal data model named BTEER is then presented. BTEER not only retains all the design ideas and methods of EER, giving it good upward compatibility, but also effectively supports the modelling of both valid time and transaction time. In addition, BTEER can be transformed to EER easily and automatically. Practice has shown that this method models temporal information well.

  20. Efficient method to design RF pulses for parallel excitation MRI using gridding and conjugate gradient

    PubMed Central

    Feng, Shuo

    2014-01-01

    Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desirable excitation patterns. PMID:24834420

  1. Design for a Manufacturing Method for Memristor-Based Neuromorphic Computing Processors

    DTIC Science & Technology

    2013-03-01

    Design for a Manufacturing Method for Memristor-Based Neuromorphic Computing Processors. University of Pittsburgh, March 2013. Contract number: FA8750-11-1-0271; program element: 62788F. ...synapses and implemented a neuromorphic computing system based on our proposed synapse designs. The robustness of our system is also evaluated by

  2. The Innovation Blaze-Method of Development Professional Thinking Designers in the Modern Higher Education

    ERIC Educational Resources Information Center

    Alekseeva, Irina V.; Barsukova, Natalia I.; Pallotta, Valentina I.; Skovorodnikova, Nadia A.

    2017-01-01

    This article demonstrates the urgency of the problem of developing the professional thinking of design students under modern conditions of higher education. The authors substantiate the need for an innovative Blaise-method for developing the professional design thinking of students in higher education. "Blaise-method" named by us in…

  3. Report on Component 2 - Designing New Methods for Visualizing Text in Spatial Contexts

    DTIC Science & Technology

    2012-10-31

    Component 2 – Designing New Methods for Visualizing Text in Spatial Contexts. Alexander Savelyev, Scott Pezanowski, Anthony C. Robinson, and Alan M... GeoVISTA Center, Penn State University. Contract number: W9132V-11-P-0010.

  4. Computational and design methods for advanced imaging

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.

    This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active illumination depth sensing modality, while the second part discusses a passive illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage this method permits is the ability for illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques of raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

  5. Medicinal Chemistry Projects Requiring Imaginative Structure-Based Drug Design Methods.

    PubMed

    Moitessier, Nicolas; Pottel, Joshua; Therrien, Eric; Englebienne, Pablo; Liu, Zhaomin; Tomberg, Anna; Corbeil, Christopher R

    2016-09-20

    Computational methods for docking small molecules to proteins are prominent in drug discovery. There are hundreds, if not thousands, of documented examples, and several pertinent cases within our research program. Fifteen years ago, our first docking-guided drug design project yielded nanomolar metalloproteinase inhibitors and illustrated the potential of structure-based drug design. Subsequent applications of docking programs to the design of integrin antagonists, BACE-1 inhibitors, and aminoglycosides binding to bacterial RNA demonstrated that available docking programs needed significant improvement. At that time, docking programs primarily considered flexible ligands and rigid proteins. We demonstrated that accounting for protein flexibility, employing displaceable water molecules, and using ligand-based pharmacophores improved the docking accuracy of existing methods, enabling the design of bioactive molecules. The success prompted the development of our own program, Fitted, implementing all of these aspects. The primary motivation has always been to respond to the needs of drug design studies; the majority of the concepts behind the evolution of Fitted are rooted in medicinal chemistry projects and collaborations. Several examples follow: (1) Searching for HDAC inhibitors led us to develop methods considering drug-zinc coordination and its effect on the pKa of surrounding residues. (2) Targeting covalent prolyl oligopeptidase (POP) inhibitors prompted an update to Fitted to identify reactive groups and form bonds with a given residue (e.g., a catalytic residue) when the geometry allows it. Fitted, the first fully automated covalent docking program, was successfully applied to the discovery of four new classes of covalent POP inhibitors. As a result, efficient stereoselective syntheses of a few screening hits were prioritized rather than synthesizing large chemical libraries, yielding nanomolar inhibitors. 
(3) In order to study the metabolism of POP inhibitors by

  6. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing the statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS), and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.
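
As a simplified sketch of the core resampling idea, the following implements an unweighted finite population Bayesian bootstrap over an IID sample. The paper's contribution is precisely the extra machinery that undoes stratification, clustering, and unequal selection probabilities, which this sketch omits:

```python
import random

def bayesian_bootstrap_population(sample, pop_size, seed=1):
    """Draw one synthetic population from an observed sample.

    The Bayesian bootstrap places Dirichlet(1, ..., 1) posterior
    probabilities on the observed units (generated here by normalizing
    standard gamma draws) and then resamples pop_size units with those
    probabilities. Repeating this yields draws from a posterior
    predictive distribution over populations.
    """
    rng = random.Random(seed)
    g = [rng.gammavariate(1.0, 1.0) for _ in sample]
    total = sum(g)
    probs = [x / total for x in g]
    return rng.choices(sample, weights=probs, k=pop_size)
```

Each call produces one synthetic population; analyses are then run on many such populations and combined, as in multiple imputation.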

  7. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing the statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS), and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs. PMID:29200608

  8. Improvements in surface singularity analysis and design methods. [applicable to airfoils]

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1979-01-01

    The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.

  9. Designing stellarator coils by a modified Newton method using FOCUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
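
    The core numerical idea of this abstract (a Newton iteration whose Hessian is made positive definite before factorization) can be sketched generically as follows. This is a toy modified-Newton demonstration on the Rosenbrock function, not the FOCUS implementation, and the diagonal-shift strategy is a simplification of a full modified Cholesky factorization.

```python
import numpy as np

def modified_newton_step(g, H, tau0=1e-3, beta=10.0):
    """One modified-Newton step: add tau*I to the Hessian until a Cholesky
    factorization succeeds, then solve (H + tau*I) p = -g by two triangular
    solves rather than forming an explicit inverse."""
    tau = 0.0
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(len(g)))
            break
        except np.linalg.LinAlgError:
            tau = max(beta * tau, tau0)   # simple diagonal modification
    return np.linalg.solve(L.T, np.linalg.solve(L, -g))

def minimize(f, grad, hess, x, iters=100):
    for _ in range(iters):
        p = modified_newton_step(grad(x), hess(x))
        t = 1.0
        while f(x + t * p) > f(x) and t > 1e-12:  # crude backtracking
            t *= 0.5
        x = x + t * p
    return x

# toy test problem: the Rosenbrock function, minimum at (1, 1)
f = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
grad = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
                           200 * (v[1] - v[0]**2)])
hess = lambda v: np.array([[2 - 400 * v[1] + 1200 * v[0]**2, -400 * v[0]],
                           [-400 * v[0], 200.0]])
x_opt = minimize(f, grad, hess, np.array([-1.2, 1.0]))
```

The appeal of second-order information, as the abstract notes, is that near the solution the full step (t = 1) is accepted and convergence becomes quadratic.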

  10. Designing stellarator coils by a modified Newton method using FOCUS

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-06-01

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  11. Designing stellarator coils by a modified Newton method using FOCUS

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...

    2018-03-22

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  12. Treatment of Early-Onset Schizophrenia Spectrum Disorders (TEOSS): Rationale, Design, and Methods

    ERIC Educational Resources Information Center

    McClellan, Jon; Sikich, Linmarie; Findling, Robert L.; Frazier, Jean A.; Vitiello, Benedetto; Hlastala, Stefanie A.; Williams, Emily; Ambler, Denisse; Hunt-Harrison, Tyehimba; Maloney, Ann E.; Ritz, Louise; Anderson, Robert; Hamer, Robert M.; Lieberman, Jeffrey A.

    2007-01-01

    Objective: The Treatment of Early Onset Schizophrenia Spectrum Disorders Study is a publicly funded clinical trial designed to compare the therapeutic benefits, safety, and tolerability of risperidone, olanzapine, and molindone in youths with early-onset schizophrenia spectrum disorders. The rationale, design, and methods of the Treatment of Early…

  13. Aircraft directional stability and vertical tail design: A review of semi-empirical methods

    NASA Astrophysics Data System (ADS)

    Ciliberti, Danilo; Della Vecchia, Pierluigi; Nicolosi, Fabrizio; De Marco, Agostino

    2017-11-01

    Aircraft directional stability and control are related to vertical tail design. The safety, performance, and flight qualities of an aircraft also depend on a correct empennage sizing. Specifically, the vertical tail is responsible for the aircraft yaw stability and control. If these characteristics are not well balanced, the entire aircraft design may fail. Stability and control are often evaluated, especially in the preliminary design phase, with semi-empirical methods, which are based on the results of experimental investigations performed in the past decades, and occasionally are merged with data provided by theoretical assumptions. This paper reviews the standard semi-empirical methods usually applied in the estimation of airplane directional stability derivatives in preliminary design, highlighting the advantages and drawbacks of these approaches that were developed from wind tunnel tests performed mainly on fighter airplane configurations of the first decades of the past century, and discussing their applicability on current transport aircraft configurations. Recent investigations made by the authors have shown the limit of these methods, proving the existence of aerodynamic interference effects in sideslip conditions which are not adequately considered in classical formulations. The article continues with a concise review of the numerical methods for aerodynamics and their applicability in aircraft design, highlighting how Reynolds-Averaged Navier-Stokes (RANS) solvers are well-suited to attain reliable results in attached flow conditions, with reasonable computational times. From the results of RANS simulations on a modular model of a representative regional turboprop airplane layout, the authors have developed a modern method to evaluate the vertical tail and fuselage contributions to aircraft directional stability. The investigation on the modular model has permitted an effective analysis of the aerodynamic interference effects by moving, changing, and

  14. Requirements controlled design: A method for discovery of discontinuous system boundaries in the requirements hyperspace

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Peter Michael

    The drive toward robust systems design, especially with respect to system affordability throughout the system life-cycle, has led to the development of several advanced design methods. While these methods have been extremely successful in satisfying the needs for which they were developed, they inherently leave a critical area unaddressed: none of them fully considers the effect of requirements on the selection of solution systems. The goal of all current design methodologies is to bring knowledge forward in the design process to the regions where more design freedom is available and design changes cost less. Therefore, it seems reasonable to consider the point in the design process where the greatest restrictions are placed on the final design: the point at which the system-level requirements are set. Historically, the requirements have been treated as something handed down from above, yet neither the customer nor the solution provider completely understands all of the options available in the broader requirements space. A method that provided the ability to understand the full scope of the requirements space would allow a better comparison of potential solution systems with respect to both current and potential future requirements. The key to a requirements-conscious method is to treat requirements differently from the traditional approach. The method proposed herein is known as Requirements Controlled Design (RCD). By treating the requirements as a set of variables that control the behavior of the system, instead of variables that only define the response of the system, it is possible to determine a priori what portions of the requirements space any given system is capable of satisfying. Additionally, it should be possible to identify which systems can satisfy a given set of requirements and the locations where a small change in one or more requirements poses a significant risk to a design program.

  15. Quality-by-design-based ultra high performance liquid chromatography related substances method development by establishing the proficient design space for sumatriptan and naproxen combination.

    PubMed

    Patel, Prinesh N; Karakam, Vijaya Saradhi; Samanthula, Gananadhamu; Ragampeta, Srinivas

    2015-10-01

    Quality-by-design-based methods hold a greater level of confidence against variations and greater success in method transfer. A quality-by-design-based ultra high performance liquid chromatography method was developed for the simultaneous assay of sumatriptan and naproxen along with their related substances. The first screening was performed by a fractional factorial design comprising 44 experiments over reversed-phase stationary phases, pH, and organic modifiers. The results of the screening design experiments suggested that the phenyl hexyl column and acetonitrile were the best combination. The method was further optimized for flow rate, temperature, and gradient time by an experimental design of 20 experiments, and the knowledge space was generated for the effect of the variables on the response (number of peaks with resolution ≥ 1.50). A proficient design space was generated from the knowledge space by applying Monte Carlo simulation to integrate quantitative robustness metrics during the optimization stage itself. The final method provided robust performance, which was verified and validated. Final conditions comprised a Waters® Acquity phenyl hexyl column with gradient elution using ammonium acetate (pH 4.12, 0.02 M) buffer and acetonitrile at a 0.355 mL/min flow rate and 30°C. The developed method separates all 13 analytes within a 15 min run time with fewer experiments than the traditional quality-by-testing approach.

  16. Design method of LED rear fog lamp based on freeform micro-surface reflectors

    NASA Astrophysics Data System (ADS)

    Yu, Jindong; Wu, Heng

    2017-11-01

    We propose a practical method for the design of a light-emitting diode (LED) rear fog lamp based on freeform micro-surface reflectors. The lamp consists of nine LEDs, each with a corresponding freeform micro-surface reflector. The micro-surface reflector design includes three steps. An initial freeform reflector is first built based on the light energy maps. The micro-surface reflector is then constructed on the basis of the initial one. Finally, a two-step method is designed to optimize the micro-surface reflector. With the proposed method, a module is designed, and an LCW DURIS E5 LED source whose emitting surface is 5.7 mm × 3.0 mm is adopted for simulation. A prototype is also assembled and fabricated to verify the real performance. Both the simulation and experimental results demonstrate that the luminous intensity distribution can well fulfill the requirements of the ECE No. 38 regulation. Furthermore, more than 79% of the energy can be saved compared with rear fog lamps using conventional sources.

  17. Design and Optimization Method of a Two-Disk Rotor System

    NASA Astrophysics Data System (ADS)

    Huang, Jingjing; Zheng, Longxi; Mei, Qing

    2016-04-01

    An integrated analytical method based on the multidisciplinary optimization software Isight and the general finite element software ANSYS is proposed in this paper. First, a two-disk rotor system was established, and its modes, harmonic response, and transient response under acceleration were analyzed with ANSYS to obtain the dynamic characteristics of the system. On this basis, the two-disk rotor model was integrated into the multidisciplinary design optimization software Isight. According to the design of experiments (DOE) and the dynamic characteristics, the optimization variables, objectives, and constraints were confirmed. After that, the multi-objective design optimization of the transient process was carried out with three different global optimization algorithms: the Evolutionary Optimization Algorithm, the Multi-Island Genetic Algorithm, and the Pointer Automatic Optimizer. The optimum position of the two-disk rotor system was obtained under the specified constraints, and the accuracy and computational cost of the different optimization algorithms were compared. The optimization results indicated that the rotor vibration reached its minimum value while the design requirements were met, and that multidisciplinary design optimization improves design efficiency and quality, providing a reference for improving the design efficiency and reliability of aero-engine rotors.

  18. An improved design method based on polyphase components for digital FIR filters

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No

    2017-11-01

    This paper presents an efficient design method for digital finite impulse response (FIR) filters based on polyphase components and swarm optimisation techniques (SOTs). The design problem is formulated as the mean squared error between the actual and ideal responses in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at specified frequencies, fractional derivative constraints (FDCs) are applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with two well-proven swarm optimisation techniques, particle swarm optimisation and the artificial bee colony algorithm, is made. The proposed method is evaluated using several important filter attributes, and the comparative study confirms its effectiveness for FIR filter design.
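
    For context, the mean-squared-error formulation described in this abstract reduces, for a linear-phase filter on a fixed frequency grid, to a least-squares problem that can be solved in closed form. The sketch below solves that simpler problem (no swarm optimisation, no polyphase decomposition, no fractional derivative constraints); the tap count and cutoff are illustrative.

```python
import numpy as np

def ls_fir_lowpass(numtaps, cutoff, grid=512):
    """Least-squares linear-phase (type I) FIR lowpass: minimize the mean
    squared error between the filter's amplitude response and an ideal
    brick-wall response on a dense frequency grid."""
    w = np.linspace(0, np.pi, grid)                # frequency grid
    desired = (w <= cutoff * np.pi).astype(float)  # ideal brick-wall response
    M = (numtaps - 1) // 2
    # amplitude response A(w) = a0 + sum_k a_k cos(k w)
    C = np.cos(np.outer(w, np.arange(M + 1)))
    a, *_ = np.linalg.lstsq(C, desired, rcond=None)
    # fold the cosine coefficients into a symmetric impulse response
    h = np.concatenate([a[:0:-1] / 2, [a[0]], a[1:] / 2])
    return h

h = ls_fir_lowpass(numtaps=31, cutoff=0.3)
```

Swarm methods like those in the paper become attractive when extra constraints (such as the FDCs) make the objective non-quadratic, so a closed-form solve like this no longer applies.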

  19. A minimum cost tolerance allocation method for rocket engines and robust rocket engine design

    NASA Technical Reports Server (NTRS)

    Gerth, Richard J.

    1993-01-01

    Rocket engine design follows three phases: systems design, parameter design, and tolerance design. Systems design and parameter design are most effectively conducted in a concurrent engineering (CE) environment that utilizes methods such as Quality Function Deployment and Taguchi methods. However, tolerance allocation remains an art driven by experience, handbooks, and rules of thumb, so it was desirable to develop an optimization approach to tolerancing. The case-study engine was the STME gas generator cycle. The design of the major components had been completed, and the functional relationship between the component tolerances and system performance had been computed using the Generic Power Balance model. The system performance nominals (thrust, MR, and Isp) and tolerances were already specified, as was an initial set of component tolerances. The question was whether there existed an optimal combination of tolerances that would result in the minimum cost without any degradation in system performance.
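
    A minimal worked example of minimum-cost tolerance allocation, under the common (assumed) reciprocal cost model c_i(t_i) = a_i / t_i and a root-sum-square stack-up constraint, is sketched below; the cost model and coefficients are illustrative, not those of the STME study.

```python
import numpy as np

def allocate_tolerances(cost_coeffs, assembly_tol):
    """Minimum-cost tolerance allocation for the reciprocal cost model
    c_i(t_i) = a_i / t_i under the root-sum-square stack-up constraint
    sqrt(sum t_i^2) = T.  The Lagrange conditions -a_i/t_i^2 + 2*lam*t_i = 0
    give t_i proportional to a_i**(1/3); scaling then meets the stack-up."""
    a = np.asarray(cost_coeffs, dtype=float)
    t = a ** (1.0 / 3.0)
    return t * assembly_tol / np.sqrt((t ** 2).sum())

# three components with increasingly expensive tight tolerances
tols = allocate_tolerances([1.0, 8.0, 27.0], assembly_tol=0.1)
```

The closed form shows the qualitative rule of thumb: components whose cost rises steeply with tightening (larger a_i) are assigned the looser tolerances.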

  20. The "neutron channel design"—A method for gaining the desired neutrons

    NASA Astrophysics Data System (ADS)

    Hu, G.; Hu, H. S.; Wang, S.; Pan, Z. H.; Jia, Q. G.; Yan, M. F.

    2016-12-01

    Neutrons with desired parameters can be obtained after the initial neutrons penetrate the various structures and components of a material. A novel method, the "neutron channel design", is proposed in this investigation for obtaining the desired neutrons. It is established by employing a genetic algorithm (GA) combined with Monte Carlo software. The method is verified by obtaining 0.01 eV to 1.0 eV neutrons from the Compact Accelerator-driven Neutron Source (CANS). A one-layer polyethylene (PE) moderator was designed and installed behind the beryllium target in CANS, and simulations and an experiment for detecting the neutrons were carried out. The neutron spectrum at 500 cm from the PE moderator was simulated with the MCNP and PHITS software, and the counts of 0.01 eV to 1.0 eV neutrons were simulated by MCNP and measured by a thermal neutron detector in the experiment. These data were compared and analyzed. The method is then applied to designing a complex structure of PE and a composite material consisting of PE, lead, and zirconium dioxide.
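
    The GA-plus-Monte-Carlo loop described in this abstract can be sketched generically as below, with a toy analytic fitness standing in for an MCNP/PHITS transport run; the assumed optimum thickness, the GA operators, and the parameter range are all illustrative.

```python
import random

def toy_fitness(thickness_cm):
    # stand-in for an MCNP/PHITS run; assumes (for illustration only) that
    # the 0.01-1.0 eV neutron count peaks at a moderator thickness of 4 cm
    return -(thickness_cm - 4.0) ** 2

def ga_optimize(fitness, lo=0.0, hi=10.0, pop_size=20, gens=40, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)       # pick two parents
            child = 0.5 * (a + b)             # crossover: midpoint
            child += rng.gauss(0.0, 0.3)      # mutation: small Gaussian step
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return max(pop, key=fitness)

best_thickness = ga_optimize(toy_fitness)
```

In the real workflow each fitness evaluation is a full Monte Carlo transport simulation, which is why the GA's sample efficiency matters.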

  1. TUNNEL LINING DESIGN METHOD BY FRAME STRUCTURE ANALYSIS USING GROUND REACTION CURVE

    NASA Astrophysics Data System (ADS)

    Sugimoto, Mitsutaka; Sramoon, Aphichat; Okazaki, Mari

    Both the NATM and the shield tunnelling method can be applied to the Diluvial and Neogene deposits on which the mega-cities of Japan are located. Since the lining design methods for the two tunnelling methods differ considerably, a unified concept for tunnel lining design is desirable. Therefore, in this research, a frame structure analysis model for tunnel lining design using the ground reaction curve was developed, which can take into account the earth pressure due to excavated-surface displacement toward the active side, including the effect of ground self-stabilization, as well as the excavated-surface displacement before lining installation. Based on the developed model, a parameter study was carried out taking the coefficient of subgrade reaction and the grouting rate as parameters, and the earth pressure acting on the lining measured at a site was compared with that calculated by the developed model and by the conventional model. As a result, it was confirmed that the developed model can represent the earth pressure acting on the lining, the lining displacement, and the lining sectional force for ground ranging from soft to stiff.

  2. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
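
    The role of sensitivity factors described in this abstract can be illustrated with a toy Monte Carlo reliability model: tightening the scatter of the variable with the larger sensitivity coefficient reduces the failure probability far more than tightening the other. The linear response model, coefficients, and limit below are assumptions for illustration only, not the paper's wing model.

```python
import numpy as np

rng = np.random.default_rng(7)

def failure_probability(limit, sigma_fiber, sigma_geom, n=1_000_000):
    """Toy Monte Carlo reliability estimate: the actuated tip angle is
    modeled as a linear function of two normalized random variables, with
    the first (a material-property stand-in) given the larger sensitivity."""
    fiber = rng.normal(1.0, sigma_fiber, n)
    geom = rng.normal(1.0, sigma_geom, n)
    tip_angle = 10.0 * fiber + 2.0 * geom     # sensitivity 10 vs. 2
    return float(np.mean(tip_angle > limit))  # probability of exceeding limit

p_base  = failure_probability(14.0, 0.10, 0.10)
p_fiber = failure_probability(14.0, 0.05, 0.10)  # tighten the sensitive variable
p_geom  = failure_probability(14.0, 0.10, 0.05)  # tighten the insensitive one
```

Comparing p_fiber with p_geom mirrors the paper's point that scatter reduction should be targeted at the variable with the largest (absolute) sensitivity factor.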

  3. Co-Designing and Co-Teaching Graduate Qualitative Methods: An Innovative Ethnographic Workshop Model

    ERIC Educational Resources Information Center

    Cordner, Alissa; Klein, Peter T.; Baiocchi, Gianpaolo

    2012-01-01

    This article describes an innovative collaboration between graduate students and a faculty member to co-design and co-teach a graduate-level workshop-style qualitative methods course. The goal of co-designing and co-teaching the course was to involve advanced graduate students in all aspects of designing a syllabus and leading class discussions in…

  4. Novel ergonomic postural assessment method (NERPA) using product-process computer aided engineering for ergonomic workplace design.

    PubMed

    Sanchez-Lite, Alberto; Garcia, Manuel; Domingo, Rosario; Angel Sebastian, Miguel

    2013-01-01

    Musculoskeletal disorders (MSDs) that result from poor ergonomic design are one of the occupational disorders of greatest concern in the industrial sector. A key advantage in the primary design phase is to focus on an assessment method that detects and evaluates the potential risks of such physical injuries to the operator. The assessment method improves process design by identifying potential ergonomic improvements from various design alternatives or activities undertaken as part of the cycle of continuous improvement throughout the phases of the product life cycle. This paper presents a novel ergonomic postural assessment method (NERPA) fit for product-process design, developed with the help of a digital human model together with a 3D CAD tool widely used in the aeronautic and automotive industries. The power of 3D visualization and the possibility of studying the actual assembly sequence in a virtual environment allow the functional performance of the parts to be addressed, and such tools can also provide an ergonomic workstation design together with a competitive advantage in the assembly process. The method was used in the design of six production lines, studying 240 manual assembly operations and improving 21 of them. This study demonstrated the proposed method's usefulness and found statistically significant differences between the evaluations of the proposed method and the widely used Rapid Upper Limb Assessment (RULA) method.

  5. Design Methods and Practices for Fault Prevention and Management in Spacecraft

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.

    2005-01-01

    Integrated Systems Health Management (ISHM) is intended to become a critical capability for all space, lunar and planetary exploration vehicles and systems at NASA. Monitoring and managing the health state of diverse components, subsystems, and systems is a difficult task that will become more challenging when implemented for long-term, evolving deployments. A key technical challenge will be to ensure that the ISHM technologies are reliable, effective, and low cost, resulting in turn in safe, reliable, and affordable missions. To ensure safety and reliability, ISHM functionality, decisions and knowledge have to be incorporated into the product lifecycle as early as possible, and ISHM must be considered as an essential element of models developed and used in various stages during system design. During early stage design, many decisions and tasks are still open, including sensor and measurement point selection, modeling and model-checking, diagnosis, signature and data fusion schemes, presenting the best opportunity to catch and prevent potential failures and anomalies in a cost-effective way. Using appropriate formal methods during early design, the design teams can systematically explore risks without committing to design decisions too early. However, the nature of ISHM knowledge and data is detailed, relying on high-fidelity, detailed models, whereas the earlier stages of the product lifecycle utilize low-fidelity, high-level models of systems and their functionality. We currently lack the tools and processes necessary for integrating ISHM into the vehicle system/subsystem design. As a result, most existing ISHM-like technologies are retrofits that were done after the system design was completed. It is very expensive, and sometimes futile, to retrofit a system health management capability into existing systems. Last-minute retrofits result in unreliable systems, ineffective solutions, and excessive costs (e.g., Space Shuttle TPS monitoring which was considered

  6. Comparison of measured efficiencies of nine turbine designs with efficiencies predicted by two empirical methods

    NASA Technical Reports Server (NTRS)

    English, Robert E; Cavicchi, Richard H

    1951-01-01

    The empirical methods of Ainley and of Kochendorfer and Nettles were used to predict the performance of nine turbine designs, and the measured and predicted performances were compared. Appropriate values of the blade-loss parameter were determined for the method of Kochendorfer and Nettles. The measured design-point efficiencies were lower than predicted by as much as 0.09 (Ainley) and 0.07 (Kochendorfer and Nettles). For the method of Kochendorfer and Nettles, appropriate values of the blade-loss parameter ranged from 0.63 to 0.87, and the off-design performance was accurately predicted.

  7. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO for achieving a flexible environment that supports advanced optimization methods, including adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained single-objective and a constrained multi-objective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  8. Design method for multi-user workstations utilizing anthropometry and preference data.

    PubMed

    Mahoney, Joseph M; Kurczewski, Nicolas A; Froede, Erick W

    2015-01-01

    Past efforts have been made to design single-user workstations to accommodate users' anthropometric and preference distributions, but methods for designing workstations for group interaction are lacking. This paper introduces a method for sizing workstations to allow a personal work area for each user and a shared space for adjacent users. We first create a virtual population with the same anthropometric and preference distributions as an intended demographic of college-aged students. Members of the virtual population are randomly paired to test whether their extended reaches overlap while their normal reaches do not. This process is repeated in a Monte Carlo simulation to estimate the total percentage of groups in the population that will be accommodated by a given workstation size. We apply our method to two test cases: in the first, we size polygonal workstations for two populations, and in the second, we dimension circular workstations for different group sizes.
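
    A stripped-down version of the Monte Carlo accommodation test described here might look like the following; the reach distributions, the one-dimensional facing-pair geometry, and the workstation width are invented for illustration and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)

def accommodation_rate(width_cm, n_pairs=100_000):
    """Estimate the fraction of random user pairs, facing each other across
    a workstation of the given width, whose extended reaches overlap (a
    shared space exists) while their normal reaches do not (personal space
    is preserved)."""
    # toy anthropometry in cm: normal reach plus an extended-reach margin
    normal = rng.normal(40.0, 4.0, size=(n_pairs, 2))
    extended = normal + rng.normal(20.0, 3.0, size=(n_pairs, 2))
    shared = extended.sum(axis=1) > width_cm    # extended reaches overlap
    personal = normal.sum(axis=1) < width_cm    # normal reaches do not meet
    return float(np.mean(shared & personal))

rate = accommodation_rate(width_cm=95.0)
```

Sweeping width_cm over a range and picking the width with the highest rate is the same sizing logic the paper applies to its polygonal and circular workstations.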

  9. An experimental design method leading to chemical Turing patterns.

    PubMed

    Horváth, Judit; Szalai, István; De Kepper, Patrick

    2009-05-08

    Chemical reaction-diffusion patterns often serve as prototypes for pattern formation in living systems, but only two isothermal single-phase reaction systems have produced sustained stationary reaction-diffusion patterns so far. We designed an experimental method to search for additional systems on the basis of three steps: (i) generate spatial bistability by operating autoactivated reactions in open spatial reactors; (ii) use an independent negative-feedback species to produce spatiotemporal oscillations; and (iii) induce a space-scale separation of the activatory and inhibitory processes with a low-mobility complexing agent. We successfully applied this method to a hydrogen-ion autoactivated reaction, the thiourea-iodate-sulfite (TuIS) reaction, and notably produced stationary hexagonal arrays of spots and parallel stripes of pH patterns, attributed to a Turing bifurcation. This method could be extended to biochemical reactions.

  10. Using Backward Design in Education Research: A Research Methods Essay †

    PubMed Central

    Jensen, Jamie L.; Bailey, Elizabeth G.; Kummer, Tyler A.; Weber, K. Scott

    2017-01-01

    Education research within the STEM disciplines applies a scholarly approach to teaching and learning, with the intent of better understanding how people learn and of improving pedagogy at the undergraduate level. Most of the professionals practicing in this field have ‘crossed over’ from other disciplinary fields and thus have faced challenges in becoming experts in a new discipline. In this article, we offer a novel framework for approaching education research design called Backward Design in Education Research. It is patterned on backward curricular design and provides a three-step, systematic approach to designing education projects: 1) Define a research question that leads to a testable causal hypothesis based on a theoretical rationale; 2) Choose or design the assessment instruments to test the research hypothesis; and 3) Develop an experimental protocol that will be effective in testing the research hypothesis. This approach provides a systematic method to develop and carry out evidence-based research design. PMID:29854045

  11. A simulation-based probabilistic design method for arctic sea transport systems

    NASA Astrophysics Data System (ADS)

    Bergström, Martin; Erikstad, Stein Ove; Ehlers, Sören

    2016-12-01

    When designing an arctic cargo ship, it is necessary to consider multiple stochastic factors. This paper evaluates the merits of a simulation-based probabilistic design method specifically developed to deal with this challenge. The outcome of the paper indicates that the incorporation of simulations and probabilistic design parameters into the design process enables more informed design decisions. For instance, it enables the assessment of the stochastic transport capacity of an arctic ship, as well as of its long-term ice exposure that can be used to determine an appropriate level of ice-strengthening. The outcome of the paper also indicates that significant gains in transport system cost-efficiency can be obtained by extending the boundaries of the design task beyond the individual vessel. In the case of industrial shipping, this allows for instance the consideration of port-based cargo storage facilities allowing for temporary shortages in transport capacity and thus a reduction in the required fleet size / ship capacity.
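
    The trade-off described here between port-based storage and fleet transport margin can be illustrated with a toy simulation sketch; the delivery distribution, demand normalization, and storage capacities are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

def unmet_fraction(delivered, storage_cap, demand=1.0):
    """Fraction of demand that goes unmet when stochastic per-period fleet
    deliveries are buffered by a port-side cargo store of given capacity."""
    store, unmet = 0.0, 0.0
    for d in delivered:
        store = min(store + d, storage_cap)   # deliveries fill the store
        served = min(store, demand)           # demand is drawn from the store
        unmet += demand - served
        store -= served
    return unmet / (len(delivered) * demand)

# per-period deliveries: mean 1.1x demand, but highly variable (ice conditions)
delivered = rng.gamma(shape=4.0, scale=0.275, size=5000)
u_small = unmet_fraction(delivered, storage_cap=1.0)
u_large = unmet_fraction(delivered, storage_cap=5.0)
```

Because the same delivery sequence is replayed against both capacities, the comparison isolates the buffering effect: a larger store absorbs delivery shortfalls that a just-in-time system would pass on as unmet demand, which is the mechanism behind the fleet-size reduction the abstract mentions.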

  12. FINDING A METHOD FOR THE MADNESS: A COMPARATIVE ANALYSIS OF STRATEGIC DESIGN METHODOLOGIES

    DTIC Science & Technology

    2017-06-01

    This thesis develops a comparative model for strategic design methodologies, focusing on the primary elements of vision, time, process, communication and collaboration, and risk assessment. My analysis dissects and compares three potential design methodologies, including net assessment, scenarios and…

  13. Design of a rotary dielectric elastomer actuator using a topology optimization method based on pairs of curves

    NASA Astrophysics Data System (ADS)

    Wang, Nianfeng; Guo, Hao; Chen, Bicheng; Cui, Chaoyu; Zhang, Xianmin

    2018-05-01

    Dielectric elastomers (DE), known as electromechanical transducers, have been widely used in the fields of sensors, generators, actuators, and energy harvesting for decades. A large number of DE actuators, including bending actuators, linear actuators, and rotational actuators, have been designed using experience-based design methods. This paper proposes a new method for the design of DE actuators using a topology optimization method based on pairs of curves. First, theoretical modeling and optimization design are discussed, after which a rotary dielectric elastomer actuator is designed using this optimization method. Finally, experiments and comparisons between several DE actuators are made to verify the optimized result.

  14. A Systematic Method of Integrating BIM and Sensor Technology for Sustainable Construction Design

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Deng, Zhiyu

    2017-10-01

    Building Information Modeling (BIM) has received much attention in the construction field, and sensor technology has been applied to construction data collection. This paper develops a method to integrate BIM and sensor technology for sustainable construction design. A brief literature review was conducted to clarify the current state of development of BIM and sensor technology; a systematic method for integrating BIM and sensor technology to realize sustainable construction design was then put forward; finally, a brief discussion and conclusion are given.

  15. Inward-Turning Streamline-Traced Inlet Design Method for Low-Boom, Low-Drag Applications

    NASA Technical Reports Server (NTRS)

    Otto, Samuel; Trefny, Charles J.; Slater, John W.

    2015-01-01

    A new design method for inward-turning, streamline-traced inlets is presented. The resulting designs are intended for moderately supersonic, low-drag, low-boom applications such as NASA's proposed low-boom flight demonstration aircraft. A critical feature of these designs is an internal cowl lip angle that allows little or no flow turning on the outer nacelle. Previous methods using conical-flow Busemann parent flowfields have simply truncated, or otherwise modified, the streamline-traced contours to include this internal cowl angle. Such modifications disrupt the parent flowfield, reducing inlet performance and flow uniformity. The method presented herein merges a conical flowfield that includes a leading shock with a truncated Busemann flowfield in a manner that minimizes unwanted interactions. A leading internal cowl angle is now inherent in the parent flowfield, and inlet contours traced from this flowfield retain its high performance and good flow uniformity. CFD analysis of a candidate inlet design is presented that verifies the design technique and reveals a starting issue with the basic geometry. A minor modification to the cowl lip region is shown to eliminate this phenomenon, thereby allowing starting and a smooth transition to sub-critical operation as back-pressure is increased. An inlet critical-point total pressure recovery of 96 percent is achieved based on CFD results for a Mach 1.7 freestream design. Correction for boundary-layer displacement thickness and sizing for a given engine airflow requirement are also discussed.

  16. Ada Software Design Methods Formulation.

    DTIC Science & Technology

    1982-10-01

    aside for one-to-one, non -judgemental discussions between SofTech and the design teams. SofTech’s role in the meetings was to address any Ada-specific...assurance 1.0 prepare version audits 1.0 monitoring contracts 1.0 library control 1.0 other development 1.0 correspondence 1.0 conduct support design ...quality assurance 2.0 Control Board update training manuals 2.0 participation 2.5 being trained 2.0 formulation of policy 2.5 functional system design

  17. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dekeyser, W., E-mail: Wouter.Dekeyser@kuleuven.be; Reiter, D.; Baelmans, M.

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.
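The key claim, that adjoint sensitivities cost two solves regardless of the number of design variables, can be illustrated on a toy problem. The sketch below is not the paper's edge-plasma model: the diagonal system `K(theta) u = f` and cost `J = sum(u)` are invented. One forward and one adjoint solve give all three sensitivities, checked here against finite differences (which would need one extra forward solve per variable).

```python
import numpy as np

# Toy stand-in for the edge-plasma solve: K(theta) u = f, K = diag(theta),
# with scalar cost J(u) = sum(u). All names are illustrative.
def forward(theta, f):
    return np.linalg.solve(np.diag(theta), f)      # one "plasma simulation"

def adjoint_grad(theta, f):
    u = forward(theta, f)                          # forward solve
    lam = np.linalg.solve(np.diag(theta).T, np.ones_like(u))  # adjoint solve
    # dK/dtheta_i = e_i e_i^T, hence dJ/dtheta_i = -lam_i * u_i
    return -lam * u

theta = np.array([2.0, 3.0, 5.0])
f = np.array([4.0, 9.0, 10.0])
grad = adjoint_grad(theta, f)          # 2 solves total, any number of parameters

eps = 1e-6                             # finite-difference cross-check
fd = np.array([(forward(theta + eps * np.eye(3)[i], f).sum()
                - forward(theta, f).sum()) / eps for i in range(3)])
```

Here `grad` equals `[-1.0, -1.0, -0.4]` analytically, since `dJ/dtheta_i = -f_i / theta_i**2`.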

  18. Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya

    2003-01-01

    The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at a Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, and fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally balance the engines to the airframe. The solution strategy included a sensitivity model and the soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables within 5-percent error. This was reduced to 1 percent by the regression method. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models have been generated for a neural network method, a regression method, and a hybrid method obtained by combining the approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited 5-percent maximum error over all parameters. Performance of the hybrid method was intermediate in comparison to the individual approximators. Error in the response variable is smaller than that shown in the figure because of a distortion scale factor. The overall performance of the approximators was considered to be satisfactory because aircraft analysis with NASA Langley Research Center's FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamic
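The surrogate ("soft model") idea can be sketched in a few lines: fit a cheap regression approximator to a handful of expensive analysis runs, then query the approximator in place of the analysis. The quadratic `analysis` function below is invented, not the FLOPS weight model; it merely stands in for an expensive solver.

```python
import numpy as np

# Hypothetical "exact analysis": gross weight as a smooth function of engine
# thrust (arbitrary units); a stand-in for a full FLOPS-style analysis.
def analysis(thrust):
    return 0.002 * thrust**2 - 1.5 * thrust + 50_000.0

# Train a quadratic regression approximator on ten analysis runs, then use it
# as a cheap "reanalyzer" in place of the expensive analysis.
t_train = np.linspace(100.0, 1000.0, 10)
surrogate = np.poly1d(np.polyfit(t_train, analysis(t_train), deg=2))

t_test = 555.0   # a thrust value not in the training set
rel_err = abs(surrogate(t_test) - analysis(t_test)) / analysis(t_test)
```

Because the toy analysis is itself quadratic, the regression follows it almost exactly, mirroring the abstract's observation that the regression method tracked the analytical solution with little error.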

  19. Design of two-dimensional zero reference codes with cross-entropy method.

    PubMed

    Chen, Jung-Chieh; Wen, Chao-Kai

    2010-06-20

    We present a cross-entropy (CE)-based method for the design of optimum two-dimensional (2D) zero reference codes (ZRCs), used to generate a zero reference signal for a grating measurement system and thus establish an absolute position, a coordinate origin, or a machine home position. In the absence of diffraction effects, the 2D ZRC design problem is known as the autocorrelation approximation. Based on the properties of the autocorrelation function, the design of the 2D ZRC is first formulated as a particular combinatorial optimization problem. The CE method is then applied to search for an optimal 2D ZRC and thus obtain the desired zero reference signal. Computer simulation results indicate that there are 15.38% and 14.29% reductions in the second maximum value for the 16x16 grating system with n(1)=64 and the 100x100 grating system with n(1)=300, respectively, where n(1) is the number of transparent pixels, compared with those of the conventional genetic algorithm.
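The abstract does not give the algorithm's details; the sketch below applies a generic cross-entropy loop to a small one-dimensional analogue of the ZRC problem: place `n1` transparent pixels in a length-`n` code so that the second maximum of the autocorrelation is as small as possible. The code sizes, CE parameters, and the penalty for violating the `n1` constraint are all invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def second_max(code):
    """Largest off-origin aperiodic autocorrelation value (lower is better)."""
    n = len(code)
    return max(int(np.dot(code[:n - k], code[k:])) for k in range(1, n))

def ce_design(n=16, n1=6, samples=200, elite=20, iters=30, smooth=0.7):
    p = np.full(n, n1 / n)                 # Bernoulli sampling probabilities
    best, best_score = None, float("inf")
    for _ in range(iters):
        pop = (rng.random((samples, n)) < p).astype(int)
        # penalize codes that do not use exactly n1 transparent pixels
        scores = np.array([second_max(c) + 100 * abs(int(c.sum()) - n1)
                           for c in pop])
        order = np.argsort(scores)
        if scores[order[0]] < best_score:
            best, best_score = pop[order[0]].copy(), scores[order[0]]
        # update sampling distribution toward the elite samples
        p = smooth * p + (1 - smooth) * pop[order[:elite]].mean(axis=0)
    return best, best_score

code, score = ce_design()
```

For these sizes a second maximum of 1 is impossible (it would require a 6-mark Golomb ruler of length at most 15, which does not exist), so 2 is the optimum the search can reach.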

  20. Trimming Line Design using New Development Method and One Step FEM

    NASA Astrophysics Data System (ADS)

    Chung, Wan-Jin; Park, Choon-Dal; Yang, Dong-yol

    2005-08-01

    In most automobile panel manufacturing, trimming is generally performed prior to flanging. Finding a feasible trimming line is crucial to obtaining an accurate edge profile after flanging. The section-based method develops the blank along section planes and finds the trimming line by generating a loop of end points. This method suffers from inaccurate results in regions with out-of-section motion. On the other hand, the simulation-based method can produce a more accurate trimming line through an iterative strategy. However, due to time limitations and the lack of information in initial die design, it is still not widely accepted in industry. In this study, a new, fast method to find a feasible trimming line is proposed. One-step FEM is used to analyze the flanging process, because the desired final shape after flanging can be defined and most strain paths in flanging are simple. When one-step FEM is used, the main obstacle is the generation of the initial guess. A robust initial-guess generation method is developed to handle badly shaped meshes, widely varying mesh sizes, and undercut parts. The new method develops a 3D triangular mesh propagationally from the final mesh onto the drawing tool surface. In order to remedy mesh distortion during development, an energy minimization technique is utilized. The trimming line is extracted from the outer boundary after the one-step FEM simulation. This method offers many benefits, since the trimming line can be obtained in the early design stage. The developed method is successfully applied to complex industrial applications such as the flanging of fender and door outer panels.

  1. Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship – Quasi-Experimental Designs

    PubMed Central

    Schweizer, Marin L.; Braun, Barbara I.; Milstone, Aaron M.

    2016-01-01

    Quasi-experimental studies evaluate the association between an intervention and an outcome using experiments in which the intervention is not randomly assigned. Quasi-experimental studies are often used to evaluate rapid responses to outbreaks or other patient safety problems requiring prompt non-randomized interventions. Quasi-experimental studies can be categorized into three major types: interrupted time series designs, designs with control groups, and designs without control groups. This methods paper highlights key considerations for quasi-experimental studies in healthcare epidemiology and antimicrobial stewardship including study design and analytic approaches to avoid selection bias and other common pitfalls of quasi-experimental studies. PMID:27267457
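Interrupted time series designs, the first type listed above, are commonly analyzed with segmented regression: fit an intercept, a pre-intervention trend, a level change at the intervention, and a slope change after it. The sketch below uses synthetic data (not from the paper) in which the intervention lowers the outcome level by 3 units by construction, and recovers that level change.

```python
import numpy as np

# Synthetic interrupted time series: 12 monthly rates before and 12 after an
# intervention that (by construction) lowers the level by 3 units.
rng = np.random.default_rng(1)
t = np.arange(24.0)
post = (t >= 12).astype(float)
rate = 10.0 + 0.1 * t - 3.0 * post + rng.normal(0.0, 0.3, 24)

# Segmented regression: intercept, pre-trend, level change, slope change
X = np.column_stack([np.ones(24), t, post, post * (t - 12)])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
level_change = beta[2]   # estimated immediate effect of the intervention
```

Separating the level-change term from the pre-existing trend is exactly what protects an interrupted time series analysis from mistaking secular improvement for an intervention effect.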

  2. Design methods for muskeg area roads

    DOT National Transportation Integrated Search

    1982-02-01

    The objective of this report was to produce a design guide related to roadway design and construction in organic terrain referred to as "Muskeg". The report covers the latest design and construction principles so that the designers, planners, builders a...

  3. [Continual improvement of quantitative analytical method development of Panax notoginseng saponins based on quality by design].

    PubMed

    Dai, Sheng-Yun; Xu, Bing; Shi, Xin-Yuan; Xu, Xiang; Sun, Ying-Qiang; Qiao, Yan-Jiang

    2017-03-01

    This study aimed to propose a continual improvement strategy based on quality by design (QbD). As an example, an ultra-high-performance liquid chromatography (UPLC) method was developed to accomplish the method transfer from HPLC to UPLC for Panax notoginseng saponins (PNS) and to achieve continual improvement of the PNS method based on QbD. A Plackett-Burman screening design and a Box-Behnken optimization design were employed to further understand the relationship between the critical method parameters (CMPs) and critical method attributes (CMAs), and a Bayesian design space was then built. The separation degree of the critical peaks (ginsenoside Rg₁ and ginsenoside Re) was over 2.0 and the analysis time was less than 17 min for a method chosen from the design space, with 20% initial acetonitrile concentration, 10 min of isocratic time, and a gradient slope of 6%·min⁻¹. Finally, the optimum method was validated by an accuracy profile. Based on the same analytical target profile (ATP), a comparison of HPLC and UPLC, including the chromatographic method, CMA identification, the CMP-CMA model, and the system suitability test (SST), indicated that the UPLC method could shorten the analysis time, improve the critical separation, and satisfy the requirements of the SST. In all, the HPLC method can be replaced by UPLC for the quantitative analysis of PNS. Copyright© by the Chinese Pharmaceutical Association.
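A Box-Behnken design of the kind used above has a simple combinatorial structure: for each pair of factors, run all four ±1 combinations while holding the remaining factors at their center level, then add replicated center runs. The generator below is a generic sketch of that construction (the three-factor, three-center-point configuration is illustrative, not taken from the paper).

```python
import itertools
import numpy as np

def box_behnken(k, center_points=3):
    """Box-Behnken design: +/-1 combinations over each factor pair, rest at 0."""
    rows = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            rows.append(row)
    rows.extend([[0] * k] * center_points)   # replicated center runs
    return np.array(rows)

design = box_behnken(3)   # 3 CMPs -> 12 edge runs + 3 center runs = 15 runs
```

The 15-run matrix for three factors is the standard size; avoiding the ±1 corners is what makes the design attractive when extreme factor combinations (e.g., steepest gradient at lowest solvent strength) are undesirable.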

  4. A Comparison of Functional Models for Use in the Function-Failure Design Method

    NASA Technical Reports Server (NTRS)

    Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.

    2006-01-01

    When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur in products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and it is therefore of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs and redesigns.
High-level and more detailed functional descriptions are derived for each failed component based

  5. Business Process Design Method Based on Business Event Model for Enterprise Information System Integration

    NASA Astrophysics Data System (ADS)

    Kobayashi, Takashi; Komoda, Norihisa

    The traditional business process design methods, of which the use case is the most typical, provide no useful framework for designing the activity sequence. Therefore, design efficiency and quality vary widely according to the designer’s experience and skill. In this paper, to solve this problem, we propose business events and their state transition model (a basic business event model) based on the language/action perspective, a result from the cognitive science domain. In business process design using this model, we decide event occurrence conditions so that all events synchronize with each other. We also propose a design pattern for deciding the event occurrence conditions (a business event improvement strategy). Lastly, we apply the business process design method based on the business event model and the business event improvement strategy to a credit card issuing process and evaluate its effect.

  6. A method of designing smartphone interface based on the extended user's mental model

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Li, Fengmin; Bian, Jiali; Pan, Juchen; Song, Song

    2017-01-01

    The user's mental model is the core guiding theory of product design, especially for practical products. The essence of a practical product is a tool that users employ to meet their needs, and the most important feature of a tool is usability. The design method based on the user's mental model provides a series of practical and feasible theoretical guidelines for improving the usability of a product according to the user's awareness of things. In this paper, we propose a method of designing smartphone interfaces based on an extended user's mental model, drawing on further research into user groups. This approach achieves personalized customization of the smartphone application interface and enhances the efficiency of application use.

  7. Refractive collimation beam shaper design and sensitivity analysis using a free-form profile construction method.

    PubMed

    Tsai, Chung-Yu

    2017-07-01

    A refractive laser beam shaper comprising two free-form profiles is presented. The profiles are designed using a free-form profile construction method such that each incident ray is directed in a certain user-specified direction or to a particular point on the target surface so as to achieve the required illumination distribution of the output beam. The validity of the proposed design method is demonstrated by means of ZEMAX simulations. The method is mathematically straightforward and easily implemented in computer code. It thus provides a convenient tool for the design and sensitivity analysis of laser beam shapers and similar optical components.

  8. An Empirical Comparison of Five Linear Equating Methods for the NEAT Design

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Mroch, Andrew A.; Kane, Michael T.; Ripkey, Douglas R.

    2009-01-01

    In this study, a database containing the responses of 40,000 candidates to 90 multiple-choice questions was used to mimic data sets for 50-item tests under the "nonequivalent groups with anchor test" (NEAT) design. Using these smaller data sets, we evaluated the performance of five linear equating methods for the NEAT design with five levels of…
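The five methods are not named in this excerpt; chained linear equating is one commonly used linear method for the NEAT design and illustrates the general shape of all of them. The sketch below equates form X to form Y by chaining two mean/standard-deviation transformations through the anchor test V. The toy score vectors are invented for illustration.

```python
import statistics as st

def linear_map(mean_from, sd_from, mean_to, sd_to):
    """Map a score so its standardized value is preserved."""
    return lambda s: mean_to + sd_to * (s - mean_from) / sd_from

def chained_linear_equate(x1, v1, y2, v2):
    """Equate form X to form Y via the anchor V: X->V (group 1), V->Y (group 2)."""
    x_to_v = linear_map(st.mean(x1), st.stdev(x1), st.mean(v1), st.stdev(v1))
    v_to_y = linear_map(st.mean(v2), st.stdev(v2), st.mean(y2), st.stdev(y2))
    return lambda x: v_to_y(x_to_v(x))

# toy data: group 1 took form X plus anchor V; group 2 took form Y plus anchor V
x1 = [30, 35, 40, 45, 50]; v1 = [12, 14, 16, 18, 20]
y2 = [28, 34, 40, 46, 52]; v2 = [11, 13, 15, 17, 19]
eq = chained_linear_equate(x1, v1, y2, v2)
```

The anchor is what makes equating possible at all under NEAT: the two groups never take the same full form, so V carries the only information about how the groups differ.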

  9. A Government/Industry Summary of the Design Analysis Methods for Vibrations (DAMVIBS) Program

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G. (Compiler)

    1993-01-01

    The NASA Langley Research Center in 1984 initiated a rotorcraft structural dynamics program, designated DAMVIBS (Design Analysis Methods for VIBrationS), with the objective of establishing the technology base needed by the rotorcraft industry for developing an advanced finite-element-based dynamics design analysis capability for vibrations. An assessment of the program showed that the DAMVIBS Program has resulted in notable technical achievements and major changes in industrial design practice, all of which have significantly advanced the industry's capability to use and rely on finite-element-based dynamics analyses during the design process.

  10. New method of design of nonimaging concentrators.

    PubMed

    Miñano, J C; González, J C

    1992-06-01

    A new method of designing nonimaging concentrators is presented and two new types of concentrators are developed. The first is an aspheric lens, and the second is a lens-mirror combination. A ray tracing of three-dimensional concentrators (with rotational symmetry) is also done, showing that the lens-mirror combination has a total transmission as high as that of the full compound parabolic concentrators, while their depth is much smaller than the classical parabolic mirror-nonimaging concentrator combinations. Another important feature of this concentrator is that the optically active surfaces are not in contact with the receiver, as occurs in other nonimaging concentrators in which the rim of the mirror coincides with the rim of the receiver.

  11. Hardware architecture design of a fast global motion estimation method

    NASA Astrophysics Data System (ADS)

    Liang, Chaobing; Sang, Hongshi; Shen, Xubang

    2015-12-01

    VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store the coordinates of the corners and the template patches, while the Gaussian pyramids of both the template and the reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3x3 patch in the template image, a 5x5 area containing the warped 3x3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based and burst-mode data access helps keep the off-chip memory bandwidth requirement at a minimum. Although patch size varies at different pyramid levels, all patches are processed in terms of 3x3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design uses 24,080 bits of on-chip memory, and for a sequence with a resolution of 352x288 at 60 Hz, the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications such as video codecs, video stabilization, and super-resolution, where real-time GME is a necessity and a minimal memory bandwidth requirement is appreciated.

  12. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    NASA Astrophysics Data System (ADS)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and can even result in failure of the structure in service. In this paper, an analytical method for the sensitivity of the shape precision and cable tensions with respect to the parameters carrying uncertainty is studied. Based on the sensitivity analysis, an optimal design procedure is proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined against those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables, and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.

  13. Scenario-based design: a method for connecting information system design with public health operations and emergency management.

    PubMed

    Reeder, Blaine; Turner, Anne M

    2011-12-01

    Responding to public health emergencies requires rapid and accurate assessment of workforce availability under adverse and changing circumstances. However, public health information systems to support resource management during both routine and emergency operations are currently lacking. We applied scenario-based design as an approach to engage public health practitioners in the creation and validation of an information design to support routine and emergency public health activities. Using semi-structured interviews we identified the information needs and activities of senior public health managers of a large municipal health department during routine and emergency operations. Interview analysis identified 25 information needs for public health operations management. The identified information needs were used in conjunction with scenario-based design to create 25 scenarios of use and a public health manager persona. Scenarios of use and persona were validated and modified based on follow-up surveys with study participants. Scenarios were used to test and gain feedback on a pilot information system. The method of scenario-based design was applied to represent the resource management needs of senior-level public health managers under routine and disaster settings. Scenario-based design can be a useful tool for engaging public health practitioners in the design process and to validate an information system design. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Multidisciplinary Design Optimization for Aeropropulsion Engines and Solid Modeling/Animation via the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The grant closure report is organized in the following chapters. Chapter 1 describes the two research areas, design optimization and solid mechanics. Ten journal publications are listed in Chapter 2. Five highlights are the subject matter of Chapter 3. CHAPTER 1. The Design Optimization Test Bed CometBoards. CHAPTER 2. Solid Mechanics: Integrated Force Method of Analysis. CHAPTER 3. Five Highlights: Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft. Neural Network and Regression Soft Model Extended for PX-300 Aircraft Engine. Engine with Regression and Neural Network Approximators Designed. Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design. Neural Network and Regression Approximations Used in Aircraft Design.

  15. Application of the finite element method in orthopedic implant design.

    PubMed

    Saha, Subrata; Roychowdhury, Amit

    2009-01-01

    The finite element method (FEM) was first introduced to the field of orthopedic biomechanics in the early 1970s to evaluate stresses in human bones. By the early 1980s, the method had become well established as a tool for basic research and design analysis. Since the late 1980s and early 1990s, FEM has also been used to study bone remodeling. Today, it is one of the most reliable simulation tools for evaluating wear, fatigue, crack propagation, and so forth, and is used in many types of preoperative testing. Since the introduction of FEM to orthopedic biomechanics, there have been rapid advances in computer processing speeds, the finite element and other numerical methods, understanding of mechanical properties of soft and hard tissues and their modeling, and image-processing techniques. In light of these advances, it is accepted today that FEM will continue to contribute significantly to further progress in the design and development of orthopedic implants, as well as in the understanding of other complex systems of the human body. In the following article, different main application areas of finite element simulation will be reviewed including total hip joint arthroplasty, followed by the knee, spine, shoulder, and elbow, respectively.

  16. Designs and Methods in School Improvement Research: A Systematic Review

    ERIC Educational Resources Information Center

    Feldhoff, Tobias; Radisch, Falk; Bischof, Linda Marie

    2016-01-01

    Purpose: The purpose of this paper is to focus on challenges faced by longitudinal quantitative analyses of school improvement processes and offers a systematic literature review of current papers that use longitudinal analyses. In this context, the authors assessed designs and methods that are used to analyze the relation between school…

  17. Sensitivity method for integrated structure/active control law design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1987-01-01

    The development of an integrated structure/active control law design methodology for aeroelastic aircraft applications is described. A short motivating introduction to aeroservoelasticity is given, along with the need for integrated structures/controls design algorithms. Three alternative approaches to the development of an integrated design method are briefly discussed with regard to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach, which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of the optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear quadratic Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman filter solution. Numerical results for a state-space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case the first wing bending natural frequency.

  18. Two Reconfigurable Flight-Control Design Methods: Robust Servomechanism and Control Allocation

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Lu, Ping; Wu, Zheng-Lu; Bahm, Cathy

    2001-01-01

    Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of classical proportional-plus-integral control to multiple-input multiple-output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll-angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
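The quadratic-programming view of control allocation (distribute a moment demand over redundant, position-limited effectors) can be sketched with a projected fixed-point iteration, in the same spirit as the fixed-point algorithm mentioned above, though this is a generic illustration rather than the paper's algorithm. The effectiveness matrix, demand vector, and limits below are invented.

```python
import numpy as np

def allocate(B, d, lo, hi, iters=500):
    """Projected fixed-point iteration for min ||B u - d||^2 with lo <= u <= hi."""
    gamma = 1.0 / np.linalg.norm(B, 2) ** 2   # step size small enough to converge
    u = np.zeros(B.shape[1])
    for _ in range(iters):
        # gradient step on the quadratic cost, then clip to effector limits
        u = np.clip(u - gamma * B.T @ (B @ u - d), lo, hi)
    return u

# two moment demands, three redundant effectors (illustrative effectiveness matrix)
B = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.8]])
d = np.array([0.6, 0.9])
u = allocate(B, d, lo=-1.0, hi=1.0)
residual = float(np.linalg.norm(B @ u - d))
```

When an actuator fails, its column of `B` is zeroed or its limits collapse, and the same iteration automatically redistributes the demand over the remaining effectors.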

  19. Formal methods in the design of Ada 1995

    NASA Technical Reports Server (NTRS)

    Guaspari, David

    1995-01-01

    Formal, mathematical methods are most useful when applied early in the design and implementation of a software system--that, at least, is the familiar refrain. I will report on a modest effort to apply formal methods at the earliest possible stage, namely, in the design of the Ada 95 programming language itself. This talk is an 'experience report' that provides brief case studies illustrating the kinds of problems we worked on, how we approached them, and the extent (if any) to which the results proved useful. It also derives some lessons and suggestions for those undertaking future projects of this kind. Ada 95 is the first revision of the standard for the Ada programming language. The revision began in 1988, when the Ada Joint Programming Office first asked the Ada Board to recommend a plan for revising the Ada standard. The first step in the revision was to solicit criticisms of Ada 83. A set of requirements for the new language standard, based on those criticisms, was published in 1990. A small design team, the Mapping Revision Team (MRT), became exclusively responsible for revising the language standard to satisfy those requirements. The MRT, from Intermetrics, is led by S. Tucker Taft. The work of the MRT was regularly subject to independent review and criticism by a committee of distinguished Reviewers and by several advisory teams--for example, the two User/Implementor teams, each consisting of an industrial user (attempting to make significant use of the new language on a realistic application) and a compiler vendor (undertaking, experimentally, to modify its current implementation in order to provide the necessary new features). One novel decision established the Language Precision Team (LPT), which investigated language proposals from a mathematical point of view. The LPT applied formal mathematical analysis to help improve the design of Ada 95 (e.g., by clarifying the language proposals) and to help promote its acceptance (e.g., by identifying a

  20. Guidance for using mixed methods design in nursing practice research.

    PubMed

    Chiang-Hanisko, Lenny; Newman, David; Dyess, Susan; Piyakong, Duangporn; Liehr, Patricia

    2016-08-01

    The mixed methods approach purposefully combines both quantitative and qualitative techniques, enabling a multi-faceted understanding of nursing phenomena. The purpose of this article is to introduce three mixed methods designs (parallel; sequential; conversion) and highlight interpretive processes that occur with the synthesis of qualitative and quantitative findings. Real-world examples of research studies conducted by the authors demonstrate the processes leading to the merger of data. The examples include research questions, data collection procedures, and analysis with a focus on synthesizing findings. Based on experience with mixed methods studies, the authors introduce two synthesis patterns (complementary; contrasting), considering application for practice and implications for research. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Synthesis of aircraft structures using integrated design and analysis methods

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Goetz, R. C.

    1978-01-01

    A systematic research program to develop and validate methods for the structural sizing of an airframe designed with composite materials and active controls is reported. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate the structural sizing and the associated active control system that are optimal with respect to a given merit function constrained by strength and aeroelasticity requirements.

  2. Design of time interval generator based on hybrid counting method

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some "off-the-shelf" TIGs can be employed, the need for a custom test system or control system makes a TIG implemented in a programmable device desirable. The feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. In this context, FPGA-based TIGs with a fine delay step are preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods to realize an integrable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its special structure is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of 11 ps and an interval range of up to 8 s.
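    The coarse/fine split behind such a hybrid counting scheme can be modeled in a few lines. This is an illustrative software sketch, not the paper's FPGA design; the clock period, delay step, and function names are all hypothetical.

```python
# Illustrative model of a hybrid-counting time interval generator: a coarse
# counter running at the system clock provides the wide range, and a fine
# tapped-delay-line step provides the resolution. Values are hypothetical.

COARSE_PERIOD_PS = 4_000   # one period of a hypothetical 250 MHz clock
FINE_STEP_PS = 11          # one tap of the delay line

def program_interval(target_ps: int) -> tuple[int, int]:
    """Split a target interval into coarse clock counts and a fine tap index."""
    coarse = target_ps // COARSE_PERIOD_PS
    fine = round((target_ps - coarse * COARSE_PERIOD_PS) / FINE_STEP_PS)
    return coarse, fine

def generated_interval(coarse: int, fine: int) -> int:
    """Interval actually produced by the two counting stages, in ps."""
    return coarse * COARSE_PERIOD_PS + fine * FINE_STEP_PS

coarse, fine = program_interval(1_000_123)
print(coarse, fine, generated_interval(coarse, fine))
```

    Splitting the target this way lets the counter supply the range and the delay line the resolution; the residual error is at most half a fine step.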

  3. Shape design of an optimal comfortable pillow based on the analytic hierarchy process method

    PubMed Central

    Liu, Shuo-Fang; Lee, Yann-Long; Liang, Jung-Chin

    2011-01-01

    Objective Few studies have analyzed the shapes of pillows. The purpose of this study was to investigate the relationship between pillow shape design and subjective comfort level for asymptomatic subjects. Methods Four basic pillow design factors were selected on the basis of a literature review and recombined into 8 configurations for ranking degrees of comfort. The data were analyzed by the analytic hierarchy process method to determine the most comfortable pillow. Results Pillow number 4 was the most comfortable pillow in terms of head, neck, shoulder, height, and overall comfort. Pillow number 4 combined the design factors of the standard, cervical, and shoulder pillows. A prototype of this pillow was developed on the basis of the study results for designing future pillow shapes. Conclusions This study investigated the comfort level of particular users and redesign features of a pillow. A deconstruction analysis would simplify the process of determining the most comfortable pillow design and aid designers in designing pillows for groups. PMID:22654680
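    The core AHP ranking step used in such a study can be sketched as follows: derive priority weights from a pairwise comparison matrix via its principal eigenvector, then check the consistency ratio. The 3x3 matrix below is hypothetical, not the study's data; the random index value is the standard one for n = 3.

```python
# Minimal analytic hierarchy process (AHP) sketch: priority weights from the
# principal eigenvector of a pairwise comparison matrix, plus a consistency
# check. The comparison values are illustrative only.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)           # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)  # consistency index
cr = ci / 0.58                        # random index RI = 0.58 for n = 3
print(w.round(3), round(cr, 3))
```

    A consistency ratio below 0.1 is the conventional threshold for accepting the judgments; the weights then rank the alternatives.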

  4. Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments (COLMeD)

    PubMed Central

    Rhoades, Seth D.

    2017-01-01

    Introduction Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses, however HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physicochemical diversity of metabolites and array of tunable analytical parameters. Objective Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using Design of Experiments (DoE). Methods We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive optimization of LC-MS metabolomics methods using design of experiments). Multivariate statistical analysis guided our decision process in the method optimizations. Results LC-MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions with a 13.3% increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05, respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physicochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8% response increase (p<0.0001) over initial conditions. Conclusions The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization as demonstrated through acylcarnitine optimization within the QqQ method. PMID:28348510

  5. The comparison of laser surface designing and pigment printing methods for the product quality

    NASA Astrophysics Data System (ADS)

    Ozguney, Arif Taner

    2007-07-01

    Developing new designs using the computer and transferring them to textile surfaces not only increases and facilitates production in a more practical manner, but also makes it possible to create identical designs. This means serial manufacturing of products at standard quality and increasing their added value. Moreover, creating textile designs with the laser also contributes to the value of the product from the consumer's point of view, because unlike other methods it causes no wearing off or deformation of the fabric's texture. In the system that was designed, a laser beam at selected wavelength and intensity was directed onto a selected textile surface, and a computer-controlled laser beam source was used to change the colour substances on the textile surface. Pigment printing is also used for designing in the textile and apparel sector; in this method, designs are transferred to the fabric manually using dyestuff. In this study, the denim fabric used for the surfacing trial was 100% cotton, with a weft count of 20 per centimeter, a warp count of 27 per centimeter, and a fabric weight of 458 g/m2. The first step was to prepare 40 denim samples, half by manual pigment printing and the other half using the laser beam. After this, some test applications were done. The tensile strength, tensile extension and some fastness values of the pieces designed with the two methods were compared according to international standards.

  6. Novel Ergonomic Postural Assessment Method (NERPA) Using Product-Process Computer Aided Engineering for Ergonomic Workplace Design

    PubMed Central

    Sanchez-Lite, Alberto; Garcia, Manuel; Domingo, Rosario; Angel Sebastian, Miguel

    2013-01-01

    Background Musculoskeletal disorders (MSDs) that result from poor ergonomic design are one of the occupational disorders of greatest concern in the industrial sector. A key advantage in the primary design phase is to focus on a method of assessment that detects and evaluates the potential risks experienced by the operative when faced with these types of physical injuries. The method of assessment will improve the process design identifying potential ergonomic improvements from various design alternatives or activities undertaken as part of the cycle of continuous improvement throughout the differing phases of the product life cycle. Methodology/Principal Findings This paper presents a novel postural assessment method (NERPA) fit for product-process design, which was developed with the help of a digital human model together with a 3D CAD tool, which is widely used in the aeronautic and automotive industries. The power of 3D visualization and the possibility of studying the actual assembly sequence in a virtual environment can allow the functional performance of the parts to be addressed. Such tools can also provide us with an ergonomic workstation design, together with a competitive advantage in the assembly process. Conclusions The method developed was used in the design of six production lines, studying 240 manual assembly operations and improving 21 of them. This study demonstrated the proposed method’s usefulness and found statistically significant differences in the evaluations of the proposed method and the widely used Rapid Upper Limb Assessment (RULA) method. PMID:23977340

  7. A semi-automatic computer-aided method for surgical template design

    NASA Astrophysics Data System (ADS)

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-01

    This paper presents a generalized integrated framework of semi-automatic surgical template design. Several algorithms were implemented including the mesh segmentation, offset surface generation, collision detection, ruled surface generation, etc., and a special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with signed scalar of vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. Ruled surface is employed to connect inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to the template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method.

  8. A semi-automatic computer-aided method for surgical template design

    PubMed Central

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-01-01

    This paper presents a generalized integrated framework of semi-automatic surgical template design. Several algorithms were implemented including the mesh segmentation, offset surface generation, collision detection, ruled surface generation, etc., and a special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with signed scalar of vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. Ruled surface is employed to connect inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to the template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method. PMID:26843434

  9. A semi-automatic computer-aided method for surgical template design.

    PubMed

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-04

    This paper presents a generalized integrated framework of semi-automatic surgical template design. Several algorithms were implemented including the mesh segmentation, offset surface generation, collision detection, ruled surface generation, etc., and a special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with signed scalar of vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. Ruled surface is employed to connect inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to the template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method.

  10. Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method

    NASA Astrophysics Data System (ADS)

    Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen

    2008-03-01

    The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
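    A minimal sketch of the binary particle swarm optimization named in the second strategy: each particle is a bit string, and velocities are mapped to bit-set probabilities through a sigmoid, following the canonical BPSO update. The toy objective (matching a target bit pattern) stands in for the structural weight that the paper evaluates with the RBF surrogate; all parameter values are illustrative.

```python
# Canonical binary PSO: velocities updated toward personal and global bests,
# then squashed by a sigmoid into the probability that each bit is set.
import math, random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for an optimal design vector

def cost(bits):
    """Toy objective: number of bits differing from the target."""
    return sum(b != t for b, t in zip(bits, TARGET))

def bpso(n_particles=20, n_bits=8, iters=60, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                # sigmoid maps velocity to the probability of the bit being 1
                pos[i][d] = 1 if random.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

print(bpso())
```

    In the paper's setting, `cost` would call the trained RBF network instead of an exact time history analysis, which is what makes the evolutionary search affordable.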

  11. Identification of material constants for piezoelectric transformers by three-dimensional, finite-element method and a design-sensitivity method.

    PubMed

    Joo, Hyun-Woo; Lee, Chang-Hwan; Rho, Jong-Seok; Jung, Hyun-Kyo

    2003-08-01

    In this paper, an inversion scheme for the piezoelectric constants of piezoelectric transformers is proposed. The impedance of piezoelectric transducers is calculated using a three-dimensional finite element method, and the validity of this calculation is confirmed experimentally. The effects of the material coefficients on piezoelectric transformers are investigated numerically. Six material coefficient variables for piezoelectric transformers were selected, and a design sensitivity method was adopted as the inversion scheme. The validity of the proposed method was confirmed by step-up ratio calculations. The proposed method is applied to the analysis of a sample piezoelectric transformer, and its resonance characteristics are obtained by a numerically combined equivalent circuit method.

  12. [Research on the designing method of a special shade guide for tooth whitening].

    PubMed

    Xu, Yingxin

    2015-10-01

    To investigate a method of designing an accurate and scientific shade guide, especially for judging the effect of tooth whitening, by statistically analyzing the colorimetric values of discolored teeth. One hundred thirty-six pictures of patients who had received the Beyond cold-light whitening treatment from February 2009 to July 2014 were analyzed, including 25 tetracycline teeth, 61 mottled-enamel teeth, and 50 yellow teeth. The colorimetric values of the discolored teeth were measured. The L* values of the shade tabs were calculated by hierarchical clustering of those of the discolored teeth. The a* and b* values of the shade tabs were the means of those observed for the discolored teeth. Accordingly, different shade guides were designed for each type of discolored teeth, and the effects were evaluated. A statistically significant difference in colorimetric values was found among the three types of discolored teeth. Compared with the Vitapan Classical shade guide, the shade guides designed through the present method were more scientific and accurate in judging the effect of tooth whitening. Moreover, the arrangement of the shade tabs was more logical, and the color difference between the shade tabs and discolored teeth was smaller. The proposed design method is theoretically feasible, although its clinical effect has yet to be proven.
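    The clustering step described above can be sketched as follows: group the L* (lightness) values of discolored teeth by hierarchical clustering and take the cluster means as the L* values of the shade tabs. The data are simulated (three well-separated lightness groups of 136 samples in total), not the study's measurements, and the choice of three tabs is illustrative.

```python
# Hierarchical clustering of simulated L* values into shade-tab levels.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
L_star = np.concatenate([rng.normal(55, 1.5, 40),   # darker group
                         rng.normal(65, 1.5, 50),   # middle group
                         rng.normal(75, 1.5, 46)])  # lighter group

Z = linkage(L_star.reshape(-1, 1), method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")     # cut the tree into 3 tabs

# Mean lightness of each cluster becomes a shade tab's L* value
tab_L = sorted(L_star[labels == c].mean() for c in np.unique(labels))
print([round(v, 1) for v in tab_L])
```

    Each tab's a* and b* values would then be set to the group means, as in the abstract.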

  13. Impact design methods for ceramic components in gas turbine engines

    NASA Technical Reports Server (NTRS)

    Song, J.; Cuccio, J.; Kington, H.

    1991-01-01

    Methods currently under development to design ceramic turbine components with improved impact resistance are presented. Two different modes of impact damage are identified and characterized, i.e., structural damage and local damage. The entire computation is incorporated into the EPIC computer code. Model capability is demonstrated by simulating instrumented plate impact and particle impact tests.

  14. Parametric design of pressure-relieving foot orthosis using statistics-based finite element method.

    PubMed

    Cheung, Jason Tak-Man; Zhang, Ming

    2008-04-01

    Custom-molded foot orthoses are frequently prescribed in routine clinical practice to prevent or treat plantar ulcers in diabetes by reducing the peak plantar pressure. However, the design and fabrication of foot orthosis vary among clinical practitioners and manufacturers. Moreover, little information about the parametric effect of different combinations of design factors is available. As an alternative to the experimental approach, therefore, computational models of the foot and footwear can provide efficient evaluations of different combinations of structural and material design factors on plantar pressure distribution. In this study, a combined finite element and Taguchi method was used to identify the sensitivity of five design factors (arch type, insole and midsole thickness, insole and midsole stiffness) of foot orthosis on peak plantar pressure relief. From the FE predictions, the custom-molded shape was found to be the most important design factor in reducing peak plantar pressure. Besides the use of an arch-conforming foot orthosis, the insole stiffness was found to be the second most important factor for peak pressure reduction. Other design factors, such as insole thickness, midsole stiffness and midsole thickness, contributed to less important roles in peak pressure reduction in the given order. The statistics-based FE method was found to be an effective approach in evaluating and optimizing the design of foot orthosis.
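    The Taguchi analysis behind such a sensitivity ranking can be sketched briefly: with a two-level L8 orthogonal array, the main effect of each design factor is the difference between the mean response at its high and low levels. The factor names follow the abstract, but the array assignment and the response values (peak plantar pressures, in kPa) are hypothetical.

```python
# Main-effects analysis on a two-level L8(2^5) orthogonal array.
import numpy as np

# Columns: arch type, insole thickness, insole stiffness,
# midsole thickness, midsole stiffness (levels coded 0/1)
L8 = np.array([[0, 0, 0, 0, 0],
               [0, 0, 0, 1, 1],
               [0, 1, 1, 0, 0],
               [0, 1, 1, 1, 1],
               [1, 0, 1, 0, 1],
               [1, 0, 1, 1, 0],
               [1, 1, 0, 0, 1],
               [1, 1, 0, 1, 0]])
# Hypothetical peak plantar pressure for each of the 8 runs (kPa)
response = np.array([301, 299, 289, 287, 258, 256, 271, 269.0])

factors = ["arch", "insole_t", "insole_k", "midsole_t", "midsole_k"]
effects = {f: response[L8[:, i] == 1].mean() - response[L8[:, i] == 0].mean()
           for i, f in enumerate(factors)}
ranked = sorted(effects, key=lambda f: abs(effects[f]), reverse=True)
print(ranked)
```

    With these illustrative numbers the arch type dominates, followed by insole stiffness, mirroring the ordering reported in the abstract.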

  15. The relation between the mass-to-light ratio and the relaxation state of globular clusters

    NASA Astrophysics Data System (ADS)

    Bianchini, P.; Sills, A.; van de Ven, G.; Sippel, A. C.

    2017-08-01

    The internal dynamics of globular clusters (GCs) is strongly affected by two-body interactions that bring the systems to a state of partial energy equipartition. Using a set of Monte Carlo cluster simulations, we investigate the role of the onset of energy equipartition in shaping the mass-to-light ratio (M/L) in GCs. Our simulations show that the M/L profiles cannot be considered constant and that their specific shape strongly depends on the dynamical age of the clusters. Dynamically younger clusters display a central peak up to M/L ≃ 25 M⊙/L⊙ caused by the retention of dark remnants; this peak flattens out for dynamically older clusters. Moreover, we find that the global values of M/L also correlate with the dynamical state of a cluster, quantified as either the number of relaxation times a system has experienced, nrel, or the equipartition parameter meq: clusters closer to full equipartition (higher nrel or lower meq) display a lower M/L. We show that the decrease of M/L is primarily driven by the dynamical ejection of dark remnants, rather than by the escape of low-mass stars. The predictions of our models are in good agreement with observations of GCs in the Milky Way and M31, indicating that differences in relaxation state alone can explain variations of M/L up to a factor of ≃3. Our characterization of M/L as a function of relaxation state is of primary relevance for the application and interpretation of dynamical models.

  16. Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments (COLMeD).

    PubMed

    Rhoades, Seth D; Weljie, Aalim M

    2016-12-01

    Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses, however HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physicochemical diversity of metabolites and array of tunable analytical parameters. Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using Design of Experiments (DoE). We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive optimization of LC-MS metabolomics methods using design of experiments). Multivariate statistical analysis guided our decision process in the method optimizations. LC-MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions with a 13.3% increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05, respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physicochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8% response increase (p<0.0001) over initial conditions. The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization as demonstrated through acylcarnitine optimization within the QqQ method.

  17. Classical Control System Design: A non-Graphical Method for Finding the Exact System Parameters

    NASA Astrophysics Data System (ADS)

    Hussein, Mohammed Tawfik

    2008-06-01

    The Root Locus method of control system design was developed in the 1940s. It is a set of rules that helps in sketching the path traced by the roots of the closed-loop characteristic equation of the system as a parameter, such as a controller gain k, is varied. The procedure provides approximate sketching guidelines, so designs of control systems using the method are not exact. This paper presents a non-graphical method for finding the exact system parameters to place a pair of complex conjugate poles on a specified damping ratio line. The overall procedure is based on the exact solution of complex equations on the PC using numerical methods.
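    The non-graphical idea can be illustrated numerically: instead of sketching the locus, solve the characteristic equation 1 + k·G(s) = 0 exactly for the gain that puts a closed-loop pole on the chosen damping-ratio line. The plant G(s) = 1/(s(s+4)) and ζ = 0.5 are a hypothetical example, not taken from the paper; for this plant the answer is known analytically (ωn = 4, k = 16), which makes the sketch easy to check.

```python
# Solve 1 + k*G(s) = 0 for the gain placing a pole on the zeta = 0.5 line.
ZETA = 0.5  # desired damping ratio

def G(s):
    """Hypothetical open-loop plant G(s) = 1 / (s*(s + 4))."""
    return 1.0 / (s * (s + 4.0))

def k_required(wn):
    """Gain from 1 + k*G(s) = 0 for a pole at s = wn*(-zeta + j*sqrt(1-zeta^2)).
    The result is complex in general and real exactly on the root locus."""
    s = wn * complex(-ZETA, (1.0 - ZETA**2) ** 0.5)
    return -1.0 / G(s)

# Bisection on wn for Im(k) = 0: the point where the locus crosses the
# damping-ratio line with a real (physically meaningful) gain.
lo, hi = 0.1, 20.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if k_required(lo).imag * k_required(mid).imag <= 0.0:
        hi = mid
    else:
        lo = mid

wn = 0.5 * (lo + hi)
k = k_required(wn).real
print(round(wn, 6), round(k, 6))  # analytic answer: wn = 4, k = 16
```

    The same angle/magnitude condition generalizes to any rational G(s); only the root-finding step changes.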

  18. Human-Automation Integration: Principle and Method for Design and Evaluation

    NASA Technical Reports Server (NTRS)

    Billman, Dorrit; Feary, Michael

    2012-01-01

    Future space missions will increasingly depend on the integration of complex engineered systems with their human operators. It is important to ensure that the systems that are designed and developed do a good job of supporting the needs of the work domain. Our research investigates methods for needs analysis. In a case study of Attitude Determination and Control Officer (ADCO) planning work, we included analysis of work products (plans for regulation of the space station) as well as work processes (tasks using current software). This allows comparing how well different designs match the structure of the work to be supported. Redesigned planning software that better matches the structure of the work was developed and experimentally assessed. The new prototype enabled substantially faster and more accurate performance in plan revision tasks. This success suggests that the approach to needs assessment and its use in design and evaluation is promising and merits investigation in future research.

  19. A bibliography on formal methods for system specification, design and validation

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.; Furchtgott, D. G.; Movaghar, A.

    1982-01-01

    Literature on the specification, design, verification, testing, and evaluation of avionics systems was surveyed, providing 655 citations. Journal papers, conference papers, and technical reports are included. Manual and computer-based methods were employed. Keywords used in the online search are listed.

  20. Improved quality-by-design compliant methodology for method development in reversed-phase liquid chromatography.

    PubMed

    Debrus, Benjamin; Guillarme, Davy; Rudaz, Serge

    2013-10-01

    A complete strategy dedicated to quality-by-design (QbD) compliant method development using design of experiments (DOE), multiple linear regression response modelling and Monte Carlo simulations for error propagation was evaluated for liquid chromatography (LC). The proposed approach includes four main steps: (i) the initial screening of column chemistry, mobile phase pH and organic modifier; (ii) the selectivity optimization through changes in gradient time and mobile phase temperature; (iii) the adaptation of column geometry to reach sufficient resolution; and (iv) the robust resolution optimization and identification of the method design space. This procedure was employed to obtain a complex chromatographic separation of 15 widely prescribed basic antipsychotic drugs. To fully automate and expedite the QbD method development procedure, short columns packed with sub-2 μm particles were employed, together with a UHPLC system possessing column and solvent selection valves. Through this example, the possibilities of the proposed QbD method development workflow were exposed and the different steps of the automated strategy were critically discussed. A baseline separation of the mixture of antipsychotic drugs was achieved with an analysis time of less than 15 min, and the robustness of the method was demonstrated simultaneously with the method development phase. Copyright © 2013 Elsevier B.V. All rights reserved.
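    The Monte Carlo error-propagation idea in step (iv) can be sketched as follows: draw the fitted response-model coefficients from their uncertainty distributions and estimate the probability that the critical-pair resolution stays acceptable at a candidate operating point. The quadratic model, coefficients, standard errors, and operating point below are all hypothetical.

```python
# Monte Carlo propagation of coefficient uncertainty into predicted resolution.
import numpy as np

rng = np.random.default_rng(0)

def resolution(gradient_time, temp, b):
    """Toy response model for critical-pair resolution Rs."""
    return (b[0] + b[1] * gradient_time + b[2] * temp
            + b[3] * gradient_time * temp)

b_hat = np.array([0.4, 0.12, 0.015, -0.002])   # fitted coefficients (made up)
b_se = np.array([0.05, 0.01, 0.004, 0.0005])   # their standard errors (made up)

draws = b_hat + b_se * rng.standard_normal((5000, 4))
rs = np.array([resolution(12.0, 30.0, b) for b in draws])

# Probability that the separation meets Rs >= 1.5 at this operating point;
# the design space keeps only points where this probability is high enough.
print(round(float((rs >= 1.5).mean()), 3))
```

    Repeating this over a grid of gradient times and temperatures maps out the robust design space rather than a single nominal optimum.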

  1. Designing Feature and Data Parallel Stochastic Coordinate Descent Method forMatrix and Tensor Factorization

    DTIC Science & Technology

    2016-05-11

    AFRL-AFOSR-JP-TR-2016-0046: Designing Feature and Data Parallel Stochastic Coordinate Descent Method for Matrix and Tensor Factorization. U Kang, Korea. Grant number FA2386.

  2. Comparison of Optimal Design Methods in Inverse Problems

    DTIC Science & Technology

    2011-05-11

    The corresponding FIM can be estimated by F̂(τ) = F̂(τ, θ̂_OLS) = (Σ̂^N(θ̂_OLS))^(-1). The asymptotic standard errors are given by SE_k(θ_0) = √((Σ^N_0)_kk), k = 1, ..., p. These standard errors are estimated in practice (when θ_0 and σ_0 are not known) by SE_k(θ̂_OLS) = √((Σ̂^N(θ̂_OLS))_kk), k = 1, ..., p, and SE_k(θ̂_boot) = √(Cov(θ̂_boot)_kk). We will compare the optimal design methods using the standard errors resulting from the optimal time points each
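    The standard-error computation described in the excerpt can be sketched for a simple OLS fit: the estimator covariance is approximated by the inverse Fisher information σ²(XᵀX)⁻¹, and SE_k is the square root of its k-th diagonal entry. The linear model, noise level, and data below are illustrative, not from the report.

```python
# Standard errors of OLS estimates from the inverse Fisher information matrix.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 50)
X = np.column_stack([np.ones_like(t), t])   # sensitivity (design) matrix
theta0 = np.array([2.0, -0.3])              # true parameters (illustrative)
y = X @ theta0 + rng.normal(0, 0.2, t.size) # noisy observations

theta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ theta_ols
sigma2 = resid @ resid / (t.size - theta0.size)  # noise variance estimate

cov = sigma2 * np.linalg.inv(X.T @ X)       # inverse-FIM covariance estimate
se = np.sqrt(np.diag(cov))                  # SE_k = sqrt of k-th diagonal
print(theta_ols.round(3), se.round(4))
```

    Optimal design then chooses the sampling times t to make these standard errors (or some scalar functional of the covariance) as small as possible.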

  3. The Research of Computer Aided Farm Machinery Designing Method Based on Ergonomics

    NASA Astrophysics Data System (ADS)

    Gao, Xiyin; Li, Xinling; Song, Qiang; Zheng, Ying

    With the development of the agricultural economy, the variety of farm machinery products increases gradually, and ergonomics issues are becoming more and more prominent. The widespread application of computer-aided machinery design makes farm machinery design intuitive, flexible and convenient. At present, because existing computer-aided ergonomics software lacks a human body database suited to farm machinery design in China, such designs show deviations in ergonomics analysis. This article proposes using the open database interface in CATIA to establish a human body database aimed at farm machinery design; reading the human body data into the ergonomics module of CATIA can produce a virtual body for practical application, and the human posture analysis and human activity analysis modules can then be used to analyze the ergonomics of farm machinery. Thus a computer-aided farm machinery design method based on ergonomics can be realized.

  4. A Method of Trajectory Design for Manned Asteroids Exploration

    NASA Astrophysics Data System (ADS)

    Gan, Q. B.; Zhang, Y.; Zhu, Z. F.; Han, W. H.; Dong, X.

    2014-11-01

    A trajectory optimization method for nuclear-propulsion manned asteroid exploration is presented. For launches between 2035 and 2065, based on the Lambert transfer orbit, the phases of departure from and return to the Earth are searched first. Then the optimal flight trajectory within the feasible regions is selected by pruning the flight sequences. Setting the nuclear propulsion flight plan as propel-coast-propel, and taking the minimal departure mass of the spacecraft as the index, the nuclear propulsion flight trajectory of each of the three phases is optimized separately using a hybrid method. With the optimized local parameters of the three phases as initial values, the global parameters are jointly optimized. Finally, the minimal departure mass trajectory design result is given.

  5. Translating Vision into Design: A Method for Conceptual Design Development

    NASA Technical Reports Server (NTRS)

    Carpenter, Joyce E.

    2003-01-01

    One of the most challenging tasks for engineers is the definition of design solutions that will satisfy high-level strategic visions and objectives. Even more challenging is the need to demonstrate how a particular design solution supports the high-level vision. This paper describes a process and set of system engineering tools that have been used at the Johnson Space Center to analyze and decompose high-level objectives for future human missions into design requirements that can be used to develop alternative concepts for vehicles, habitats, and other systems. Analysis and design studies of alternative concepts and approaches are used to develop recommendations for strategic investments in research and technology that support the NASA Integrated Space Plan. In addition to a description of system engineering tools, this paper includes a discussion of collaborative design practices for human exploration mission architecture studies used at the Johnson Space Center.

  6. A new method for designing shock-free transonic configurations

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Fung, K. Y.; Seebass, A. R.; Yu, N. J.

    1978-01-01

    A method for the design of shock free supercritical airfoils, wings, and three dimensional configurations is described. Results illustrating the procedure in two and three dimensions are given. They include modifications to part of the upper surface of an NACA 64A410 airfoil that will maintain shock free flow over a range of Mach numbers for a fixed lift coefficient, and the modifications required on part of the upper surface of a swept wing with an NACA 64A410 root section to achieve shock free flow. While the results are given for inviscid flow, the same procedures can be employed iteratively with a boundary layer calculation in order to achieve shock free viscous designs. With a shock free pressure field the boundary layer calculation will be reliable and not complicated by the difficulties of shock wave boundary layer interaction.

  7. Enhanced Molecular Dynamics Methods Applied to Drug Design Projects.

    PubMed

    Ziada, Sonia; Braka, Abdennour; Diharce, Julien; Aci-Sèche, Samia; Bonnet, Pascal

    2018-01-01

    Nobel Laureate Richard P. Feynman stated: "[…] everything that living things do can be understood in terms of jiggling and wiggling of atoms […]." The importance of computer simulations of macromolecules, which use classical mechanics principles to describe atom behavior, is widely acknowledged and nowadays, they are applied in many fields such as material sciences and drug discovery. With the increase of computing power, molecular dynamics simulations can be applied to understand biological mechanisms at realistic timescales. In this chapter, we share our computational experience providing a global view of two of the widely used enhanced molecular dynamics methods to study protein structure and dynamics through the description of their characteristics, limits and we provide some examples of their applications in drug design. We also discuss the appropriate choice of software and hardware. In a detailed practical procedure, we describe how to set up, run, and analyze two main molecular dynamics methods, the umbrella sampling (US) and the accelerated molecular dynamics (aMD) methods.

  8. A Simple and Reliable Method of Design for Standalone Photovoltaic Systems

    NASA Astrophysics Data System (ADS)

    Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.

    2017-06-01

    Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple and reliable and exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array sizes (areas), performance curves are obtained for the optimal design of an SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data, and is more reliable than conventional design using monthly average daily load and insolation.
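
    A first-cut sizing of array area and storage from daily load and insolation can be sketched as below. The formulas and efficiency values are generic textbook assumptions, not the paper's empirical correlations, and the load and insolation figures are invented.

```python
def size_sapv(daily_load_kwh, insolation_kwh_m2, autonomy_days,
              pv_eff=0.15, system_eff=0.8, depth_of_discharge=0.7):
    # Array area sized so the array replaces the average daily load on
    # an average-insolation day.
    area_m2 = daily_load_kwh / (insolation_kwh_m2 * pv_eff * system_eff)
    # Battery bank sized to ride through the specified days of autonomy.
    storage_kwh = daily_load_kwh * autonomy_days / depth_of_discharge
    return area_m2, storage_kwh

area, storage = size_sapv(daily_load_kwh=6.0, insolation_kwh_m2=5.0, autonomy_days=3)
```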

  9. Validation of published Stirling engine design methods using engine characteristics from the literature

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1980-01-01

    Four fully disclosed reference engines and five design methods are discussed. So far, the agreement between theory and experiment is about as good for the simpler calculation methods as it is for the more complicated ones, that is, within 20%. For the simpler methods, a single adjustable constant can be used to reduce the error in predicting power output and efficiency over the entire operating map to less than 10%.
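
    Fitting a single adjustable constant of this kind is a one-line least-squares problem: choose k so that k times the model prediction best matches the measurements. The engine data below are made up purely for illustration.

```python
def calibrate_constant(predicted, measured):
    # Least-squares fit of one multiplicative constant k so that
    # k * predicted ~ measured: k = sum(p*m) / sum(p*p).
    return sum(p * m for p, m in zip(predicted, measured)) / sum(p * p for p in predicted)

predicted = [100.0, 150.0, 200.0, 250.0]   # raw model output, W (invented)
measured  = [ 88.0, 135.0, 172.0, 221.0]   # bench data, W (invented)
k = calibrate_constant(predicted, measured)
errors = [abs(k * p - m) / m for p, m in zip(predicted, measured)]
```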

  10. Variational Methods in Design Optimization and Sensitivity Analysis for Two-Dimensional Euler Equations

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.

    1997-01-01

    Variational methods (VM) for sensitivity analysis are employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is treated as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a range reasonable for engineering prediction purposes, the variational methods offer a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite-difference sensitivity analysis.

  11. A survey of design methods for failure detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1975-01-01

    A number of methods for detecting abrupt changes (such as failures) in stochastic dynamical systems are surveyed. The class of linear systems is concentrated on but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.
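
    The "statistical tests on filter innovations" family of methods can be illustrated with a scalar Kalman filter whose normalized innovation squared is gated against a chi-square threshold. This is a generic textbook innovations test, not a specific method from the survey; the dynamics, noise levels, and injected bias failure are all invented.

```python
import random

def innovation_monitor(measurements, x0=0.0, p0=1.0, q=0.01, r=1.0, gate=16.0):
    # Scalar Kalman filter on a constant state (identity dynamics).
    # Flags samples whose normalized innovation squared exceeds the gate.
    x, p = x0, p0
    alarms = []
    for k, z in enumerate(measurements):
        p += q                      # predict step
        s = p + r                   # innovation covariance
        nu = z - x                  # innovation
        if nu * nu / s > gate:      # chi-square test, 1 degree of freedom
            alarms.append(k)
        g = p / s                   # Kalman gain
        x += g * nu                 # measurement update
        p *= (1 - g)
    return alarms

random.seed(0)
clean = [random.gauss(5.0, 1.0) for _ in range(50)]
faulty = clean[:25] + [z + 8.0 for z in clean[25:]]   # bias failure at k = 25
alarms = innovation_monitor(faulty, x0=5.0)
```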

  12. A survey of design methods for failure detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1975-01-01

    A number of methods for the detection of abrupt changes (such as failures) in stochastic dynamical systems were surveyed. The class of linear systems were emphasized, but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.

  13. Application of finite element method in mechanical design of automotive parts

    NASA Astrophysics Data System (ADS)

    Gu, Suohai

    2017-09-01

    As an effective numerical analysis method, the finite element method (FEM) has been widely used in mechanical design and other fields. In this paper, the development of FEM is introduced first, the specific steps of FEM applications are then illustrated, and the difficulties of FEM are summarized in detail. Finally, applications of FEM to automobile components such as wheels, steel plate springs, body frames, and shaft parts are summarized and compared with related experimental research.
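
    The basic FEM steps (element stiffness, assembly, boundary conditions, solve) can be shown on the smallest possible example: a 1-D axially loaded bar, fixed at one end with a point load at the tip. This is a textbook illustration, not an automotive model; the material and load values are arbitrary.

```python
def bar_fem(n_elems, length, E, A, tip_force):
    # 1-D bar: linear elements, fixed at x = 0, axial load at the tip.
    n = n_elems + 1
    h = length / n_elems
    k_e = E * A / h                       # element stiffness
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):              # assemble element matrices
        K[e][e] += k_e;     K[e][e + 1] -= k_e
        K[e + 1][e] -= k_e; K[e + 1][e + 1] += k_e
    F = [0.0] * n
    F[-1] = tip_force
    # Apply the fixed boundary condition at node 0 by elimination.
    K = [row[1:] for row in K[1:]]
    F = F[1:]
    m = len(F)
    for i in range(m):                    # naive Gaussian elimination
        for j in range(i + 1, m):
            f = K[j][i] / K[i][i]
            for c in range(i, m):
                K[j][c] -= f * K[i][c]
            F[j] -= f * F[i]
    u = [0.0] * m
    for i in range(m - 1, -1, -1):        # back substitution
        u[i] = (F[i] - sum(K[i][c] * u[c] for c in range(i + 1, m))) / K[i][i]
    return [0.0] + u                      # prepend the fixed node

u = bar_fem(n_elems=4, length=1.0, E=200e9, A=1e-4, tip_force=1000.0)
# Exact tip displacement for this problem is F*L/(E*A).
```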

  14. Designing patient-specific 3D printed craniofacial implants using a novel topology optimization method.

    PubMed

    Sutradhar, Alok; Park, Jaejong; Carrau, Diana; Nguyen, Tam H; Miller, Michael J; Paulino, Glaucio H

    2016-07-01

    Large craniofacial defects require efficient bone replacements which should not only provide good aesthetics but also possess stable structural function. The proposed work uses a novel multiresolution topology optimization method to achieve this task. Using a compliance minimization objective, patient-specific bone replacement shapes can be designed for different clinical cases that ensure revival of efficient load transfer mechanisms in the mid-face. In this work, four clinical cases are introduced and their respective patient-specific designs are obtained using the proposed method. The optimized designs are then virtually inserted into the defect to visually inspect their viability. Further, once a design is verified by the reconstructive surgeon, prototypes are fabricated using a 3D printer for validation. The robustness of the designs is mechanically tested by subjecting them to a physiological loading condition which mimics masticatory activity. The full-field strain results from 3D image correlation and the finite element analysis imply that the solution can survive the maximum mastication load of 120 lb. The designs also have the potential to restore the buttress system and provide structural integrity. Using the topology optimization framework to design the bone replacement shapes would give surgeons new alternatives for otherwise complicated mid-face reconstruction.

  15. Using Software Design Methods in CALL

    ERIC Educational Resources Information Center

    Ward, Monica

    2006-01-01

    The phrase "software design" is not one that arouses the interest of many CALL practitioners, particularly those from a humanities background. However, software design essentials are simply logical ways of going about designing a system. The fundamentals include modularity, anticipation of change, generality and an incremental approach. While CALL…

  16. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  17. Numerical method to determine mechanical parameters of engineering design in rock masses.

    PubMed

    Xue, Ting-He; Xiang, Yi-Qiang; Guo, Fa-Zhong

    2004-07-01

    This paper proposes a new continuity model for engineering in rock masses and a new schematic method for reporting the engineering of rock continuity. This method can be used to evaluate the mechanics of every kind of medium, and is a new way to determine the mechanical parameters used in engineering design in rock masses. In the numerical simulation, the experimental parameters of intact rock were combined with the structural properties of field rock. The experimental results for orthogonally-jointed rock are given. The results include the stress-strain curves of some rock masses, the curve of the relationship between the dimension Δ and the uniaxial compressive strength σc of these rock masses, and pictures of the failure process of some rock masses in uniaxial or triaxial tests. Application of the method to engineering design in rock masses showed its potential for engineering practice.

  18. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and are evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a system optimization problem into several subtask optimizations, which may be executed concurrently, and a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited to exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response-surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.
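
    The decomposition pattern (concurrent subtask optimizations coordinated by a system-level search) can be sketched with toy quadratic subproblems solved in closed form. This is an illustration of the decomposition idea only, not the published BLISS algorithm; the subsystem weights, targets, and grid are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def local_opt(z, a, target):
    # Closed-form local optimum of f(x) = a*(x - target)^2 + (x - z)^2
    # for a fixed value of the system-level variable z.
    x = (a * target + z) / (a + 1)
    return a * (x - target) ** 2 + (x - z) ** 2

def system_opt(subsystems, z_grid):
    # System level: for each candidate z, run the subtask optimizations
    # concurrently and keep the z with the smallest summed optimal value.
    best_z, best_val = None, float("inf")
    with ThreadPoolExecutor() as pool:
        for z in z_grid:
            total = sum(pool.map(lambda ab: local_opt(z, *ab), subsystems))
            if total < best_val:
                best_z, best_val = z, total
    return best_z, best_val

subsystems = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]  # (weight, local target)
z, val = system_opt(subsystems, [i / 10 for i in range(0, 61)])
```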

  19. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase focused on assessing the performance of these models in accurately predicting the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.
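
    The simplest candidate statistical model of this kind is an ordinary least-squares fit via the normal equations. The predictor, response, and data points below are synthetic stand-ins, not the study's actual variables or data.

```python
def fit_linear(xs, ys):
    # Ordinary least squares for y ~ b0 + b1*x via the normal equations.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b0 = (sy - b1 * sx) / n
    return b0, b1

# Synthetic "heat rate vs. angle of attack" calibration points.
alpha = [0.0, 5.0, 10.0, 15.0, 20.0]
qdot  = [10.0, 14.0, 18.0, 22.0, 26.0]   # exactly linear, for the check
b0, b1 = fit_linear(alpha, qdot)
```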

  20. Methods for comparative evaluation of propulsion system designs for supersonic aircraft

    NASA Technical Reports Server (NTRS)

    Tyson, R. M.; Mairs, R. Y.; Halferty, F. D., Jr.; Moore, B. E.; Chaloff, D.; Knudsen, A. W.

    1976-01-01

    The propulsion system comparative evaluation study was conducted to define a rapid, approximate method for evaluating the effects of propulsion system changes for an advanced supersonic cruise airplane, and to verify the approximate method by comparing its mission performance results with those from a more detailed analysis. A table look-up computer program was developed to determine nacelle drag increments for a range of parametric nacelle shapes and sizes. Aircraft sensitivities to propulsion parameters were defined. Nacelle shapes, installed weights, and installed performance were determined for four study engines selected from the NASA supersonic cruise aircraft research (SCAR) engine studies program. Both the rapid evaluation method (using sensitivities) and traditional preliminary design methods were then used to assess the four engines. The rapid method was found to compare well with the more detailed analyses.

  1. Contextualizing and assessing the social capital of seniors in congregate housing residences: study design and methods

    PubMed Central

    Moore, Spencer; Shiell, Alan; Haines, Valerie; Riley, Therese; Collier, Carrie

    2005-01-01

    Background This article discusses the study design and methods used to contextualize and assess the social capital of seniors living in congregate housing residences in Calgary, Alberta. The project is being funded as a pilot project under the Institute of Aging, Canadian Institutes for Health Research. Design/Methods Working with seniors living in 5 congregate housing residences in Calgary, the project uses a mixed-method approach to develop grounded measures of the social capital of seniors. The project integrates both qualitative and quantitative methods in a 3-phase research design: 1) qualitative, 2) quantitative, and 3) qualitative. Phase 1 uses gender-specific focus groups; phase 2 involves the administration of individual surveys that include a social network module; and phase 3 uses anomalous-case interviews. Not only does the study design allow us to develop grounded measures of social capital, but it also permits us to test how well the three methods work separately, and how well they fit together to achieve project goals. This article describes the selection of the study population, the multiple methods used in the research, and briefly discusses our conceptualization and measurement of social capital. PMID:15836784

  2. Modified method to improve the design of Petlyuk distillation columns.

    PubMed

    Zapiain-Salinas, Javier G; Barajas-Fernández, Juan; González-García, Raúl

    2014-01-01

    A response surface analysis was performed to study the effect of the composition and feeding thermal conditions of ternary mixtures on the number of theoretical stages and the energy consumption of Petlyuk columns. A modification of the pre-design algorithm was necessary for this purpose. The modified algorithm provided feasible results in 100% of the studied cases, compared with only 8.89% for the current algorithm. The proposed algorithm allowed us to attain the desired separations, regardless of the type of mixture and the operating conditions in the feed stream, something that was not possible with the traditional pre-design method. The results showed that the type of mixture had great influence on the number of stages and on energy consumption. A higher number of stages and a lower consumption of energy were attained with mixtures rich in the light component, while higher energy consumption occurred when the mixture was rich in the heavy component. The proposed strategy expands the search for an optimal design of Petlyuk columns within a feasible region, allowing us to find a feasible design that meets output specifications with low thermal loads.

  3. Modified method to improve the design of Petlyuk distillation columns

    PubMed Central

    2014-01-01

    Background A response surface analysis was performed to study the effect of the composition and feeding thermal conditions of ternary mixtures on the number of theoretical stages and the energy consumption of Petlyuk columns. A modification of the pre-design algorithm was necessary for this purpose. Results The modified algorithm provided feasible results in 100% of the studied cases, compared with only 8.89% for the current algorithm. The proposed algorithm allowed us to attain the desired separations, regardless of the type of mixture and the operating conditions in the feed stream, something that was not possible with the traditional pre-design method. The results showed that the type of mixture had great influence on the number of stages and on energy consumption. A higher number of stages and a lower consumption of energy were attained with mixtures rich in the light component, while higher energy consumption occurred when the mixture was rich in the heavy component. Conclusions The proposed strategy expands the search for an optimal design of Petlyuk columns within a feasible region, allowing us to find a feasible design that meets output specifications with low thermal loads. PMID:25061476

  4. Evaluation of Visualization Tools for Computer Network Defense Analysts: Display Design, Methods, and Results for a User Study

    DTIC Science & Technology

    2016-11-01

    Report documenting the display design, methods, and results of a user study evaluating visualization tools for computer network defense analysts, by Christopher J Garneau and Robert F Erbacher, US Army Research Laboratory, November 2016 (covering January 2013-September 2015). Approved for public release.

  5. Three dimensional finite element methods: Their role in the design of DC accelerator systems

    NASA Astrophysics Data System (ADS)

    Podaru, Nicolae C.; Gottdang, A.; Mous, D. J. W.

    2013-04-01

    High Voltage Engineering has designed, built and tested a 2 MV dual irradiation system that will be applied for radiation damage studies and ion beam material modification. The system consists of two independent accelerators which support simultaneous proton and electron irradiation (energy range 100 keV - 2 MeV) of target sizes of up to 300 × 300 mm2. Three dimensional finite element methods were used in the design of various parts of the system. The electrostatic solver was used to quantify essential parameters of the solid-state power supply generating the DC high voltage. The magnetostatic solver and ray tracing were used to optimize the electron/ion beam transport. Close agreement between design and measurements of the accelerator characteristics as well as beam performance indicate the usefulness of three dimensional finite element methods during accelerator system design.

  6. Multidisciplinary Design Optimization (MDO) Methods: Their Synergy with Computer Technology in Design Process

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1998-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that, when examined in terms of these attributes, the presently available environment can be shown to be inadequate; a radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimization (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed: specifically, innovative algorithms that are intrinsically parallel, so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behavior by the interaction of a large number of very simple models may be an inspiration for such algorithms; cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.

  7. Methods and Strategies: Derby Design Day

    ERIC Educational Resources Information Center

    Kennedy, Katheryn

    2013-01-01

    In this article the author describes the "Derby Design Day" project--a project that paired high school honors physics students with second-grade children for a design challenge and competition. The overall project goals were to discover whether collaboration in a design process would: (1) increase an interest in science; (2) enhance the…

  8. Scaffolded Instruction Improves Student Understanding of the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    D'Costa, Allison R.; Schlueter, Mark A.

    2013-01-01

    Implementation of a guided-inquiry lab in introductory biology classes, along with scaffolded instruction, improved students' understanding of the scientific method, their ability to design an experiment, and their identification of experimental variables. Pre- and postassessments from experimental versus control sections over three semesters…

  9. Comparing problem-based learning and lecture as methods to teach whole-systems design to engineering students

    NASA Astrophysics Data System (ADS)

    Dukes, Michael Dickey

    The objective of this research is to compare problem-based learning and lecture as methods to teach whole-systems design to engineering students. A case study, Appendix A, exemplifying successful whole-systems design was developed and written by the author in partnership with the Rocky Mountain Institute. Concepts to be tested were then determined, and a questionnaire was developed to test students' preconceptions. A control group of students was taught using traditional lecture methods, and a sample group of students was taught using problem-based learning methods. After several weeks, the students were given the same questionnaire as prior to the instruction, and the data was analyzed to determine if the teaching methods were effective in correcting misconceptions. A statistically significant change in the students' preconceptions was observed in both groups on the topic of cost related to the design process. There was no statistically significant change in the students' preconceptions concerning the design process, technical ability within five years, and the possibility of drastic efficiency gains with current technologies. However, the results were inconclusive in determining that problem-based learning is more effective than lecture as a method for teaching the concept of whole-systems design, or vice versa.

  10. Design in mind: eliciting service user and frontline staff perspectives on psychiatric ward design through participatory methods.

    PubMed

    Csipke, Emese; Papoulias, Constantina; Vitoratou, Silia; Williams, Paul; Rose, Diana; Wykes, Til

    2016-01-01

    Psychiatric ward design may make an important contribution to patient outcomes and well-being. However, research is hampered by an inability to assess its effects robustly. This paper reports on a study which deployed innovative methods to capture service user and staff perceptions of ward design. User generated measures of the impact of ward design were developed and tested on four acute adult wards using participatory methodology. Additionally, inpatients took photographs to illustrate their experience of the space in two wards. Data were compared across wards. Satisfactory reliability indices emerged based on both service user and staff responses. Black and minority ethnic (BME) service users and those with a psychosis spectrum diagnosis have more positive views of the ward layout and fixtures. Staff members have more positive views than service users, while priorities of staff and service users differ. Inpatient photographs prioritise hygiene, privacy and control and address symbolic aspects of the ward environment. Participatory and visual methodologies can provide robust tools for an evaluation of the impact of psychiatric ward design on users.
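
    A standard reliability index of the kind reported for such user-generated measures is Cronbach's alpha. The computation below is the textbook formula; the item scores are invented for illustration and have no connection to the study's data.

```python
def cronbach_alpha(items):
    # Cronbach's alpha for item-score columns (one list per item,
    # same respondents in the same order in every list).
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

items = [
    [3, 4, 5, 2, 4],   # item 1 scores for 5 respondents (invented)
    [3, 5, 5, 2, 3],   # item 2
    [4, 4, 5, 1, 4],   # item 3
]
alpha = cronbach_alpha(items)
```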

  11. Learning physics: A comparative analysis between instructional design methods

    NASA Astrophysics Data System (ADS)

    Mathew, Easow

    The purpose of this research was to determine if there were differences in academic performance between students who participated in traditional versus collaborative problem-based learning (PBL) instructional design approaches to physics curricula. This study utilized a quantitative quasi-experimental design methodology to determine the significance of differences in pre- and posttest introductory physics exam performance between students who participated in traditional (i.e., control group) versus collaborative problem solving (PBL) instructional design (i.e., experimental group) approaches to physics curricula over a college semester in 2008. There were 42 student participants (N = 42) enrolled in an introductory physics course at the research site in the Spring 2008 semester who agreed to participate in this study after reading and signing informed consent documents. A total of 22 participants were assigned to the experimental group (n = 22) who participated in a PBL based teaching methodology along with traditional lecture methods. The other 20 students were assigned to the control group (n = 20) who participated in the traditional lecture teaching methodology. Both the courses were taught by experienced professors who have qualifications at the doctoral level. The results indicated statistically significant differences (p < .01) in academic performance between students who participated in traditional (i.e., lower physics posttest scores and lower differences between pre- and posttest scores) versus collaborative (i.e., higher physics posttest scores, and higher differences between pre- and posttest scores) instructional design approaches to physics curricula. Despite some slight differences in control group and experimental group demographic characteristics (gender, ethnicity, and age) there were statistically significant (p = .04) differences between female average academic improvement which was much higher than male average academic improvement (˜63%) in

  12. Unstructured Finite Volume Computational Thermo-Fluid Dynamic Method for Multi-Disciplinary Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    1998-01-01

    This paper describes a finite volume computational thermo-fluid dynamics method for solving the Navier-Stokes equations in conjunction with the energy equation and a thermodynamic equation of state in an unstructured coordinate system. The system of equations has been solved by a simultaneous Newton-Raphson method and compared with several benchmark solutions. Excellent agreement has been obtained in each case, and the method has been found to be significantly faster than conventional Computational Fluid Dynamics (CFD) methods; it therefore has the potential for implementation in multidisciplinary analysis and design optimization in fluid and thermal systems. The paper also describes a design optimization algorithm based on the Newton-Raphson method which has recently been tested in a turbomachinery application.
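
    The simultaneous Newton-Raphson strategy can be shown on the smallest possible case: two coupled nonlinear equations, with the 2x2 linear solve written out via Cramer's rule. The equations below are a toy stand-in, not the paper's flow-network balances.

```python
def newton_raphson_2x2(f, jac, x0, tol=1e-10, max_iter=50):
    # Newton-Raphson for a 2-equation system: solve J * delta = -f
    # each iteration and update both unknowns simultaneously.
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        a, b, c, d = jac(x, y)          # J = [[a, b], [c, d]]
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det   # Cramer's rule
        dy = (-a * f2 + c * f1) / det
        x, y = x + dx, y + dy
    return x, y

# Toy system: x^2 + y^2 = 4 and x*y = 1.
f = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
jac = lambda x, y: (2 * x, 2 * y, y, x)
x, y = newton_raphson_2x2(f, jac, (2.0, 0.5))
```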

  13. Initial system design method for non-rotationally symmetric systems based on Gaussian brackets and Nodal aberration theory.

    PubMed

    Zhong, Yi; Gross, Herbert

    2017-05-01

    Freeform surfaces play important roles in improving the imaging performance of off-axis optical systems. However, for some systems with high requirements in specifications, the structure of the freeform surfaces can be very complicated and the number of freeform surfaces can be large. That brings challenges in fabrication and increases the cost. Therefore, achieving a good initial system with minimal aberrations and a reasonable structure before implementing freeform surfaces is essential for optical designers. The existing initial system design methods are limited to certain types of systems; a universal tool or method to achieve a good initial system efficiently is very important. In this paper, based on nodal aberration theory and the system design method using Gaussian brackets, the initial system design method is extended from rotationally symmetric systems to general non-rotationally symmetric systems. The design steps are introduced and, on this basis, two off-axis three-mirror systems are pre-designed using spherical surfaces. The primary aberrations are minimized using a nonlinear least-squares solver. This work provides insight and guidance for the initial system design of off-axis mirror systems.
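
    The paraxial bookkeeping that Gaussian brackets encode is equivalent to composing 2x2 ray-transfer matrices, which the sketch below illustrates for a two-thin-lens system. This shows only the generic matrix algebra, not the paper's nodal-aberration-based method; the focal lengths and spacing are invented.

```python
def mat_mul(m2, m1):
    # 2x2 matrix product m2 @ m1; matrices are ((a, b), (c, d)).
    (a, b), (c, d) = m2
    (e, f), (g, h) = m1
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def system_matrix(elements):
    # Compose element matrices in the order the ray meets them.
    m = ((1.0, 0.0), (0.0, 1.0))
    for elem in elements:
        m = mat_mul(elem, m)
    return m

def thin_lens(f):
    return ((1.0, 0.0), (-1.0 / f, 1.0))

def gap(t):
    return ((1.0, t), (0.0, 1.0))

# Two thin lenses, f = 100 mm each, separated by 40 mm.
m = system_matrix([thin_lens(100.0), gap(40.0), thin_lens(100.0)])
efl = -1.0 / m[1][0]     # effective focal length from the C element
```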

  14. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  15. On the Use of Parametric-CAD Systems and Cartesian Methods for Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2004-01-01

    Automated, high-fidelity tools for aerodynamic design face critical issues in attempting to optimize real-life geometry and in permitting radical design changes. Success in these areas promises not only significantly shorter design-cycle times, but also superior and unconventional designs. To address these issues, we investigate the use of a parametric-CAD system in conjunction with an embedded-boundary Cartesian method. Our goal is to combine the modeling capabilities of feature-based CAD with the robustness and flexibility of component-based Cartesian volume-mesh generation for complex geometry problems. We present the development of an automated optimization framework with a focus on the deployment of such a CAD-based design approach in a heterogeneous parallel computing environment.

  16. Incorporating café design principles into End-of-Life discussions: an innovative method for continuing education.

    PubMed

    Kanaskie, Mary Louise

    2011-04-01

    Café design provides an innovative method for conducting continuing education activities. This method was chosen to elicit meaningful conversation based on issues related to end-of-life care. Café design principles incorporate the following: setting the context, creating hospitable space, exploring questions that matter, encouraging everyone's contributions, connecting diverse perspectives, listening together for insights, and sharing collective discoveries. Key discussion questions were identified from the End-of-Life Nursing Education Consortium Core Curriculum. Questions were revised to incorporate the principles of appreciative inquiry, which encourage a shift from traditional methods of problem identification to creation of a positive vision. Participants rated the café design method as an effective way to share their ideas and to stimulate conversation.

  17. Design of horizontal-axis wind turbine using blade element momentum method

    NASA Astrophysics Data System (ADS)

    Bobonea, Andreea; Pricop, Mihai Victor

    2013-10-01

    The study of mathematical models applied to wind turbine design, principally for electrical energy generation, has become significant in recent years due to the increasing use of renewable energy sources with low environmental impact. This paper presents an alternative mathematical scheme for wind turbine design based on Blade Element Momentum (BEM) theory. The results of the BEM method depend strongly on the precision of the lift and drag coefficients. The BEM method rests on the assumption that the blade can be analyzed as a number of independent elements in the spanwise direction. The induced velocity at each element is determined by performing a momentum balance for a control volume containing the blade element. The aerodynamic forces on the element are calculated using the lift and drag coefficients from empirical two-dimensional wind tunnel test data at the geometric angle of attack (AOA) of the blade element relative to the local flow velocity.
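
    As a hedged illustration of the element-wise momentum balance described above, the following Python sketch iterates the axial and tangential induction factors for a single blade element. The linear lift curve and constant drag coefficient are placeholder aerodynamics, and all geometry numbers are made up; as the abstract notes, a real design would interpolate measured airfoil polars instead.

```python
import math

def bem_element(r, chord, twist, V, omega, B=3, tol=1e-6, max_iter=200):
    """Iterate the axial (a) and tangential (ap) induction factors for
    one blade element via the classical momentum balance."""
    sigma = B * chord / (2.0 * math.pi * r)           # local solidity
    a, ap = 0.0, 0.0
    for _ in range(max_iter):
        # inflow angle from the velocity triangle at this element
        phi = math.atan2((1.0 - a) * V, (1.0 + ap) * omega * r)
        alpha = phi - twist                           # local angle of attack
        cl, cd = 2.0 * math.pi * alpha, 0.01          # placeholder airfoil polar
        cn = cl * math.cos(phi) + cd * math.sin(phi)  # normal force coefficient
        ct = cl * math.sin(phi) - cd * math.cos(phi)  # tangential force coefficient
        # momentum balance for the annular control volume containing the element
        a_new = 1.0 / (4.0 * math.sin(phi) ** 2 / (sigma * cn) + 1.0)
        ap_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1.0)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            a, ap = a_new, ap_new
            break
        a, ap = a_new, ap_new
    return a, ap, alpha

# one mid-span element of a hypothetical 3-bladed rotor
a, ap, alpha = bem_element(r=20.0, chord=1.5, twist=math.radians(5.0),
                           V=8.0, omega=2.0)
```

    In practice the fixed-point loop above is wrapped over all spanwise elements, and tip-loss and high-induction corrections are added before the forces are integrated into thrust and torque.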

  18. Analytical Methods for a Learning Health System: 2. Design of Observational Studies

    PubMed Central

    Stoto, Michael; Oakes, Michael; Stuart, Elizabeth; Priest, Elisa L.; Savitz, Lucy

    2017-01-01

    The second paper in a series on how learning health systems can use routinely collected electronic health data (EHD) to advance knowledge and support continuous learning, this review summarizes study design approaches, including choosing appropriate data sources, and methods for design and analysis of natural and quasi-experiments. The primary strength of study design approaches described in this section is that they study the impact of a deliberate intervention in real-world settings, which is critical for external validity. These evaluation designs address estimating the counterfactual – what would have happened if the intervention had not been implemented. At the individual level, epidemiologic designs focus on identifying situations in which bias is minimized. Natural and quasi-experiments focus on situations where the change in assignment breaks the usual links that could lead to confounding, reverse causation, and so forth. And because these observational studies typically use data gathered for patient management or administrative purposes, the possibility of observation bias is minimized. The disadvantages are that one cannot necessarily attribute the effect to the intervention (as opposed to other things that might have changed), and the results do not indicate what about the intervention made a difference. Because they cannot rely on randomization to establish causality, program evaluation methods demand a more careful consideration of the “theory” of the intervention and how it is expected to play out. A logic model describing this theory can help to design appropriate comparisons, account for all influential variables in a model, and help to ensure that evaluation studies focus on the critical intermediate and long-term outcomes as well as possible confounders. PMID:29881745

  19. Subsonic panel method for designing wing surfaces from pressure distribution

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.; Hawk, J. D.

    1983-01-01

    An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
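
    The iteration cycle described above can be sketched in a few lines. In the toy Python example below, the baseline panel solution and the sensitivity matrix of the surface solution with respect to the geometry parameters are replaced by a made-up linear model for surface speed; only the structure of the loop (linear extrapolation from the baseline, Bernoulli conversion to pressure, least-squares geometry update) follows the text.

```python
import numpy as np

V_inf = 1.0
v0 = np.array([1.10, 1.25, 1.15])            # baseline surface speeds (made up)
J = np.array([[0.30, 0.05],
              [0.10, 0.40],
              [0.05, 0.20]])                 # d(speed)/d(geometry parameter)
cp_target = np.array([-0.30, -0.45, -0.25])  # prescribed pressure distribution

def cp_from_speed(v):
    """Bernoulli's equation for incompressible flow: Cp = 1 - (v/V_inf)^2."""
    return 1.0 - (v / V_inf) ** 2

g = np.zeros(2)                              # geometry perturbation parameters
for _ in range(20):
    v = v0 + J @ g                           # linear extrapolation from baseline
    r = cp_from_speed(v) - cp_target         # pressure mismatch at control points
    dcp_dg = (-2.0 * v / V_inf**2)[:, None] * J   # chain rule through Bernoulli
    dg, *_ = np.linalg.lstsq(dcp_dg, -r, rcond=None)
    g += dg                                  # update the geometry perturbation

residual = np.linalg.norm(cp_from_speed(v0 + J @ g) - cp_target)
```

    With three control points and two parameters the system is overdetermined, so the loop settles at a least-squares fit rather than an exact match, mirroring how a real design run trades off targets across the surface.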

  20. Mixed-Methods Design in Biology Education Research: Approach and Uses.

    PubMed

    Warfa, Abdi-Rizak M

    Educational research often requires mixing different research methodologies to strengthen findings, better contextualize or explain results, or minimize the weaknesses of a single method. This article provides practical guidelines on how to conduct such research in biology education, with a focus on mixed-methods research (MMR) that uses both quantitative and qualitative inquiries. Specifically, the paper provides an overview of mixed-methods design typologies most relevant in biology education research. It also discusses common methodological issues that may arise in mixed-methods studies and ways to address them. The paper concludes with recommendations on how to report and write about MMR. © 2016 L. A.-R. M. Warfa. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  1. A novel method for designing and fabricating low-cost facepiece prototypes.

    PubMed

    Joe, Paula S; Shum, Phillip C; Brown, David W; Lungu, Claudiu T

    2014-01-01

    In 2010, the National Institute for Occupational Safety and Health (NIOSH) published new digital head form models based on their recently updated fit-test panel. The new panel, based on the 2000 census to better represent the modern work force, created two additional sizes: Short/Wide and Long/Narrow. While collecting the anthropometric data that comprised the panel, additional three-dimensional data were collected on a subset of the subjects. Within each sizing category, five individuals' three-dimensional data were used to create the new head form models. While NIOSH has recommended a switch to a five-size system for designing respirators, little has been done in assessing the potential benefits of this change. With commercially available elastomeric facepieces available in only three or four size systems, it was necessary to develop the facepieces to enable testing. This study aims to develop a method for designing and fabricating elastomeric facepieces tailored to the new head form designs for use in fit-testing studies. This novel method used computed tomography of a solid silicone facepiece and a number of computer-aided design programs (VolView, ParaView, MEGG3D, and RapidForm XOR) to develop a facepiece model to accommodate the Short/Wide head form. The generated model was given a physical form by means of three-dimensional printing using stereolithography (SLA). The printed model was then used to create a silicone mold from which elastomeric prototypes can be cast. The prototype facepieces were cast in two types of silicone for use in future fit-testing.

  2. The Hip Impact Protection Project: Design and Methods

    PubMed Central

    Barton, Bruce A; Birge, Stanley J; Magaziner, Jay; Zimmerman, Sheryl; Ball, Linda; Brown, Kathleen M; Kiel, Douglas P

    2013-01-01

    Background Nearly 340,000 hip fractures occur each year in the U.S. With current demographic trends, the number of hip fractures is expected to double at least in the next 40 years. Purpose The Hip Impact Protection Project (HIP PRO) was designed to investigate the efficacy and safety of hip protectors in an elderly nursing home population. This paper describes the innovative clustered matched-pair research design used in HIP PRO to overcome the inherent limitations of clustered randomization. Methods Three clinical centers recruited 37 nursing homes to participate in HIP PRO. They were randomized so that the participating residents in that home received hip protectors for either the right or left hip. Informed consent was obtained from either the resident or the resident's responsible party. The target sample size was 580 residents with replacement if they dropped out, had a hip fracture, or died. One of the advantages of the HIP PRO study design was that each resident was his/her own case and control, eliminating imbalances, and there was no confusion over which residents wore pads (or on which hip). Limitations Generalizability of the findings may be limited. Adherence was higher in this study than in other studies because of: (1) the use of a run-in period, (2) staff incentives, and (3) the frequency of adherence assessments. The use of a single pad is not analogous to pad use in the real world and may have caused unanticipated changes in behavior. Fall assessment was not feasible, limiting the ability to analyze fractures as a function of falls. Finally, hip protector designs continue to evolve so that the results generated using this pad may not be applicable to other pad designs. However, information about factors related to adherence will be useful for future studies. Conclusions The clustered matched-pair study design avoided the major problem with previous cluster-randomized investigations of this question – unbalanced risk factors between the

  3. [Again review of research design and statistical methods of Chinese Journal of Cardiology].

    PubMed

    Kong, Qun-yu; Yu, Jin-ming; Jia, Gong-xian; Lin, Fan-li

    2012-11-01

    To re-evaluate the research design and use of statistical methods in the Chinese Journal of Cardiology. The research design and statistical methods in all original papers published in the Chinese Journal of Cardiology during 2011 were summarized and compared with the evaluation of 2008. (1) There was no difference in the distribution of research designs between the two volumes. Compared with the earlier volume, the use of survival regression and non-parametric tests increased, while the proportion of articles with no statistical analysis decreased. (2) The proportions of problematic articles in the later volume were significantly lower than in the earlier one: 6 (4%) with flaws in design, 5 (3%) with flaws in expression, and 9 (5%) with incomplete analysis. (3) The rate of correct use of analysis of variance increased, as did that of multi-group comparisons and tests of normality. The usage error rate attributable to neglecting the test of homogeneity of variance decreased (17% vs. 25%), though without statistical significance. The Chinese Journal of Cardiology showed many improvements, such as more standardized design and statistics; the test of homogeneity of variance deserves more attention in future applications.

  4. Research on design method of the full form ship with minimum thrust deduction factor

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Miao, Ai-qin; Zhang, Zhu-xin

    2015-04-01

    In the preliminary design stage of full form ships, in order to obtain a hull form with low resistance and maximum propulsion efficiency, an optimization design program for a full form ship with the minimum thrust deduction factor has been developed, combining potential flow theory and boundary layer theory with optimization techniques. In the optimization process, the Sequential Unconstrained Minimization Technique (SUMT) interior point method of Nonlinear Programming (NLP) was adopted with the minimum thrust deduction factor as the objective function. An appropriate displacement is the basic constraint condition, and avoidance of boundary layer separation is an additional one. The parameters of the hull form modification function are used as design variables. Finally, a numerical optimization example for the after-body lines of a 50000 DWT product oil tanker is provided, indicating that the propulsion efficiency was distinctly improved by this optimal design method.
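
    The SUMT interior point idea referenced above can be illustrated on a toy problem. In this hedged Python sketch, the thrust deduction objective is replaced by a made-up quadratic and the displacement requirement by a single linear inequality; the method minimizes a sequence of barrier functions with a decreasing barrier parameter, driving the iterate toward the constrained optimum from strictly inside the feasible region.

```python
def f(x1, x2):
    """Stand-in objective (e.g. thrust deduction factor); numbers are made up."""
    return (x1 - 0.2) ** 2 + (x2 - 0.3) ** 2

def g(x1, x2):
    """Stand-in displacement constraint; the design is feasible when g >= 0."""
    return x1 + x2 - 1.0

def barrier_descent(mu, x1, x2, lr=0.002, steps=20000):
    """Gradient descent on the interior penalty function f - mu*log(g)."""
    for _ in range(steps):
        gx = g(x1, x2)
        d1 = 2.0 * (x1 - 0.2) - mu / gx     # gradient of f - mu*log(g) in x1
        d2 = 2.0 * (x2 - 0.3) - mu / gx     # gradient of f - mu*log(g) in x2
        t = lr
        # backtrack so the iterate stays strictly inside the feasible region
        while g(x1 - t * d1, x2 - t * d2) <= 0.0:
            t *= 0.5
        x1, x2 = x1 - t * d1, x2 - t * d2
    return x1, x2

x1, x2 = 0.8, 0.8                   # strictly feasible starting design
for mu in (1.0, 0.1, 0.01, 0.001):  # decreasing barrier parameter (SUMT sequence)
    x1, x2 = barrier_descent(mu, x1, x2)
# the constrained optimum of this toy problem is (0.45, 0.55)
```

    In the ship-design application, f and g would instead be evaluated by the potential-flow and boundary-layer computations over the hull-form parameters.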

  5. Spacesuit Radiation Shield Design Methods

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Anderson, Brooke M.; Cucinotta, Francis A.; Ware, J.; Zeitlin, Cary J.

    2006-01-01

    Meeting radiation protection requirements during EVA is predominantly an operational issue with some potential considerations for temporary shelter. The issue of spacesuit shielding is mainly guided by the potential of accidental exposure when operational and temporary shelter considerations fail to maintain exposures within operational limits. In this case, very high exposure levels are possible which could result in observable health effects and even be life threatening. Under these assumptions, potential spacesuit radiation exposures have been studied using known historical solar particle events to gain insight on the usefulness of modification of spacesuit design in which the control of skin exposure is a critical design issue and reduction of blood forming organ exposure is desirable. Transition to a new spacesuit design including soft upper-torso and reconfigured life support hardware gives an opportunity to optimize the next generation spacesuit for reduced potential health effects during an accidental exposure.

  6. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, and boundary conditions. Reliability methods measure the structural safety condition and determine the optimal combination of design parameters based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory with optimization, is the most commonly used approach to minimize structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate such incomplete information into the uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.
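
    A minimal sketch of the RBDO loop, under assumed distributions: Monte Carlo simulation estimates the failure probability of each candidate design, and the cheapest design satisfying a reliability target is selected. The load/capacity model, the cost proxy (smaller thickness d is cheaper) and the 1e-2 target are all illustrative assumptions, and the Bayesian treatment of incomplete information discussed in the paper is omitted here.

```python
import random

random.seed(0)

def failure_probability(d, n=20000):
    """Monte Carlo estimate of P(load > capacity) for mean thickness d."""
    failures = 0
    for _ in range(n):
        capacity = random.gauss(10.0 * d, 1.0)   # R: strength grows with d
        load = random.gauss(20.0, 2.0)           # S: uncertain external load
        if load > capacity:
            failures += 1
    return failures / n

target_pf = 1e-2
best = None
for d in (2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.8, 3.0):
    if failure_probability(d) <= target_pf:  # reliability constraint
        best = d                             # smallest feasible d = lowest cost
        break
```

    Production RBDO replaces the brute-force sweep with gradient-based optimizers and faster reliability estimators (FORM/SORM), but the structure, cost minimized subject to a probabilistic constraint, is the same.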

  7. Development of a neutronics calculation method for designing commercial type Japanese sodium-cooled fast reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeda, T.; Shimazu, Y.; Hibi, K.

    2012-07-01

    Under the R&D project to improve modeling accuracy for the design of fast breeder reactors, the authors are developing a neutronics calculation method for designing a large commercial sodium-cooled fast reactor. The calculation method is established by taking into account the special features of the reactor, such as the use of annular fuel pellets, inner duct tubes in large fuel assemblies, and a large core. Verification and Validation, and Uncertainty Quantification (V&V and UQ) of the calculation method is being performed using measured data from the prototype FBR Monju. The results of this project will be used in the design and analysis of the commercial type demonstration FBR, known as the Japan Sodium-cooled Fast Reactor (JSFR). (authors)

  8. Design method of combined protective against space environmental effects on spacecraft

    NASA Astrophysics Data System (ADS)

    Shen, Zicai; Gong, Zizheng; Ding, Yigang; Liu, Yuming; Liu, Yenan

    2016-01-01

    During its projected extended stay in LEO, a spacecraft will encounter many environmental factors, including energetic particles, ultraviolet radiation, atomic oxygen, and space debris and meteoroids, together with induced environments such as contamination and discharging. These space environments and their effects threaten the reliability and lifetime of spacecraft, so it is important to produce a combined design against them. This paper first reviews the space environments and their effects, then discusses the design process and methods for protecting against them, and finally proposes advice on protective structures and materials.

  9. A simple method of calculating Stirling engines for engine design optimization

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1978-01-01

    A calculation method is presented for a rhombic drive Stirling engine with a tubular heater and cooler and a screen type regenerator. Generally the equations presented describe power generation and consumption and heat losses. It is the simplest type of analysis that takes into account the conflicting requirements inherent in Stirling engine design. The method itemizes the power and heat losses for intelligent engine optimization. The results of engine analysis of the GPU-3 Stirling engine are compared with more complicated engine analysis and with engine measurements.
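
    The loss-itemization idea can be sketched in a few lines: ideal power and heat input are corrected by separately computed loss terms, and the itemized list shows the designer where the largest gains lie. The loss categories below are typical of Stirling analyses of this kind, but every number is a placeholder, not a value from the GPU-3 study.

```python
# basic (ideal) outputs from the thermodynamic cycle, in watts
basic_power_w = 4000.0
basic_heat_in_w = 9000.0

# itemized corrections, each computed by its own closed-form expression
# in an analysis of this type; the values here are placeholders only
power_losses_w = {
    "adiabatic correction": 600.0,
    "flow friction": 450.0,
    "mechanical friction": 350.0,
}
heat_losses_w = {
    "reheat (regenerator ineffectiveness)": 900.0,
    "shuttle conduction": 250.0,
    "static conduction": 400.0,
    "pumping loss": 150.0,
}

net_power_w = basic_power_w - sum(power_losses_w.values())     # brake power
heat_in_w = basic_heat_in_w + sum(heat_losses_w.values())      # heater duty
efficiency = net_power_w / heat_in_w
```

    Because each loss is a separate closed-form term in the design variables, the designer can see directly which change (e.g. a better regenerator) buys the most efficiency.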

  10. A decision-based perspective for the design of methods for systems design

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Muster, Douglas; Shupe, Jon A.; Allen, Janet K.

    1989-01-01

    Topics covered include the organization of material, a definition of decision-based design, a hierarchy of decision-based design, the decision support problem technique, a conceptual model for design that can be manufactured and maintained, meta-design, computer-based design, action learning, and the characteristics of decisions.

  11. An adaptive two-stage dose-response design method for establishing proof of concept.

    PubMed

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
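
    The combination of stage-wise test results can be illustrated with a simple inverse-normal rule, which here stands in for the paper's conditional error function approach; the equal weights and the stage-wise p-values in this Python sketch are assumptions for illustration only.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (accurate enough for illustration)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def combined_p(p1, p2, w1=math.sqrt(0.5), w2=math.sqrt(0.5)):
    """Inverse-normal combination of two stage-wise p-values
    with prespecified weights (w1^2 + w2^2 = 1)."""
    z = w1 * norm_ppf(1.0 - p1) + w2 * norm_ppf(1.0 - p2)
    return 1.0 - norm_cdf(z)

# made-up stage 1 and stage 2 p-values: neither alone is decisive
# at a one-sided 0.025 level, but the combined evidence is stronger
p_global = combined_p(0.04, 0.08)
```

    Because the weights are fixed before stage 1, this kind of combination preserves the overall type I error even when the stage 2 design is adapted, which is the property the conditional error function formalizes.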

  12. Functional Mobility Testing: A Novel Method to Create Suit Design Requirements

    NASA Technical Reports Server (NTRS)

    England, Scott A.; Benson, Elizabeth A.; Rajulu, Sudhakar L.

    2008-01-01

    This study was performed to aid in the creation of design requirements for the next generation of space suits that more accurately describe the level of mobility necessary for a suited crewmember through the use of an innovative methodology utilizing functional mobility. A novel method was utilized involving the collection of kinematic data while 20 subjects (10 male, 10 female) performed pertinent functional tasks that will be required of a suited crewmember during various phases of a lunar mission. These tasks were selected based on relevance and criticality from a larger list of tasks that may be carried out by the crew. Kinematic data were processed through Vicon BodyBuilder software to calculate joint angles for the ankle, knee, hip, torso, shoulder, elbow, and wrist. Maximum functional mobility was consistently lower than maximum isolated mobility. This study suggests that conventional methods for establishing design requirements for human-systems interfaces based on maximal isolated joint capabilities may overestimate the required mobility. Additionally, this method provides a valuable means of evaluating systems created from these requirements by comparing the mobility available in a new spacesuit, or the mobility required to use a new piece of hardware, to this newly established database of functional mobility.

  13. A New Design Method of Automotive Electronic Real-time Control System

    NASA Astrophysics Data System (ADS)

    Zuo, Wenying; Li, Yinguo; Wang, Fengjuan; Hou, Xiaobo

    The structure and functionality of automotive electronic control systems are becoming more and more complex, and the traditional manual-programming development mode can no longer satisfy development needs. Therefore, in order to meet the demands for diversity and rapid development of real-time control systems, this paper proposes a new design method for automotive electronic control systems based on Simulink/RTW, combining the model-based design approach with automatic code generation technology. First, the algorithms are designed and a control system model is built in Matlab/Simulink. Embedded code is then generated automatically by RTW, and the automotive real-time control system is developed in an OSEK/VDX operating system environment. The new development mode can significantly shorten the development cycle of automotive electronic control systems; improve the portability, reusability and scalability of the code; and has practical value for the development of real-time control systems.

  14. Mixed-methods designs in mental health services research: a review.

    PubMed

    Palinkas, Lawrence A; Horwitz, Sarah M; Chamberlain, Patricia; Hurlburt, Michael S; Landsverk, John

    2011-03-01

    Despite increased calls for use of mixed-methods designs in mental health services research, how and why such methods are being used and whether there are any consistent patterns that might indicate a consensus about how such methods can and should be used are unclear. Use of mixed methods was examined in 50 peer-reviewed journal articles found by searching PubMed Central and 60 National Institutes of Health (NIH)-funded projects found by searching the CRISP database over five years (2005-2009). Studies were coded for aims and the rationale, structure, function, and process for using mixed methods. A notable increase was observed in articles published and grants funded over the study period. However, most did not provide an explicit rationale for using mixed methods, and 74% gave priority to use of quantitative methods. Mixed methods were used to accomplish five distinct types of study aims (assess needs for services, examine existing services, develop new or adapt existing services, evaluate services in randomized controlled trials, and examine service implementation), with three categories of rationale, seven structural arrangements based on timing and weighting of methods, five functions of mixed methods, and three ways of linking quantitative and qualitative data. Each study aim was associated with a specific pattern of use of mixed methods, and four common patterns were identified. These studies offer guidance for continued progress in integrating qualitative and quantitative methods in mental health services research consistent with efforts by NIH and other funding agencies to promote their use.

  15. Parameter Studies, time-dependent simulations and design with automated Cartesian methods

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael

    2005-01-01

    Over the past decade, NASA has made a substantial investment in developing adaptive Cartesian grid methods for aerodynamic simulation. Cartesian-based methods played a key role in both the Space Shuttle Accident Investigation and in NASA's return-to-flight activities. The talk will provide an overview of recent technological developments, focusing on the generation of large-scale aerodynamic databases, automated CAD-based design, and time-dependent simulations of bodies in relative motion. Automation, scalability and robustness underlie all of these applications, and research in each of these topics will be presented.

  16. Development of a three-dimensional multistage inverse design method for aerodynamic matching of axial compressor blading

    NASA Astrophysics Data System (ADS)

    van Rooij, Michael P. C.

    Current turbomachinery design systems increasingly rely on multistage Computational Fluid Dynamics (CFD) as a means to assess performance of designs. However, design weaknesses attributed to improper stage matching are addressed using often ineffective strategies involving a costly iterative loop between blading modification, revision of design intent, and evaluation of aerodynamic performance. A design methodology is presented which greatly improves the process of achieving design-point aerodynamic matching. It is based on a three-dimensional viscous inverse design method which generates the blade camber surface based on prescribed pressure loading, thickness distribution and stacking line. This inverse design method has been extended to allow blading analysis and design in a multi-blade row environment. Blade row coupling was achieved through a mixing plane approximation. Parallel computing capability in the form of MPI has been implemented to reduce the computational time for multistage calculations. Improvements have been made to the flow solver to reach the level of accuracy required for multistage calculations. These include inclusion of heat flux, temperature-dependent treatment of viscosity, and improved calculation of stress components and artificial dissipation near solid walls. A validation study confirmed that the obtained accuracy is satisfactory at design point conditions. Improvements have also been made to the inverse method to increase robustness and design fidelity. These include the possibility to exclude spanwise sections of the blade near the endwalls from the design process, and a scheme that adjusts the specified loading area for changes resulting from the leading and trailing edge treatment. Furthermore, a pressure loading manager has been developed. Its function is to automatically adjust the pressure loading area distribution during the design calculation in order to achieve a specified design objective. Possible objectives are overall

  17. Child/Adolescent Anxiety Multimodal Study (CAMS): rationale, design, and methods

    PubMed Central

    2010-01-01

    Objective To present the design, methods, and rationale of the Child/Adolescent Anxiety Multimodal Study (CAMS), a recently completed federally-funded, multi-site, randomized placebo-controlled trial that examined the relative efficacy of cognitive-behavior therapy (CBT), sertraline (SRT), and their combination (COMB) against pill placebo (PBO) for the treatment of separation anxiety disorder (SAD), generalized anxiety disorder (GAD) and social phobia (SoP) in children and adolescents. Methods Following a brief review of the acute outcomes of the CAMS trial, as well as the psychosocial and pharmacologic treatment literature for pediatric anxiety disorders, the design and methods of the CAMS trial are described. Results CAMS was a six-year, six-site, randomized controlled trial. Four hundred eighty-eight (N = 488) children and adolescents (ages 7-17 years) with DSM-IV-TR diagnoses of SAD, GAD, or SoP were randomly assigned to one of four treatment conditions: CBT, SRT, COMB, or PBO. Assessments of anxiety symptoms, safety, and functional outcomes, as well as putative mediators and moderators of treatment response were completed in a multi-measure, multi-informant fashion. Manual-based therapies, trained clinicians and independent evaluators were used to ensure treatment and assessment fidelity. A multi-layered administrative structure with representation from all sites facilitated cross-site coordination of the entire trial, study protocols and quality assurance. Conclusions CAMS offers a model for clinical trials methods applicable to psychosocial and psychopharmacological comparative treatment trials by using state-of-the-art methods and rigorous cross-site quality controls. CAMS also provided a large-scale examination of the relative and combined efficacy and safety of the best evidenced-based psychosocial (CBT) and pharmacologic (SSRI) treatments to date for the most commonly occurring pediatric anxiety disorders. 
Primary and secondary results of CAMS will hold

  18. Robust Optimization Design for Turbine Blade-Tip Radial Running Clearance using Hierarchically Response Surface Method

    NASA Astrophysics Data System (ADS)

    Zhiying, Chen; Ping, Zhou

    2017-11-01

    Considering the computational precision and efficiency of robust optimization for complex mechanical assembly relationships such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model relating the overall parameters to the blade-tip clearance, and a set of samples of the design parameters and the objective response mean and/or standard deviation is then generated using this system approximation model and the design of experiments method. Finally, a new response surface approximation model is constructed from those samples and used for the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way to carry out robust optimization design of turbine blade-tip radial running clearance.
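
    The two-layer surrogate idea can be sketched as follows: an expensive model (here a toy analytic clearance function, chosen only for illustration) is sampled at a small design of experiments, a quadratic polynomial response surface is fitted by least squares, and the cheap surrogate then supplies the response mean and standard deviation inside the robust-optimization loop.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x1, x2):
    """Toy stand-in for the costly clearance analysis (an assumption)."""
    return 1.0 + 0.5 * x1 - 0.3 * x2 + 0.2 * x1 * x2 + 0.1 * x1 ** 2

# design of experiments: random samples of the design parameters in [-1, 1]^2
X = rng.uniform(-1.0, 1.0, size=(30, 2))
y = expensive_model(X[:, 0], X[:, 1])

def basis(x1, x2):
    """Full quadratic polynomial basis in two variables."""
    x1, x2 = np.atleast_1d(x1), np.atleast_1d(x2)
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# least-squares fit of the response surface coefficients
coef, *_ = np.linalg.lstsq(basis(X[:, 0], X[:, 1]), y, rcond=None)

def surrogate(x1, x2):
    return basis(x1, x2) @ coef

# robust evaluation on the cheap surrogate: response mean and standard
# deviation under input scatter around a nominal design point
samples = rng.normal([0.2, -0.1], 0.05, size=(5000, 2))
vals = surrogate(samples[:, 0], samples[:, 1])
mu_resp, sigma_resp = vals.mean(), vals.std()
```

    A robust optimizer would then search the nominal design point to trade off mu_resp against sigma_resp, calling only the surrogate rather than the expensive assembly analysis.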

  19. Curiosity and Pedagogy: A Mixed-Methods Study of Student Experiences in the Design Studio

    ERIC Educational Resources Information Center

    Smith, Korydon H.

    2010-01-01

    Curiosity is often considered the foundation of learning. There is, however, little understanding of how (or if) pedagogy in higher education affects student curiosity, especially in the studio setting of architecture, interior design, and landscape architecture. This study used mixed-methods to investigate curiosity among design students in the…

  20. In Vitro Androgen Bioassays as a Detection Method for Designer Androgens

    PubMed Central

    Cooper, Elliot R.; McGrath, Kristine C. Y.; Heather, Alison K.

    2013-01-01

    Androgens are the class of sex steroids responsible for male sexual characteristics, including increased muscle mass and decreased fat mass. Illicit use of androgen doping can be an attractive option for those looking to enhance sporting performance and/or physical appearance. The use of in vitro bioassays to detect androgens, especially designer or proandrogens, is becoming increasingly important in combating androgen doping associated with nutritional supplements. The nutritional sports supplement market has grown rapidly throughout the past decade. Many of these supplements contain androgens, designer androgens or proandrogens. Many designer or proandrogens cannot be detected by the standard highly-sensitive screening methods such as gas chromatography-mass spectrometry because their chemical structure is unknown. However, in vitro androgen bioassays can detect designer and proandrogens as these assays are not reliant on knowing the chemical structure but instead are based on androgen receptor activation. For these reasons, it may be advantageous to use routine androgen bioassay screening of nutraceutical samples to help curb the increasing problem of androgen doping. PMID:23389345

  1. Robust design optimization using the price of robustness, robust least squares and regularization methods

    NASA Astrophysics Data System (ADS)

    Bukhari, Hassan J.

    2017-12-01

    In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in the parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that are allowed to perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation of the parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other non-linear, and compared with a prior method based on multiple Monte Carlo simulation runs; the comparison shows that the approach presented in this paper yields better performance.
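    The third approach, Tikhonov-style regularization, is the easiest to sketch: penalizing the solution norm trades a small amount of residual for much lower sensitivity to perturbations in the data. The design matrix and noise level below are illustrative assumptions, not the paper's test problems.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ill-conditioned polynomial fitting problem with noisy data.
A = np.vander(np.linspace(0, 1, 30), 6, increasing=True)
x_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])
b = A @ x_true + rng.normal(0, 0.01, size=30)

# Ordinary least squares: minimizes the residual but is sensitive to
# perturbations in b when A is ill-conditioned.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Tikhonov-regularized solution: min ||Ax - b||^2 + mu ||x||^2,
# solved via the augmented normal equations (A^T A + mu I) x = A^T b.
mu = 1e-3
n = A.shape[1]
x_reg = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)
```

    The regularized solution has a smaller norm (lower sensitivity) at the cost of a slightly larger residual, which is exactly the robustness trade-off the abstract describes.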

  2. A comprehensive method for preliminary design optimization of axial gas turbine stages

    NASA Technical Reports Server (NTRS)

    Jenkins, R. M.

    1982-01-01

    A method is presented that performs a rapid, reasonably accurate preliminary pitchline optimization of axial gas turbine annular flowpath geometry, as well as an initial estimate of blade profile shapes, given only a minimum of thermodynamic cycle requirements. No geometric parameters need be specified. The following preliminary design data are determined: (1) the optimum flowpath geometry, within mechanical stress limits; (2) initial estimates of cascade blade shapes; (3) predictions of expected turbine performance. The method uses an inverse calculation technique whereby blade profiles are generated by designing channels to yield a specified velocity distribution on the two walls. Velocity distributions are then used to calculate the cascade loss parameters. Calculated blade shapes are used primarily to determine whether the assumed velocity loadings are physically realistic. Model verification is accomplished by comparison of predicted turbine geometry and performance with four existing single stage turbines.

  3. Southampton mealtime assistance study: design and methods

    PubMed Central

    2013-01-01

    Background Malnutrition is common in older people in hospital and is associated with adverse clinical outcomes including increased mortality, morbidity and length of stay. This has raised concerns about the nutrition and diet of hospital in-patients. A number of factors may contribute to low dietary intakes in hospital, including acute illness and cognitive impairment among in-patients. The extent to which other factors influence intake such as a lack of help at mealtimes, for patients who require assistance with eating, is uncertain. This study aims to evaluate the effectiveness of using trained volunteer mealtime assistants to help patients on an acute medical ward for older people at mealtimes. Methods/design The study design is quasi-experimental with a before (year one) and after (year two) comparison of patients on the intervention ward and parallel comparison with patients on a control ward in the same department. The intervention in the second year was the provision of trained volunteer mealtime assistance to patients in the intervention ward. There were three components of data collection that were repeated in both years on both wards. The first (primary) outcome was patients’ dietary intake, collected as individual patient records and as ward-level balance data over 24 hour periods. The second was clinical outcome data assessed on admission and discharge from both wards, and 6 and 12 months after discharge. Finally qualitative data on the views and experience of patients, carers, staff and volunteers was collected through interviews and focus groups in both years to allow a mixed-method evaluation of the intervention. Discussion The study will describe the effect of provision of trained volunteer mealtime assistants on the dietary intake of older medical in-patients. The association between dietary intake and clinical outcomes including malnutrition risk, body composition, grip strength, length of hospital stay and mortality will also be determined. An

  4. A method of computer aided design with self-generative models in NX Siemens environment

    NASA Astrophysics Data System (ADS)

    Grabowik, C.; Kalinowski, K.; Kempa, W.; Paprocka, I.

    2015-11-01

    Currently, CAD/CAE/CAM systems make it possible to create 3D virtual design models that capture a certain amount of knowledge. Such models are especially useful for automating routine design tasks. They are known as self-generative (or auto-generative) models and can behave in an intelligent way. The main difference between auto-generative and fully parametric models is the auto-generative models' ability to self-organize. Here, self-organization means that, besides being able to make automatic changes to a model's quantitative features, these models possess knowledge of how those changes should be made; moreover, they are able to change qualitative features according to specific knowledge. Despite the undoubted strengths of self-generative models, they are not often used in the constructional design process, mainly because of their usually great complexity. This complexity makes building a self-generative model time- and labour-consuming, and it requires considerable investment. The creation of a self-generative model consists of three stages: knowledge and information acquisition, model type selection, and model implementation. In this paper, methods of computer-aided design with self-generative models in NX Siemens CAD/CAE/CAM software are presented. Five methods of preparing self-generative models in NX are covered: parametric relations models, part families, GRIP language applications, knowledge fusion and the OPEN API mechanism. Examples of each type of self-generative model are presented. These methods make the constructional design process much faster, and preparing such models is recommended whenever design variants must be created. The conducted research on the usefulness of the elaborated models showed that they are highly recommended for automating routine tasks.
But it is still difficult to distinguish

  5. The transfer function method for gear system dynamics applied to conventional and minimum excitation gearing designs

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1982-01-01

    A transfer function method for predicting the dynamic responses of gear systems with more than one gear mesh is developed and applied to the NASA Lewis four-square gear fatigue test apparatus. Methods for computing bearing-support force spectra and temporal histories of the total force transmitted by a gear mesh, the force transmitted by a single pair of teeth, and the maximum root stress in a single tooth are developed. Dynamic effects arising from other gear meshes in the system are included. A profile modification design method to minimize the vibration excitation arising from a pair of meshing gears is reviewed and extended. Families of tooth loading functions required for such designs are developed and examined for potential excitation of individual tooth vibrations. The profile modification design method is applied to a pair of test gears.

  6. The Tool for Designing Engineering Systems Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on the initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experience. We applied the new optimization method to the design of a hang glider; in this problem, not only the hang glider design but also its flight trajectory was optimized. The numerical results showed that the method has sufficient performance.
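    The core idea, obtaining the solution as a stochastic average rather than by deterministic iteration, can be sketched as a Boltzmann-weighted Monte Carlo expectation. This is only a loose stand-in for the path-integral formulation in the paper, and the double-well objective and temperature are illustrative assumptions.

```python
import numpy as np

def stochastic_minimize(f, lo, hi, n_samples=20000, T=0.05, seed=2):
    """Estimate the minimizer as the Boltzmann-weighted expectation of x,
    E[x] with weights exp(-f(x)/T), evaluated by plain Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_samples)
    fx = f(x)
    w = np.exp(-(fx - fx.min()) / T)   # shift by the minimum for stability
    return np.sum(w * x) / np.sum(w)

# Double-well toy objective: local minimum near x = +1, global minimum near x = -1.
def double_well(x):
    return (x**2 - 1.0) ** 2 + 0.2 * x

x_est = stochastic_minimize(double_well, -2.0, 2.0)
```

    Because the answer is an average over many samples rather than the endpoint of a trajectory, no initial guess is involved, which mirrors the initial-condition independence claimed in the abstract.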

  7. What Informs Practice and What Is Valued in Corporate Instructional Design? A Mixed Methods Study

    ERIC Educational Resources Information Center

    Thompson-Sellers, Ingrid N.

    2012-01-01

    This study used a two-phased explanatory mixed-methods design to explore in-depth what factors are perceived by Instructional Design and Technology (IDT) professionals as impacting instructional design practice, how these factors are valued in the field, and what differences in perspectives exist between IDT managers and non-managers. For phase 1…

  8. New method to design stellarator coils without the winding surface

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-01-01

    Finding an easy-to-build coil set has been a critical issue in stellarator design for decades. Conventional approaches assume a toroidal ‘winding’ surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that includes both physical requirements and engineering constraints is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Applications to a simple stellarator configuration, W7-X and LHD vacuum fields are presented.

  9. Re-design of apple pia packaging using quality function deployment method

    NASA Astrophysics Data System (ADS)

    Pulungan, M. H.; Nadira, N.; Dewi, I. A.

    2018-03-01

    This study aimed to identify the attributes for premium apple pia packaging, to determine the technical responses to be carried out by the Permata Agro Mandiri Small and Medium Enterprise (SME), and to design a new apple pia packaging acceptable to the SME. The Quality Function Deployment (QFD) method, which here consisted of seven stages of data analysis, was employed to improve the apple pia packaging design. The results indicated that the 'WHATs' attributes required by the customers include the graphic design, dimensions, capacity, shape, strength, and resistance of the packaging. The technical responses to be implemented by the SME were as follows: attractive visual packaging designs, attractive colors, clear images and information, suitable packaging size dimensions, a larger-capacity packaging (more product content), ergonomic premium packaging, not easily torn, and impact-resistant packaging materials. The findings further confirmed that the premium apple pia packaging design accepted by the SME was the one with a capacity of ten apple pia, or 200 g weight, and a rectangular (beam-shaped) form. The packaging material used was a duplex carton with 400 grammage (g/m2); the outer part of the packaging was coated with plastic and the inside was reinforced with duplex carton. The acceptable packaging dimension was 30 cm x 5 cm x 3 cm (L x W x H) with a mix of black and yellow in the graphic design.

  10. An Ejector Air Intake Design Method for a Novel Rocket-Based Combined-Cycle Rocket Nozzle

    NASA Astrophysics Data System (ADS)

    Waung, Timothy S.

    Rocket-based combined-cycle (RBCC) vehicles have the potential to reduce launch costs through the use of several different air-breathing engine cycles, which reduce fuel consumption. The rocket-ejector cycle, in which air is entrained into an ejector section by the rocket exhaust, is used at flight speeds below Mach 2. This thesis develops a design method for an air intake geometry around a novel RBCC rocket nozzle design for the rocket-ejector engine cycle. The design method consists of a geometry creation step, in which a three-dimensional intake geometry is generated, and a simple flow analysis step, which predicts the air intake mass flow rate. The air intake geometry is created using the rocket nozzle geometry and eight primary input parameters, selected to give the user significant control over the air intake shape. The flow analysis step uses an inviscid panel method and an integral boundary layer method to estimate the air mass flow rate through the intake geometry. The intake mass flow rate is used as a performance metric since it directly affects the amount of thrust a rocket-ejector can produce. The design method results for the air intake, evaluated at several points along the subsonic portion of the Ariane 4 flight profile, are found to underpredict the mass flow rate by up to 8.6% compared to three-dimensional computational fluid dynamics simulations of the same air intake.

  11. Inventing and improving ribozyme function: rational design versus iterative selection methods

    NASA Technical Reports Server (NTRS)

    Breaker, R. R.; Joyce, G. F.

    1994-01-01

    Two major strategies for generating novel biological catalysts exist. One relies on our knowledge of biopolymer structure and function to aid in the 'rational design' of new enzymes. The other, often called 'irrational design', aims to generate new catalysts, in the absence of detailed physicochemical knowledge, by using selection methods to search a library of molecules for functional variants. Both strategies have been applied, with considerable success, to the remodeling of existing ribozymes and the development of ribozymes with novel catalytic function. The two strategies are by no means mutually exclusive, and are best applied in a complementary fashion to obtain ribozymes with the desired catalytic properties.
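    The 'irrational design' strategy can be illustrated with a toy in silico selection loop: score a random library against a target activity, keep the fittest variants, and amplify them with mutation. The motif, fitness function, and mutation rate below are illustrative assumptions, not a model of real ribozyme chemistry.

```python
import random

random.seed(0)
TARGET = "GGAUCC"   # hypothetical functional motif; fitness = matches to it
ALPHABET = "ACGU"

def fitness(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

# Start from a random library, then iterate selection + amplification with mutation.
pool = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
        for _ in range(200)]
for generation in range(15):
    pool.sort(key=fitness, reverse=True)
    survivors = pool[:20]                       # select the most active variants
    pool = [mutate(random.choice(survivors)) for _ in range(200)]

best = max(pool, key=fitness)
```

    The loop needs no structural knowledge of why a variant works, which is the essential contrast with the rational-design route described in the abstract.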

  12. Robust design optimization method for centrifugal impellers under surface roughness uncertainties due to blade fouling

    NASA Astrophysics Data System (ADS)

    Ju, Yaping; Zhang, Chuhua

    2016-03-01

    Blade fouling has been proved to be a great threat to compressor performance in the operating stage. Current research on the fouling-induced performance degradation of centrifugal compressors is based mainly on simplified roughness models that do not take into account realistic factors such as the spatial non-uniformity and randomness of the fouling-induced surface roughness. Moreover, little attention has been paid to the robust design optimization of centrifugal compressor impellers with consideration of blade fouling. In this paper, a multi-objective robust design optimization method is developed for centrifugal impellers under surface roughness uncertainties due to blade fouling. A three-dimensional surface roughness map is proposed to describe the non-uniformity and randomness of realistic fouling accumulations on blades. To lower the computational cost of robust design optimization, the support vector regression (SVR) metamodel is combined with the Monte Carlo simulation (MCS) method to conduct the uncertainty analysis of fouled impeller performance. The results show that the critical fouled region associated with impeller performance degradation lies at the leading edge of the blade tip. The SVR metamodel proves to be an efficient and accurate means of detecting impeller performance variations caused by roughness uncertainties. After design optimization, the robust optimal design is found to be more efficient and less sensitive to fouling uncertainties while maintaining good impeller performance in the clean condition. This research proposes a systematic design optimization method for centrifugal compressors with consideration of blade fouling, providing practical guidance for the design of advanced centrifugal compressors.
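    The Monte Carlo side of the uncertainty analysis can be miniaturized with a cheap stand-in performance model: sample non-uniform random roughness for two blade regions, propagate it, and rank the regions by their correlation with efficiency. The efficiency model and roughness statistics are illustrative assumptions, not the paper's CFD-based SVR metamodel.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical performance model: impeller efficiency penalty grows with
# equivalent roughness ks (in microns); coefficients are illustrative.
def efficiency(ks_leading, ks_trailing):
    return 0.90 - 2.0e-4 * ks_leading - 0.5e-4 * ks_trailing

# Non-uniform, random fouling: leading-edge roughness drawn with a larger
# mean and spread than trailing-edge roughness (spatial non-uniformity assumption).
N = 50_000
ks_le = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=N)
ks_te = rng.lognormal(mean=np.log(20.0), sigma=0.3, size=N)

eta = efficiency(ks_le, ks_te)
eta_mean, eta_std = eta.mean(), eta.std()

# Sensitivity ranking: correlation of each roughness field with efficiency
# identifies the critical fouled region (here, the leading edge by construction).
corr_le = np.corrcoef(ks_le, eta)[0, 1]
corr_te = np.corrcoef(ks_te, eta)[0, 1]
```

    In the paper the expensive CFD evaluation is replaced by the SVR metamodel before this sampling step; the statistics extracted (mean, spread, region ranking) are the same.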

  13. A Structure Design Method for Reduction of MRI Acoustic Noise.

    PubMed

    Nan, Jiaofen; Zong, Nannan; Chen, Qiqiang; Zhang, Liangliang; Zheng, Qian; Xia, Yongquan

    2017-01-01

    The acoustic problem of the split gradient coil is one challenge in a Magnetic Resonance Imaging and Linear Accelerator (MRI-LINAC) system. In this paper, we aimed to develop a scheme to reduce the acoustic noise of the split gradient coil. First, a split gradient assembly with an asymmetric configuration was designed to avoid vibration in same resonant modes for the two assembly cylinders. Next, the outer ends of the split main magnet were constructed using horn structures, which can distribute the acoustic field away from patient region. Finally, a finite element method (FEM) was used to quantitatively evaluate the effectiveness of the above acoustic noise reduction scheme. Simulation results found that the noise could be maximally reduced by 6.9 dB and 5.6 dB inside and outside the central gap of the split MRI system, respectively, by increasing the length of one gradient assembly cylinder by 20 cm. The optimized horn length was observed to be 55 cm, which could reduce noise by up to 7.4 dB and 5.4 dB inside and outside the central gap, respectively. The proposed design could effectively reduce the acoustic noise without any influence on the application of other noise reduction methods.

  14. Applying Case-Based Method in Designing Self-Directed Online Instruction: A Formative Research Study

    ERIC Educational Resources Information Center

    Luo, Heng; Koszalka, Tiffany A.; Arnone, Marilyn P.; Choi, Ikseon

    2018-01-01

    This study investigated the case-based method (CBM) instructional-design theory and its application in designing self-directed online instruction. The purpose of this study was to validate and refine the theory for a self-directed online instruction context. Guided by formative research methodology, this study first developed an online tutorial…

  15. Paragogy and Flipped Assessment: Experience of Designing and Running a MOOC on Research Methods

    ERIC Educational Resources Information Center

    Lee, Yenn; Rofe, J. Simon

    2016-01-01

    This study draws on the authors' first-hand experience of designing, developing and delivering (3Ds) a massive open online course (MOOC) entitled "Understanding Research Methods" since 2014, largely but not exclusively for learners in the humanities and social sciences. The greatest challenge facing us was to design an assessment…

  16. Quenching and anisotropy of hydromagnetic turbulent transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karak, Bidya Binay; Brandenburg, Axel; Rheinhardt, Matthias

    2014-11-01

    Hydromagnetic turbulence affects the evolution of large-scale magnetic fields through mean-field effects like turbulent diffusion and the α effect. For stronger fields, these effects are usually suppressed or quenched, and additional anisotropies are introduced. Using different variants of the test-field method, we determine the quenching of the turbulent transport coefficients for the forced Roberts flow, isotropically forced non-helical turbulence, and rotating thermal convection. We see significant quenching only when the mean magnetic field is larger than the equipartition value of the turbulence. Expressing the magnetic field in terms of the equipartition value of the quenched flows, we obtain for the quenching exponents of the turbulent magnetic diffusivity about 1.3, 1.1, and 1.3 for Roberts flow, forced turbulence, and convection, respectively. However, when the magnetic field is expressed in terms of the equipartition value of the unquenched flows, these quenching exponents become about 4, 1.5, and 2.3, respectively. For the α effect, the exponent is about 1.3 for the Roberts flow and 2 for convection in the first case, but 4 and 3, respectively, in the second. In convection, the quenching of turbulent pumping follows the same power law as turbulent diffusion, while for the coefficient describing the Ω×J effect nearly the same quenching exponent is obtained as for α. For forced turbulence, turbulent diffusion proportional to the second derivative along the mean magnetic field is quenched much less, especially for larger values of the magnetic Reynolds number. However, we find that in corresponding axisymmetric mean-field dynamos with dominant toroidal field the quenched diffusion coefficients are the same for the poloidal and toroidal field constituents.
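    A quenching exponent of the kind quoted above is typically extracted by fitting a power law to measured transport coefficients. A minimal sketch with synthetic data, assuming the common fit form eta_t = eta_t0 / (1 + (B/B_eq)^p) (the paper's exact fit function is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic quenching data with a known exponent, plus 1% measurement noise.
p_true, eta0 = 1.3, 1.0
B = np.logspace(0.2, 1.5, 40)          # B/B_eq from ~1.6 to ~32 (quenched regime)
eta = eta0 / (1.0 + B ** p_true) * (1.0 + rng.normal(0, 0.01, B.size))

# Linearize exactly: eta0/eta - 1 = (B/B_eq)^p, so p is the slope of
# log(eta0/eta - 1) versus log(B/B_eq).
slope, intercept = np.polyfit(np.log(B), np.log(eta0 / eta - 1.0), 1)
p_fit = slope
```

    Fitting the linearized form avoids the bias of a naive log-log fit of eta itself, which only follows the B^(-p) asymptote at strong fields.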

  17. Toward Mixed Method Evaluations of Scientific Visualizations and Design Process as an Evaluation Tool.

    PubMed

    Jackson, Bret; Coffey, Dane; Thorson, Lauren; Schroeder, David; Ellingson, Arin M; Nuckley, David J; Keefe, Daniel F

    2012-10-01

    In this position paper we discuss successes and limitations of current evaluation strategies for scientific visualizations and argue for embracing a mixed methods strategy of evaluation. The most novel contribution of the approach that we advocate is a new emphasis on employing design processes as practiced in related fields (e.g., graphic design, illustration, architecture) as a formalized mode of evaluation for data visualizations. To motivate this position we describe a series of recent evaluations of scientific visualization interfaces and computer graphics strategies conducted within our research group. Complementing these more traditional evaluations our visualization research group also regularly employs sketching, critique, and other design methods that have been formalized over years of practice in design fields. Our experience has convinced us that these activities are invaluable, often providing much more detailed evaluative feedback about our visualization systems than that obtained via more traditional user studies and the like. We believe that if design-based evaluation methodologies (e.g., ideation, sketching, critique) can be taught and embraced within the visualization community then these may become one of the most effective future strategies for both formative and summative evaluations.

  18. Toward Mixed Method Evaluations of Scientific Visualizations and Design Process as an Evaluation Tool

    PubMed Central

    Jackson, Bret; Coffey, Dane; Thorson, Lauren; Schroeder, David; Ellingson, Arin M.; Nuckley, David J.

    2017-01-01

    In this position paper we discuss successes and limitations of current evaluation strategies for scientific visualizations and argue for embracing a mixed methods strategy of evaluation. The most novel contribution of the approach that we advocate is a new emphasis on employing design processes as practiced in related fields (e.g., graphic design, illustration, architecture) as a formalized mode of evaluation for data visualizations. To motivate this position we describe a series of recent evaluations of scientific visualization interfaces and computer graphics strategies conducted within our research group. Complementing these more traditional evaluations our visualization research group also regularly employs sketching, critique, and other design methods that have been formalized over years of practice in design fields. Our experience has convinced us that these activities are invaluable, often providing much more detailed evaluative feedback about our visualization systems than that obtained via more traditional user studies and the like. We believe that if design-based evaluation methodologies (e.g., ideation, sketching, critique) can be taught and embraced within the visualization community then these may become one of the most effective future strategies for both formative and summative evaluations. PMID:28944349

  19. Global optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality; however, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, because no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy the inequality. The exhaustive search can be organized so that the entire design space need not be searched for the solution, which somewhat reduces the computational burden. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods. More testing is needed, and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm for finding an initial feasible point is required. Such algorithms need to be developed and evaluated.
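    The idea of organizing the search so that the entire design space need not be examined point by point can be sketched as a multistart scheme: run a cheap local descent from a coarse sweep of start points and keep the best local minimum found. The test function and descent parameters are illustrative assumptions, not the IDESIGN/zooming algorithm itself.

```python
import numpy as np

def f(x):
    # Multimodal objective with several local minima; global minimum near x = -1.43.
    return x**2 / 20.0 + np.sin(x)

def local_descent(f, x0, lr=0.05, steps=400, h=1e-5):
    """Cheap local minimizer: gradient descent with a central-difference gradient."""
    x = float(x0)
    for _ in range(steps):
        grad = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= lr * grad
    return x

# Coarse sweep of the design space: one local descent per start point,
# so only a handful of basins are ever explored in full.
starts = np.linspace(-10.0, 10.0, 21)
candidates = [local_descent(f, x0) for x0 in starts]
x_best = min(candidates, key=f)
```

    In a real implementation the local minimizer would be an SQP solver such as the one in IDESIGN, and the sweep would be replaced by a strategy that progressively shrinks (zooms) the region of interest.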

  20. Accuracy of the domain method for the material derivative approach to shape design sensitivities

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Botkin, M. E.

    1987-01-01

    Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.

  1. A Comparison of Four Linear Equating Methods for the Common-Item Nonequivalent Groups Design Using Simulation Methods. ACT Research Report Series, 2013 (2)

    ERIC Educational Resources Information Center

    Topczewski, Anna; Cui, Zhongmin; Woodruff, David; Chen, Hanwei; Fang, Yu

    2013-01-01

    This paper investigates four methods of linear equating under the common item nonequivalent groups design. Three of the methods are well known: Tucker, Angoff-Levine, and Congeneric-Levine. A fourth method is presented as a variant of the Congeneric-Levine method. Using simulation data generated from the three-parameter logistic IRT model we…
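    All four methods in the paper share the same linear equating form, differing in how the synthetic-population moments are estimated from the common items. A minimal sketch of the shared form (matching standardized deviates) is below; it deliberately omits the common-item synthetic-population machinery that distinguishes the Tucker and Levine variants, and the simulated score distributions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated total scores on form X (group 1) and form Y (group 2).
x = rng.normal(50, 10, size=2000)
y = rng.normal(55, 12, size=2000)

# Linear equating: map x so its standardized deviate matches form Y,
# e(x) = muY + (sdY / sdX) * (x - muX).
def linear_equate(score, x, y):
    return y.mean() + y.std() / x.std() * (score - x.mean())

equated = linear_equate(x, x, y)
```

    By construction the equated scores reproduce form Y's first two moments; the methods compared in the paper differ in which population those moments are taken to represent.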

  2. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection are analyzed and compared. VHDL and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that preferable heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
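    The three fusion rules compared in the record are pixelwise operations that are easy to sketch in software (the platform itself implements them in VHDL on the FPGA); the synthetic frames below are an assumption standing in for registered CCD/thermal imagery.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two registered 8-bit source frames: visible-light and infrared (synthetic here).
visible = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
infrared = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

# Gray-scale weighted averaging (alpha blending of the two channels).
alpha = 0.6
fused_avg = (alpha * visible + (1 - alpha) * infrared).astype(np.uint8)

# Maximum selection: keep the brighter pixel of the two sources.
fused_max = np.maximum(visible, infrared)

# Minimum selection: keep the darker pixel of the two sources.
fused_min = np.minimum(visible, infrared)
```

    Weighted averaging preserves information from both channels at reduced contrast, while max/min selection favors whichever sensor dominates at each pixel, which is why the applicable range of each rule differs.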

  3. The method of complex characteristics for design of transonic blade sections

    NASA Technical Reports Server (NTRS)

    Bledsoe, M. R.

    1986-01-01

    A variety of computational methods were developed to obtain shockless or near-shockless flow past two-dimensional airfoils. The approach used was the method of complex characteristics, which determines smooth solutions to the transonic flow equations based on an input speed distribution. General results from fluid mechanics are presented. An account of the method of complex characteristics is given, including a description of the particular spaces and coordinates, conformal transformations, and numerical procedures that are used. The operation of the computer program COMPRES is presented along with examples of blade sections designed with the code. A user manual with a glossary is included to provide additional information which may be helpful. The computer program in Fortran, including numerous comment cards, is listed.

  4. TNSPackage: A Fortran2003 library designed for tensor network state methods

    NASA Astrophysics Data System (ADS)

    Dong, Shao-Jun; Liu, Wen-Yuan; Wang, Chao; Han, Yongjian; Guo, G.-C.; He, Lixin

    2018-07-01

    Recently, the tensor network states (TNS) methods have proven to be very powerful tools to investigate the strongly correlated many-particle physics in one and two dimensions. The implementation of TNS methods depends heavily on the operations of tensors, including contraction, permutation, reshaping tensors, SVD and so on. Unfortunately, the most popular computer languages for scientific computation, such as Fortran and C/C++ do not have a standard library for such operations, and therefore make the coding of TNS very tedious. We develop a Fortran2003 package that includes all kinds of basic tensor operations designed for TNS. It is user-friendly and flexible for different forms of TNS, and therefore greatly simplifies the coding work for the TNS methods.
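    The basic tensor operations the library wraps (contraction, permutation, reshaping, SVD) can be illustrated with NumPy equivalents; this sketches the operations themselves, not the TNSPackage Fortran API.

```python
import numpy as np

rng = np.random.default_rng(8)

# The workhorse TNS operations, shown on small random tensors.
A = rng.normal(size=(2, 3, 4))
B = rng.normal(size=(4, 5))

# 1. Contraction over a shared index: C[i,j,l] = sum_k A[i,j,k] * B[k,l].
C = np.einsum('ijk,kl->ijl', A, B)

# 2. Permutation (index reordering).
P = np.transpose(C, (2, 0, 1))          # shape (5, 2, 3)

# 3. Reshaping: group indices (i,j) into one row index, matricizing the tensor.
M = C.reshape(2 * 3, 5)

# 4. SVD of the matricized tensor, the basis of bond truncation in TNS methods.
U, S, Vh = np.linalg.svd(M, full_matrices=False)
```

    The point of a dedicated library is that chains of such operations dominate TNS codes, and hand-rolling them in Fortran or C/C++ without a standard tensor type is tedious and error-prone.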

  5. Design of piezoelectric transformer for DC/DC converter with stochastic optimization method

    NASA Astrophysics Data System (ADS)

    Vasic, Dejan; Vido, Lionel

    2016-04-01

    Piezoelectric transformers have been adopted in recent years due to their many inherent advantages, such as safety, freedom from EMI, low housing profile, and high power density. The characteristics of piezoelectric transformers are well known when the load impedance is a pure resistor. However, when piezoelectric transformers are used in AC/DC or DC/DC converters, non-linear electronic circuits are connected before and after the transformer. Consequently, the output load is variable, and due to the output capacitance of the transformer the optimal working point changes. This paper starts from modeling a piezoelectric transformer connected to a full-wave rectifier in order to discuss the design constraints and configuration of the transformer. The optimization method adopted here uses the MOPSO algorithm (Multiple Objective Particle Swarm Optimization). We start with the formulation of the objective function and constraints; the results then give different sizes of the transformer and their characteristics. In other words, the method searches for the transformer size that yields optimal efficiency while remaining suitable for a variable load. Furthermore, size and efficiency are found to be a trade-off. This paper proposes a complete design procedure to find the minimum required size of the PT, illustrated with a given specification. The PT derived from the proposed design procedure can guarantee both good efficiency and a sufficient range for load variation.
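    As a rough illustration of the optimizer family used here, a minimal single-objective particle swarm is sketched below on a toy quadratic stand-in for the transformer objective; the MOPSO variant used in the paper additionally maintains a Pareto archive of non-dominated size/efficiency designs:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Minimise f over box bounds with a basic particle swarm.
        (The paper's MOPSO keeps an archive of Pareto-optimal points
        instead of a single global best.)"""
        lo, hi = np.array(bounds, dtype=float).T
        x = rng.uniform(lo, hi, (n_particles, len(lo)))   # positions
        v = np.zeros_like(x)                              # velocities
        pbest, pval = x.copy(), np.array([f(p) for p in x])
        g = pbest[pval.argmin()]                          # global best
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([f(p) for p in x])
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()]
        return g, pval.min()

    # toy stand-in objective, minimised at (1, -2)
    best, loss = pso(lambda p: (p[0] - 1)**2 + (p[1] + 2)**2,
                     bounds=[(-5, 5), (-5, 5)])
    ```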

  6. Multidisciplinary Design Optimisation (MDO) Methods: Their Synergy with Computer Technology in the Design Process

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1999-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
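    The closing speculation can be made concrete with the cellular-automaton example: every cell updates from only a small local neighbourhood, so all cells can be computed in parallel and performance can scale with the number of processors. A minimal elementary-automaton sketch (illustrative, not from the paper):

    ```python
    # Each cell's next state depends only on itself and its two neighbours,
    # so every cell update is independent and trivially parallelisable.
    def step(cells, rule=110):
        """One synchronous update of an elementary cellular automaton
        with periodic boundary conditions."""
        n = len(cells)
        out = []
        for i in range(n):
            left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
            idx = (left << 2) | (centre << 1) | right   # neighbourhood pattern 0..7
            out.append((rule >> idx) & 1)               # look up the rule bit
        return out

    state = [0] * 16
    state[8] = 1                 # single seed cell
    for _ in range(8):
        state = step(state)      # complex behaviour from simple local rules
    ```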

  7. Method and Tool for Design Process Navigation and Automatic Generation of Simulation Models for Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji

    Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their efforts on creating the simulation models to verify the design. This paper proposes a method and its tool to navigate the designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.

  8. Selection of the initial design for the two-stage continual reassessment method.

    PubMed

    Jia, Xiaoyu; Ivanova, Anastasia; Lee, Shing M

    2017-01-01

    In the two-stage continual reassessment method (CRM), model-based dose escalation is preceded by a pre-specified escalating sequence starting from the lowest dose level. This is appealing to clinicians because it allows a sufficient number of patients to be assigned to each of the lower dose levels before escalating to higher dose levels. While a theoretical framework to build the two-stage CRM has been proposed, the selection of the initial dose-escalating sequence, generally referred to as the initial design, remains arbitrary, done either by specifying cohorts of three patients or by trial and error through extensive simulations. Motivated by a currently ongoing oncology dose-finding study for which clinicians explicitly stated their desire to assign at least one patient to each of the lower dose levels, we propose a systematic approach for selecting the initial design for the two-stage CRM. The initial design obtained using the proposed algorithm yields better operating characteristics than an initial design based on cohorts of three with a calibrated CRM. The proposed algorithm simplifies the selection of the initial design for the two-stage CRM. Moreover, initial designs to be used as a reference for planning a two-stage CRM are provided.
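    The notion of a first-stage initial design can be sketched as follows; this construction is purely illustrative of the shape of such a sequence and is not the calibration algorithm proposed in the paper:

    ```python
    # Illustrative only: build a pre-specified escalating sequence for the
    # first stage of a two-stage CRM. Cohort sizes here are arbitrary; the
    # paper selects them systematically from CRM operating characteristics.
    def initial_design(n_doses, cohort_sizes):
        """cohort_sizes[k] = number of patients assigned at dose level k+1
        before escalating; escalation starts at the lowest dose level."""
        assert len(cohort_sizes) == n_doses
        assert all(c >= 1 for c in cohort_sizes)   # >= 1 patient per level
        sequence = []
        for level, size in enumerate(cohort_sizes, start=1):
            sequence.extend([level] * size)
        return sequence

    # e.g. five dose levels, front-loading the lower doses
    print(initial_design(5, [2, 2, 1, 1, 1]))  # [1, 1, 2, 2, 3, 4, 5]
    ```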

  9. Contribution to an effective design method for stationary reaction-diffusion patterns.

    PubMed

    Szalai, István; Horváth, Judit; De Kepper, Patrick

    2015-06-01

    The British mathematician Alan Turing predicted, in his seminal 1952 publication, that stationary reaction-diffusion patterns could spontaneously develop in reacting chemical or biochemical solutions. The first two clear experimental demonstrations of such a phenomenon were not made before the early 1990s, when new chemical oscillatory reactions and appropriate open spatial chemical reactors had been devised. Yet the number of pattern-producing reactions did not grow until 2009, when we developed an operational design method that takes into account the feeding conditions and other specificities of real open spatial reactors. Since then, on the basis of this method, five additional reactions have been shown to produce stationary reaction-diffusion patterns. To gain a clearer view of where our methodical approach to the patterning capacity of a reaction stands, we performed numerical studies in conditions that mimic true open spatial reactors. In these numerical experiments, we explored the patterning capacity of Rabai's model for pH-driven Landolt-type reactions as a function of experimentally attainable parameters that control the main time and length scales. Because of the straightforward reversible binding of protons to carboxylate-carrying polymer chains, this class of reactions is at the base of the chemistry leading to most of the stationary reaction-diffusion patterns observed to date. We compare our model predictions with experimental observations and comment on agreements and differences.
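    As a self-contained illustration of how such numerical studies are set up, here is a minimal 1-D reaction-diffusion integration using Gray-Scott kinetics, a textbook stand-in rather than the pH-driven Landolt chemistry studied in the paper:

    ```python
    import numpy as np

    # Explicit Euler integration of a two-species reaction-diffusion system
    # (Gray-Scott kinetics) on a periodic 1-D domain. Parameters are generic
    # textbook values, not fitted to any real chemistry.
    def gray_scott(n=200, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.060, dt=1.0):
        rng = np.random.default_rng(1)
        u = np.ones(n)
        v = np.zeros(n)
        mid = slice(n // 2 - 10, n // 2 + 10)
        u[mid] = 0.5
        v[mid] = 0.25 + 0.05 * rng.random(20)   # perturbed seed region
        for _ in range(steps):
            lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u   # periodic Laplacian
            lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
            uvv = u * v * v
            u += dt * (Du * lap_u - uvv + F * (1 - u))       # feed term
            v += dt * (Dv * lap_v + uvv - (F + k) * v)       # removal term
        return u, v

    u, v = gray_scott()
    ```

    The key ingredient mirrored from the abstract is the large diffusivity contrast between the two species (here Du/Dv = 2), which is what allows diffusion to destabilise the homogeneous state.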

  10. Preliminary design methods for fiber reinforced composite structures employing a personal computer

    NASA Technical Reports Server (NTRS)

    Eastlake, C. N.

    1986-01-01

    The objective of this project was to develop a user-friendly interactive computer program to be used as an analytical tool by structural designers. Its intent was to do preliminary, approximate stress analysis to help select or verify sizing choices for composite structural members. The approach to the project was to provide a subroutine which uses classical lamination theory to predict an effective elastic modulus for a laminate of arbitrary material and ply orientation. This effective elastic modulus can then be used in a family of other subroutines which employ the familiar basic structural analysis methods for isotropic materials. This method is simple and convenient to use but only approximate, as is appropriate for a preliminary design tool which will be subsequently verified by more sophisticated analysis. Additional subroutines have been provided to calculate laminate coefficient of thermal expansion and to calculate ply-by-ply strains within a laminate.
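    The core step described above, collapsing a laminate to one effective elastic modulus via classical lamination theory, can be sketched as follows; the material constants are illustrative (roughly a carbon/epoxy ply), not from the report:

    ```python
    import numpy as np

    def Qbar(E1, E2, G12, nu12, theta_deg):
        """Reduced stiffness of one ply rotated by theta (plane stress)."""
        nu21 = nu12 * E2 / E1
        d = 1 - nu12 * nu21
        Q = np.array([[E1 / d, nu12 * E2 / d, 0],
                      [nu12 * E2 / d, E2 / d, 0],
                      [0, 0, G12]])
        c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
        T = np.array([[c*c, s*s, 2*c*s],          # stress transformation
                      [s*s, c*c, -2*c*s],
                      [-c*s, c*s, c*c - s*s]])
        R = np.diag([1, 1, 2])                    # Reuter matrix
        return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

    def effective_Ex(plies, t_ply, E1, E2, G12, nu12):
        """Effective laminate modulus in x from the in-plane A-matrix."""
        h = t_ply * len(plies)
        A = sum(Qbar(E1, E2, G12, nu12, th) * t_ply for th in plies)
        a = np.linalg.inv(A)
        return 1.0 / (h * a[0, 0])

    # quasi-isotropic layup, illustrative carbon/epoxy ply properties
    Ex = effective_Ex([0, 45, -45, 90, 90, -45, 45, 0], t_ply=0.125e-3,
                      E1=140e9, E2=10e9, G12=5e9, nu12=0.3)
    ```

    This effective modulus is exactly the kind of quantity that can then be fed into the familiar isotropic structural-analysis subroutines the report describes.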

  11. An Efficient Variable Screening Method for Effective Surrogate Models for Reliability-Based Design Optimization

    DTIC Science & Technology

    2014-04-01

    surrogate model generation is difficult for high-dimensional problems, due to the curse of dimensionality. Variable screening methods have been...a variable screening model was developed for the quasi-molecular treatment of ion-atom collision [16]. In engineering, a confidence interval of...for high-level radioactive waste [18]. Moreover, the design sensitivity method can be extended to the variable screening method because vital

  12. Systematic process synthesis and design methods for cost effective waste minimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biegler, L.T.; Grossman, I.E.; Westerberg, A.W.

    We present progress on our work to develop synthesis methods that aid in the design of cost-effective approaches to waste minimization. Work continues on combining the approaches of Douglas and coworkers and of Grossmann and coworkers into a hierarchical approach in which bounding information allows it to fit within a mixed-integer programming framework. We continue work on the synthesis of reactors and of flexible separation processes. In the first instance, we strive for methods we can use to reduce the production of potential pollutants, while in the second we look for ways to recover and recycle solvents.

  13. Design and evaluation of a freeform lens by using a method of luminous intensity mapping and a differential equation

    NASA Astrophysics Data System (ADS)

    Essameldin, Mahmoud; Fleischmann, Friedrich; Henning, Thomas; Lang, Walter

    2017-02-01

    Freeform optical systems play an important role in the field of illumination engineering for redistributing light intensity, because of their capability of achieving accurate and efficient results. The authors presented the basic idea of the freeform lens design method at the 117th annual meeting of the German Society of Applied Optics (DGAO Proceedings). Here, we demonstrate the feasibility of the design method by designing and evaluating a freeform lens. The concepts of luminous intensity mapping, energy conservation, and a differential equation are combined in designing a lens for non-imaging applications. The procedures required to design a lens, including the simulations, are explained in detail. The optical performance is investigated by numerical simulation of optical ray tracing. For evaluation, the results are compared with another recently published design method, showing the accurate performance of the proposed method with a reduced number of mapping angles. As part of the tolerance analysis of the fabrication processes, the influence of light source misalignments (translation and orientation) on the beam-shaping performance is presented. Finally, the importance of considering the extended light source while designing a freeform lens using the proposed method is discussed.
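    The luminous-intensity-mapping idea can be sketched in one dimension by matching the cumulative energies of the source and target distributions. This simplified planar version (no solid-angle weighting; both distributions invented) only illustrates the mapping step, not the paper's surface-construction differential equation:

    ```python
    import numpy as np

    def intensity_mapping(theta, I_source, phi, I_target):
        """Ray mapping theta -> phi from energy conservation: each source
        angle is sent to the target angle with equal cumulative energy."""
        cum_s = np.cumsum(I_source)
        cum_s = cum_s / cum_s[-1]          # normalised cumulative source energy
        cum_t = np.cumsum(I_target)
        cum_t = cum_t / cum_t[-1]          # normalised cumulative target energy
        return np.interp(cum_s, cum_t, phi)

    theta = np.linspace(0, np.pi / 2, 91)
    lambertian = np.cos(theta)             # Lambertian-like source intensity
    phi = np.linspace(0, np.pi / 3, 91)
    uniform = np.ones_like(phi)            # desired flat-top output
    mapped = intensity_mapping(theta, lambertian, phi, uniform)
    ```

    A lens surface is then constructed so that a ray leaving the source at `theta[i]` exits toward `mapped[i]`; the paper's contribution is doing this accurately with a reduced number of such mapping angles.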

  14. Parametric design and analysis on the landing gear of a planet lander using the response surface method

    NASA Astrophysics Data System (ADS)

    Zheng, Guang; Nie, Hong; Luo, Min; Chen, Jinbao; Man, Jianfeng; Chen, Chuanzhi; Lee, Heow Pueh

    2018-07-01

    The purpose of this paper is to obtain the relation between design parameters and landing responses, so that the configuration of the landing gear of a planet lander can be designed quickly. To achieve this, parametric studies on the landing gear are carried out using the response surface method (RSM), based on a single-landing-gear landing model validated by experimental results. According to the design of experiment (DOE) results for the landing model, the RS (response surface) functions of the three crucial landing responses are obtained, and a sensitivity analysis (SA) of the corresponding parameters is performed. In addition, two multi-objective optimization designs of the landing gear are carried out. The analysis results show that the RS model performs well for the landing response design process, with a minimum fitting accuracy of 98.99%. The most sensitive parameters for the three landing responses are the design size of the buffers, the strut friction, and the diameter of the bending beam. Moreover, good agreement between the simulation-model and RS-model results is obtained in the two optimized designs, which shows that the RS model coupled with the FE (finite element) method is an efficient way to obtain the design configuration of the landing gear.
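    The RSM step, fitting a quadratic response surface to DOE results by least squares, can be sketched as follows with synthetic stand-in data (not the lander's design variables or responses):

    ```python
    import numpy as np

    # Fit y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    # to DOE results; the fitted polynomial is the "RS function".
    def fit_quadratic_rs(X, y):
        x1, x2 = X[:, 0], X[:, 1]
        A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def predict(coef, x1, x2):
        return coef @ np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

    # synthetic DOE: 3x3 full factorial in coded units
    pts = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], float)
    true_response = lambda x1, x2: 4 + 2*x1 - x2 + 0.5*x1*x2 + 3*x1**2
    y = np.array([true_response(a, b) for a, b in pts])
    coef = fit_quadratic_rs(pts, y)
    ```

    Once fitted, the RS function replaces the expensive FE landing simulation inside sensitivity analysis and multi-objective optimization loops.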

  15. New method to design stellarator coils without the winding surface

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...

    2017-11-06

    Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. Conventional approaches assume a toroidal 'winding' surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that includes both physical requirements and engineering constraints is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Furthermore, applications to a simple stellarator configuration and to the W7-X and LHD vacuum fields are presented.
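    The curve representation and field evaluation behind such a method can be sketched as follows: each coil as a truncated Fourier series in space, and its field from a discretised Biot-Savart law, checked here against the analytic field at the centre of a circular loop. This illustrates the general idea only, not the FOCUS implementation:

    ```python
    import numpy as np

    def fourier_curve(cos_xyz, sin_xyz, n=256):
        """Closed 3-D curve x(t) = sum_m [c_m cos(mt) + s_m sin(mt)],
        sampled at n points; rows of the coefficient arrays are x, y, z."""
        t = np.linspace(0, 2 * np.pi, n, endpoint=False)
        m = np.arange(cos_xyz.shape[1])
        basis_c = np.cos(np.outer(t, m))
        basis_s = np.sin(np.outer(t, m))
        return basis_c @ cos_xyz.T + basis_s @ sin_xyz.T   # (n, 3)

    def biot_savart(curve, current, point, mu0=4e-7 * np.pi):
        """B at `point` from a closed polygonal approximation of the coil."""
        seg = np.roll(curve, -1, axis=0) - curve           # dl per segment
        mid = 0.5 * (np.roll(curve, -1, axis=0) + curve)   # segment midpoints
        r = point - mid
        r3 = np.linalg.norm(r, axis=1, keepdims=True) ** 3
        return mu0 * current / (4 * np.pi) * np.sum(np.cross(seg, r) / r3, axis=0)

    # unit circular coil in the xy-plane: x = cos t, y = sin t
    cos_xyz = np.array([[0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
    sin_xyz = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
    coil = fourier_curve(cos_xyz, sin_xyz)
    B = biot_savart(coil, current=1.0, point=np.array([0.0, 0.0, 0.0]))
    ```

    In an optimization loop, the Fourier coefficients and currents would be the free parameters, with a target function built from field errors on the plasma boundary plus engineering penalties.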

  17. Using design methods to provide the care that people want and need.

    PubMed

    Erwin, Kim; Krishnan, Jerry A

    2016-01-01

    Kim Erwin is an Assistant Professor at IIT Institute of Design and trained in user-centered design methods, which put people at the center of any problem space so as to develop solutions that better fit their everyday lives, activities and context. Her expertise is in making complex information easier to understand and use. Her research targets communication tools and methods for collaborative knowledge construction built through shared experiences. Her book, Communicating the New: Methods to shape and accelerate innovation focuses on helping teams explore, build and diffuse critical knowledge inside organizations. Jerry Krishnan is a Professor of Medicine and Public Health, and Associate Vice President for Population Health Sciences at the University of Illinois Hospital & Health Sciences System. He pioneered the use of Analytic Hierarchy Process to elicit the expressed needs of stakeholders for research. He previously served as Chair of the US FDA Pulmonary and Allergy Drugs Advisory Committee and is a Principal Investigator in NIH and Patient Centered Outcomes Research Institute (PCORI)-funded research consortia. He chairs the US National Heart, Lung, and Blood Institute (NHLBI) Clinical Trials review committee and the PCORI Improving Healthcare Systems merit review panel.

  18. A Simulation Modeling Approach Method Focused on the Refrigerated Warehouses Using Design of Experiment

    NASA Astrophysics Data System (ADS)

    Cho, G. S.

    2017-09-01

    For performance optimization of refrigerated warehouses, design parameters are selected from physical parameters, such as the number of equipment units and aisles and the speeds of the forklifts, for ease of modification. This paper provides a comprehensive framework for the system design of refrigerated warehouses. We propose a modeling approach that aims at simulation optimization so as to meet the required design specifications, using design of experiments (DOE), and we analyze the simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of refrigerated warehouse operations.
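    The DOE step can be sketched as a full-factorial enumeration over the kinds of physical parameters mentioned above; the factor names and levels below are invented for illustration:

    ```python
    from itertools import product

    # Enumerate every combination of factor levels; each resulting dict is
    # one scenario the warehouse simulation model would be run against.
    factors = {
        "num_forklifts": [2, 4, 6],
        "num_aisles": [8, 12],
        "forklift_speed_mps": [1.5, 2.0],
    }
    names = list(factors)
    design = [dict(zip(names, combo)) for combo in product(*factors.values())]
    print(len(design))  # 3 * 2 * 2 = 12 simulation runs
    ```

    Fractional-factorial or response-surface designs would reduce the run count when the full factorial grows too large.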

  19. Understanding Design Tradeoffs for Health Technologies: A Mixed-Methods Approach

    PubMed Central

    O’Leary, Katie; Eschler, Jordan; Kendall, Logan; Vizer, Lisa M.; Ralston, James D.; Pratt, Wanda

    2017-01-01

    We introduce a mixed-methods approach for determining how people weigh tradeoffs in values related to health and technologies for health self-management. Our approach combines interviews with Q-methodology, a method from psychology uniquely suited to quantifying opinions. We derive the framework for structured data collection and analysis for the Q-methodology from theories of self-management of chronic illness and technology adoption. To illustrate the power of this new approach, we used it in a field study of nine older adults with type 2 diabetes, and nine mothers of children with asthma. Our mixed-methods approach provides three key advantages for health design science in HCI: (1) it provides a structured health sciences theoretical framework to guide data collection and analysis; (2) it enhances the coding of unstructured data with statistical patterns of polarizing and consensus views; and (3) it empowers participants to actively weigh competing values that are most personally significant to them. PMID:28804794

  20. [Studying on purification technology of Resina Draconis phenol extracts based on design space method].

    PubMed

    Zhang, Jian; Zhang, Xin; Bi, Yu-An; Xu, Gui-Hong; Huang, Wen-Zhe; Wang, Zhen-Zhong; Xiao, Wei

    2017-09-01

    The "design space" method was used to optimize the purification process of Resina Draconis phenol extracts under the concept of "quality by design" (QbD). The content and transfer rate of laurin B and 7,4'-dihydroxyflavone and the yield of extract were selected as the critical quality attributes (CQAs). A Plackett-Burman design showed that the critical process parameters (CPPs) were the concentration of alkali, the amount of alkali, and the temperature of alkali dissolution. A Box-Behnken design was then used to establish the mathematical model between the CQAs and CPPs. The analysis of variance showed that the P values of the five models were less than 0.05 and the lack-of-fit values were all greater than 0.05, indicating that the models could describe the relationship between the CQAs and CPPs well. Finally, control limits for the five indicators above (content and transfer rate of laurin B and 7,4'-dihydroxyflavone, as well as the extract yield) were set, and the probability-based design space was calculated by Monte Carlo simulation and verified. The results of the design space validation showed that the optimized purification method can ensure the stability of the refining process for Resina Draconis phenol extracts, which should help to improve the quality uniformity between batches of phenol extracts and provide data support for production automation control. Copyright© by the Chinese Pharmaceutical Association.
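    The probability-based design-space calculation can be sketched as follows: given a fitted response model with residual noise, Monte Carlo sampling estimates, at each CPP setting, the probability that all quality limits are met. The model, limits, and noise levels below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def p_in_spec(x1, x2, n=2000):
        """Estimated P(yield >= 80 and purity >= 95) at CPP setting (x1, x2),
        using an invented fitted model plus residual noise."""
        yield_ = 70 + 8 * x1 + 4 * x2 + rng.normal(0, 2.0, n)
        purity = 92 + 4 * x1 - x2 + rng.normal(0, 1.5, n)
        return np.mean((yield_ >= 80) & (purity >= 95))

    # design space = region of CPP settings where the probability of meeting
    # every quality limit exceeds a chosen threshold (0.9 here)
    grid = [(x1, x2, p_in_spec(x1, x2))
            for x1 in np.linspace(0, 2, 5) for x2 in np.linspace(0, 2, 5)]
    design_space = [(x1, x2) for x1, x2, p in grid if p >= 0.9]
    ```

    Operating anywhere inside this region keeps all CQAs within their limits with high probability, which is what makes the refined process robust between batches.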