A Design Study of Co-Splitting as Situated in the Equipartitioning Learning Trajectory
ERIC Educational Resources Information Center
Corley, Andrew Kent
2013-01-01
The equipartitioning learning trajectory (Confrey, Maloney, Nguyen, Mojica & Myers, 2009) has been hypothesized and the proficiency levels have been validated through much prior work. This study solidifies understanding of the upper level of co-splitting, which has been redefined through further clinical interview work (Corley, Confrey &…
Equipartitioning in a high current proton linac
Young, L.M.
1997-08-01
The code PARMILA simulates the beam transmission through the Accelerator for the Production of Tritium (APT) linac. The beam is equipartitioned when the longitudinal and transverse temperatures are equal. This paper explores the consequence of equipartitioning in the APT linac. The simulations begin with a beam that starts at the ion-source plasma surface. PARMILA tracks the particles from the RFQ exit through the 1.7-GeV linac. This paper compares two focusing schemes. One scheme uses mostly equal strength quadrupoles. The equipartitioning scheme uses weaker focusing in the high-energy portion of the linac. The RMS beam size with the equipartitioning scheme is larger, but the relative size of the halo is less than in the equal-strength design.
The two Faces of Equipartition
NASA Astrophysics Data System (ADS)
Sanchez-Sesma, F. J.; Perton, M.; Rodriguez-Castellanos, A.; Campillo, M.; Weaver, R. L.; Rodriguez, M.; Prieto, G.; Luzon, F.; McGarr, A.
2008-12-01
Equipartition is good. Beyond its philosophical implications, in many instances of statistical physics it implies that the available kinetic and potential elastic energy, in phase space, is distributed in the same fixed proportions among the possible "states". There are at least two distinct and complementary descriptions of such states in a diffuse elastic wave field u(r,t). One asserts that u may be represented as an incoherent isotropic superposition of incident plane waves of different polarizations. Each type of wave has an appropriate share of the available energy. This definition, introduced by Weaver, is similar to the room acoustics notion of a diffuse field, and it suffices to permit prediction of field correlations. The other description assumes that the degrees of freedom of the system, in this case, the kinetic energy densities, are all incoherently excited with equal expected amplitude. This definition, introduced by Maxwell, is also familiar from room acoustics using the normal modes of vibration within an arbitrarily large body. Usually, to establish whether an elastic field is diffuse and equipartitioned, only the first description has been applied, which requires the separation of dilatational and shear waves using carefully designed experiments. When the medium is bounded by an interface, waves of other modes, for example Rayleigh waves, complicate the measurement of these energies. As a consequence, it can be advantageous to use the second description. Moreover, each spatial component of the energy densities is linked, when an elastic field is diffuse and equipartitioned, to the component of the imaginary part of the Green function at the source. Accordingly, one can use the second description to retrieve the Green function and obtain more information about the medium. The equivalence between the two descriptions of equipartition is given for an infinite space and extended to the case of a half-space. These two descriptions are equivalent thanks to the
Equipartition Principle for Internal Coordinate Molecular Dynamics.
Jain, Abhinandan; Park, In-Hee; Vaidehi, Nagarajan
2012-08-14
The principle of equipartition of (kinetic) energy for all-atom Cartesian molecular dynamics states that each momentum phase space coordinate on the average has ½kT of kinetic energy in a canonical ensemble. This principle is used in molecular dynamics simulations to initialize velocities, and to calculate statistical properties such as entropy. Internal coordinate molecular dynamics (ICMD) models differ from Cartesian models in that the overall kinetic energy depends on the generalized coordinates and includes cross-terms. Due to this coupled structure, no such equipartition principle holds for ICMD models. In this paper we introduce non-canonical modal coordinates to recover some of the structural simplicity of Cartesian models and develop a new equipartition principle for ICMD models. We derive low-order recursive computational algorithms for transforming between the modal and physical coordinates. The equipartition principle in modal coordinates provides a rigorous method for initializing velocities in ICMD simulations thus replacing the ad hoc methods used until now. It also sets the basis for calculating conformational entropy using internal coordinates.
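The per-momentum-coordinate ½kT statement is exactly what standard Maxwell-Boltzmann velocity initialization relies on in Cartesian MD. A minimal sketch (the argon-like mass and the temperature are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
kB = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0              # target temperature, K
N = 100_000            # number of atoms
m = 6.63e-26           # kg, roughly an argon atom (illustrative)

# Each Cartesian momentum coordinate is Gaussian with variance m*kB*T,
# so each carries kB*T/2 of kinetic energy on average.
v = rng.normal(0.0, np.sqrt(kB * T / m), size=(N, 3))

ke_per_dof = 0.5 * m * (v ** 2).mean(axis=0)   # one value per x, y, z
```

In internal coordinates the kinetic energy metric couples the generalized velocities, so this simple per-coordinate check fails there; that coupling is what the paper's non-canonical modal coordinates are introduced to undo.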
ERIC Educational Resources Information Center
Confrey, Jere; Maloney, Alan
2015-01-01
Design research studies provide significant opportunities to study new innovations and approaches and how they affect the forms of learning in complex classroom ecologies. This paper reports on a two-week long design research study with twelve 2nd through 4th graders using curricular materials and a tablet-based diagnostic assessment system, both…
Transport Barriers and Turbulent Equipartition
NASA Astrophysics Data System (ADS)
Volker, Naulin; Jonas, Nycander; Juul, Rasmussen Jens
2000-10-01
Turbulent equipartition and the formation and dynamics of transport barriers in the form of zonal flows are investigated. We consider pressure gradient driven flute modes in an inhomogeneous magnetic field with curvature. Numerical solutions of the model equations on a bounded domain with sources and sinks show that the turbulent fluctuations introduce an equipartition of the relevant Lagrangian invariants by effective mixing. The time averaged equilibrium density and temperature approach the profiles n ~ B and T ~ B^(2/3) predicted by turbulent equipartition. However, below a critical aspect ratio α = L_y/L_x = 3.8 large scale poloidal flows are found to develop. These so-called zonal flows quench the turbulence locally and form barriers for the turbulence flux. These barriers move on the timescale of diffusion. As the turbulence is quenched, the Reynolds stress driving the flows ceases. The transport barrier is then temporarily destroyed, triggering a large transport event. The formation and dynamics of the transport barrier and the related intermittent turbulent flux are investigated.
Observation of equipartition of seismic waves.
Hennino, R; Trégourès, N; Shapiro, N M; Margerin, L; Campillo, M; van Tiggelen, B A; Weaver, R L
2001-04-09
Equipartition is a first principle in wave transport, based on the tendency of multiple scattering to homogenize phase space. We report observations of this principle for seismic waves created by earthquakes in Mexico. We find qualitative agreement with an equipartition model that accounts for mode conversions at the Earth's surface.
Holographic equipartition and the maximization of entropy
NASA Astrophysics Data System (ADS)
Krishna, P. B.; Mathew, Titus K.
2017-09-01
The accelerated expansion of the Universe can be interpreted as a tendency to satisfy holographic equipartition. It can be expressed by a simple law, ΔV = Δt (N_surf − ε N_bulk), where V is the Hubble volume in Planck units, t is the cosmic time in Planck units, and N_surf/bulk is the number of degrees of freedom on the horizon/bulk of the Universe. We show that this holographic equipartition law effectively implies the maximization of entropy. In the cosmological context, a system that obeys the holographic equipartition law behaves as an ordinary macroscopic system that proceeds to an equilibrium state of maximum entropy. We consider the standard ΛCDM model of the Universe and show that it is consistent with the holographic equipartition law. Analyzing the entropy evolution, we find that it also proceeds to an equilibrium state of maximum entropy.
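Spelled out with one common set of definitions (those of Padmanabhan's emergent-gravity proposal; the explicit forms of N_surf, N_bulk and T below are standard in that context but are not stated in this abstract), the law reads:

```latex
\frac{\Delta V}{\Delta t} = N_{\mathrm{surf}} - \epsilon N_{\mathrm{bulk}},
\qquad
N_{\mathrm{surf}} = \frac{4\pi}{L_p^2 H^2},
\qquad
k_B T = \frac{\hbar H}{2\pi},
\qquad
N_{\mathrm{bulk}} = -\epsilon\,\frac{2(\rho + 3p)V}{k_B T},
```

with ε = +1 in an accelerating phase (ρ + 3p < 0). As a consistency check, for pure de Sitter expansion (p = −ρ, ρ = 3H²/(8πL_p²), V = 4π/(3H³)) one finds N_bulk = N_surf, so ΔV/Δt = 0: holographic equipartition is reached exactly, matching the maximum-entropy equilibrium state described in the abstract.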
MODIFIED EQUIPARTITION CALCULATION FOR SUPERNOVA REMNANTS
Arbutina, B.; Urosevic, D.; Andjelic, M. M.; Pavlovic, M. Z.; Vukotic, B.
2012-02-10
Determination of the magnetic field strength in the interstellar medium is one of the more complex tasks of contemporary astrophysics. We can only estimate the order of magnitude of the magnetic field strength by using a few very limited methods. Besides the Zeeman effect and Faraday rotation, the equipartition or minimum-energy calculation is a widespread method for estimating magnetic field strength and energy contained in the magnetic field and cosmic-ray particles by using only the radio synchrotron emission. Despite its approximate character, it remains a useful tool, especially when there are no other data about the magnetic field in a source. In this paper, we give a modified calculation that we think is more appropriate for estimating magnetic field strengths and energetics in supernova remnants (SNRs). We present calculated estimates of the magnetic field strengths for all Galactic SNRs for which the necessary observational data are available. The Web application for calculation of the magnetic field strengths of SNRs is available at http://poincare.matf.bg.ac.rs/~arbo/eqp/.
The modified equipartition calculation for supernova remnants with the spectral index α = 0.5
NASA Astrophysics Data System (ADS)
Urošević, Dejan; Pavlović, Marko Z.; Arbutina, Bojan; Dobardžić, Aleksandra
2015-03-01
Recently, the modified equipartition calculation for supernova remnants (SNRs) was derived by Arbutina et al. (2012). Their formulae can be used for SNRs with spectral indices in the range 0.5 < α < 1. Here, using approximately the same analytical method, we derive equipartition formulae useful for SNRs with spectral index α = 0.5. These formulae represent the next upgrade of the Arbutina et al. (2012) derivation, because among the 30 Galactic SNRs with observational parameters available for the equipartition calculation, 16 have spectral index α = 0.5. For these 16 Galactic SNRs we calculate magnetic field strengths that are approximately 40 per cent higher than those obtained from the Pacholczyk (1970) equipartition and similar to those obtained from the Beck & Krause (2005) calculation.
No energy equipartition in globular clusters
NASA Astrophysics Data System (ADS)
Trenti, Michele; van der Marel, Roeland
2013-11-01
It is widely believed that globular clusters evolve over many two-body relaxation times towards a state of energy equipartition, so that velocity dispersion scales with stellar mass as σ ∝ m^-η with η = 0.5. We show here that this is incorrect, using a suite of direct N-body simulations with a variety of realistic initial mass functions and initial conditions. No simulated system ever reaches a state close to equipartition. Near the centre, the luminous main-sequence stars reach a maximum ηmax ≈ 0.15 ± 0.03. At large times, all radial bins converge on an asymptotic value η∞ ≈ 0.08 ± 0.02. The development of this `partial equipartition' is strikingly similar across our simulations, despite the range of different initial conditions employed. Compact remnants tend to have higher η than main-sequence stars (but still η < 0.5), due to their steeper (evolved) mass function. The presence of an intermediate-mass black hole (IMBH) decreases η, consistent with our previous findings of a quenching of mass segregation under these conditions. All these results can be understood as a consequence of the Spitzer instability for two-component systems, extended by Vishniac to a continuous mass spectrum. Mass segregation (the tendency of heavier stars to sink towards the core) has often been studied observationally, but energy equipartition has not. Due to the advent of high-quality proper motion data sets from the Hubble Space Telescope, it is now possible to measure η for real clusters. Detailed data-model comparisons open up a new observational window on globular cluster dynamics and evolution. A first comparison of our simulations to observations of Omega Cen yields good agreement, supporting the view that globular clusters are not generally in energy equipartition. Modelling techniques that assume equipartition by construction (e.g. multi-mass Michie-King models) are approximate at best.
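The scaling σ ∝ m^-η means η is minus the slope of log σ against log m, which is how it can be extracted from binned kinematics. A minimal synthetic sketch (all numbers illustrative, not from the paper's N-body suite):

```python
import numpy as np

rng = np.random.default_rng(1)
eta_true = 0.08                        # the asymptotic value quoted above
masses = np.linspace(0.2, 0.8, 13)     # stellar mass bins in M_sun (illustrative)
sigma = 10.0 * masses ** (-eta_true)   # km/s, sigma ∝ m^-eta
sigma_obs = sigma * (1.0 + 0.005 * rng.standard_normal(masses.size))  # mock noise

# eta is minus the slope of log(sigma) versus log(m)
slope, _ = np.polyfit(np.log(masses), np.log(sigma_obs), 1)
eta_fit = -slope
```

Recovering η ≈ 0.08 rather than 0.5 from such a fit is what would flag "partial equipartition" in proper-motion data.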
Lack of Energy Equipartition in Globular Clusters
NASA Astrophysics Data System (ADS)
Trenti, Michele
2013-05-01
It is widely believed that globular clusters evolve over many two-body relaxation times toward a state of energy equipartition, so that velocity dispersion scales with stellar mass as σ∝m^{-η} with η=0.5. I will show instead that this is incorrect, using a suite of direct N-body simulations with a variety of realistic initial mass functions and initial conditions. No simulated system ever reaches a state close to equipartition. Near the center, the luminous main-sequence stars reach a maximum η_max ≈ 0.15±0.03. At large times, all radial bins converge on an asymptotic value η_∞ ≈ 0.08±0.02. The development of this "partial equipartition" is strikingly similar across simulations, despite the range of different initial conditions employed. Compact remnants tend to have higher η than main-sequence stars (but still η < 0.5), due to their steeper (evolved) mass function. The presence of an intermediate-mass black hole (IMBH) decreases η, consistent with our previous findings of a quenching of mass segregation under these conditions. All these results can be understood as a consequence of the Spitzer instability for two-component systems, extended by Vishniac to a continuous mass spectrum. Mass segregation (the tendency of heavier stars to sink toward the core) has often been studied observationally, but energy equipartition has not. Due to the advent of high-quality proper motion datasets from the Hubble Space Telescope, it is now possible to measure η for real clusters. Detailed data-model comparisons open up a new observational window on globular cluster dynamics, structure, evolution, initial conditions, and possible IMBHs. A first comparison of my simulations to observations of Omega Cen yields good agreement, supporting the view that globular clusters are not generally in energy equipartition. Modeling techniques that assume equipartition by construction (e.g., multi-mass Michie-King models) are thus approximate at best.
Equipartition theorem and the dynamics of liquids
Levashov, Valentin A.; Egami, Takeshi; Aga, Rachel S; Morris, James R
2008-01-01
In liquids, phonons have a very short lifetime and the total potential energy does not depend linearly on temperature. Thus it may appear that atomic vibrations in liquids cannot be described by the harmonic-oscillator model and that the equipartition theorem for the potential energy is not upheld. In this paper we show that the description of the local atomic dynamics in terms of the atomic-level stresses provides such a description, satisfying the equipartition theorem. To prove this point we carried out molecular-dynamics simulations with several pairwise potentials, including the Lennard-Jones potential, the modified Johnson potential, and the repulsive part of the Johnson potential, at various particle number densities. In all cases studied the total self-energy of the atomic-level stresses followed the (3/2)kBT law. From these results we suggest that the concept of local atomic stresses can provide a description of the thermodynamic properties of glasses and liquids on the basis of harmonic atomistic excitations. An example of application of this approach to the description of the glass transition temperature in metallic glasses is discussed.
Thermodynamic laws and equipartition theorem in relativistic Brownian motion.
Koide, T; Kodama, T
2011-06-01
We extend the stochastic energetics to a relativistic system. The thermodynamic laws and equipartition theorem are discussed for a relativistic Brownian particle and the first and the second law of thermodynamics in this formalism are derived. The relation between the relativistic equipartition relation and the rate of heat transfer is discussed in the relativistic case together with the nature of the noise term.
Turbulent Equipartition Theory of Toroidal Momentum Pinch
T.S. Hahm, P.H. Diamond, O.D. Gurcan, and G. Rewoldt
2008-01-31
The mode-independent part of the magnetic curvature driven turbulent convective (TurCo) pinch of the angular momentum density [Hahm et al., Phys. Plasmas 14, 072302 (2007)], which was originally derived from the gyrokinetic equation, can be interpreted in terms of the turbulent equipartition (TEP) theory. It is shown that the previous results can be obtained from the local conservation of "magnetically weighted angular momentum density," nmiU∥R/B², and its homogenization due to turbulent flows. It is also demonstrated that the magnetic curvature modification of the parallel acceleration in the nonlinear gyrokinetic equation in the laboratory frame, which was shown to be responsible for the TEP part of the TurCo pinch of angular momentum density in the previous work, is closely related to the Coriolis drift coupling to the perturbed electric field. In addition, the origin of the diffusive flux in the rotating frame is highlighted. Finally, it is illustrated that there should be a difference in scalings between the momentum pinch originating from inherently toroidal effects and that coming from other mechanisms which exist in a simpler geometry.
Turbulent equipartition theory of toroidal momentum pincha)
NASA Astrophysics Data System (ADS)
Hahm, T. S.; Diamond, P. H.; Gurcan, O. D.; Rewoldt, G.
2008-05-01
The mode-independent part of the magnetic curvature driven turbulent convective (TurCo) pinch of the angular momentum density [Hahm et al., Phys. Plasmas 14, 072302 (2007)], which was originally derived from the gyrokinetic equation, can be interpreted in terms of the turbulent equipartition (TEP) theory. It is shown that the previous results can be obtained from the local conservation of "magnetically weighted angular momentum density," nmiU∥R/B2, and its homogenization due to turbulent flows. It is also demonstrated that the magnetic curvature modification of the parallel acceleration in the nonlinear gyrokinetic equation in the laboratory frame, which was shown to be responsible for the TEP part of the TurCo pinch of angular momentum density in the previous work, is closely related to the Coriolis drift coupling to the perturbed electric field. In addition, the origin of the diffusive flux in the rotating frame is highlighted. Finally, it is illustrated that there should be a difference in scalings between the momentum pinch originated from inherently toroidal effects and that coming from other mechanisms that exist in a simpler geometry.
A novel look at energy equipartition in globular clusters
NASA Astrophysics Data System (ADS)
Bianchini, P.; van de Ven, G.; Norris, M. A.; Schinnerer, E.; Varri, A. L.
2016-06-01
Two-body interactions play a major role in shaping the structural and dynamical properties of globular clusters (GCs) over their long-term evolution. In particular, GCs evolve towards a state of partial energy equipartition that induces a mass dependence in their kinematics. By using a set of Monte Carlo cluster simulations evolved in quasi-isolation, we show that the stellar mass dependence of the velocity dispersion σ(m) can be described by an exponential function σ2 ∝ exp (-m/meq), with the parameter meq quantifying the degree of partial energy equipartition of the systems. This simple parametrization successfully captures the behaviour of the velocity dispersion at lower as well as higher stellar masses, that is, the regime where the system is expected to approach full equipartition. We find a tight correlation between the degree of equipartition reached by a GC and its dynamical state, indicating that clusters that are more than about 20 core relaxation times old, have reached a maximum degree of equipartition. This equipartition-dynamical state relation can be used as a tool to characterize the relaxation condition of a cluster with a kinematic measure of the meq parameter. Vice versa, the mass dependence of the kinematics can be predicted knowing the relaxation time solely on the basis of photometric measurements. Moreover, any deviations from this tight relation could be used as a probe of a peculiar dynamical history of a cluster. Finally, our novel approach is important for the interpretation of state-of-the-art Hubble Space Telescope proper motion data, for which the mass dependence of kinematics can now be measured, and for the application of modelling techniques which take into consideration multimass components and mass segregation.
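In the exponential parametrization above, log σ² is linear in m with slope −1/m_eq, so m_eq comes out of a straight-line fit to binned kinematics. A minimal sketch (all numbers illustrative, not from the Monte Carlo simulations):

```python
import numpy as np

rng = np.random.default_rng(2)
m_eq_true = 1.5                          # M_sun, illustrative equipartition mass
masses = np.linspace(0.1, 1.8, 18)       # stellar mass bins in M_sun
sigma2 = 25.0 * np.exp(-masses / m_eq_true)   # sigma^2 ∝ exp(-m/m_eq)
sigma2_obs = sigma2 * (1.0 + 0.01 * rng.standard_normal(masses.size))  # mock noise

# log(sigma^2) is linear in m with slope -1/m_eq
slope, _ = np.polyfit(masses, np.log(sigma2_obs), 1)
m_eq_fit = -1.0 / slope
```

Unlike a single power-law index η, the fitted m_eq remains well behaved across the whole mass range, including the high-mass regime approaching full equipartition, which is the advantage the abstract points to.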
NON-EQUIPARTITION OF ENERGY, MASSES OF NOVA EJECTA, AND TYPE Ia SUPERNOVAE
Shara, Michael M.; Yaron, Ofer; Prialnik, Dina; Kovetz, Attay
2010-04-01
The total masses ejected during classical nova (CN) eruptions are needed to answer two questions with broad astrophysical implications: can accreting white dwarfs be 'pushed over' the Chandrasekhar mass limit to yield type Ia supernovae? Are ultra-luminous red variables a new kind of astrophysical phenomenon, or merely extreme classical novae? We review the methods used to determine nova ejecta masses. Except for the unique case of BT Mon (nova 1939), all nova ejecta mass determinations depend on untested assumptions and multi-parameter modeling. The remarkably simple assumption of equipartition between kinetic and radiated energy (E_kin and E_rad, respectively) in nova ejecta has been invoked as a way around this conundrum for the ultra-luminous red variable in M31. The deduced mass is far larger than that produced by any CN model. Our nova eruption simulations show that radiation and kinetic energy in nova ejecta are very far from being in energy equipartition, with variations of 4 orders of magnitude in the ratio E_kin/E_rad being commonplace. The assumption of equipartition must not be used to deduce nova ejecta masses; any such 'determinations' can be overestimates by a factor of up to 10,000. We data-mined our extensive series of nova simulations to search for correlations that could yield nova ejecta masses. Remarkably, the mass ejected during a nova eruption is dependent only on (and is directly proportional to) E_rad. If we measure the distance to an erupting nova and its bolometric light curve, then E_rad and hence the mass ejected can be directly measured.
Generalized Energy Equipartition in Harmonic Oscillators Driven by Active Baths
NASA Astrophysics Data System (ADS)
Maggi, Claudio; Paoluzzi, Matteo; Pellicciotta, Nicola; Lepore, Alessia; Angelani, Luca; Di Leonardo, Roberto
2014-12-01
We study experimentally and numerically the dynamics of colloidal beads confined by a harmonic potential in a bath of swimming E. coli bacteria. The resulting dynamics is well approximated by a Langevin equation for an overdamped oscillator driven by the combination of a white thermal noise and an exponentially correlated active noise. This scenario leads to a simple generalization of the equipartition theorem resulting in the coexistence of two different effective temperatures that govern dynamics along the flat and the curved directions in the potential landscape.
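A hedged numerical sketch of the scenario described: an overdamped harmonic oscillator driven by white thermal noise plus an exponentially correlated (Ornstein-Uhlenbeck) active noise, for which the generalized equipartition result is ⟨x²⟩ = D_t/μ + D_a/(μ(1+μτ)), i.e. an effective temperature differing from the thermal one (all parameter values illustrative, not the experimental ones):

```python
import numpy as np

rng = np.random.default_rng(4)

mu = 1.0          # k/gamma: trap stiffness over drag
D_t = 1.0         # thermal diffusivity (white noise)
D_a = 1.0         # active diffusivity (colored noise)
tau = 1.0         # correlation time of the active noise
dt, steps, n = 0.005, 8000, 5000        # n independent trajectories

x = np.zeros(n)
a = rng.normal(0.0, np.sqrt(D_a / tau), n)    # start OU noise in its stationary state
x2_samples = []
for i in range(steps):
    # Euler-Maruyama for the OU active noise and for the trapped coordinate
    a += -a / tau * dt + np.sqrt(2.0 * D_a * dt) / tau * rng.standard_normal(n)
    x += (-mu * x + a) * dt + np.sqrt(2.0 * D_t * dt) * rng.standard_normal(n)
    if i > steps // 2:                        # discard the transient
        x2_samples.append((x ** 2).mean())

x2_sim = float(np.mean(x2_samples))
x2_theory = D_t / mu + D_a / (mu * (1.0 + mu * tau))   # generalized equipartition
```

The colored-noise contribution is suppressed by the factor 1/(1+μτ), which is how trap curvature and noise memory combine into a direction-dependent effective temperature.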
NASA Technical Reports Server (NTRS)
Freed, Alan D.
1996-01-01
There are many aspects to consider when designing a Rosenbrock-Wanner-Wolfbrandt (ROW) method for the numerical integration of ordinary differential equations (ODE's) solving initial value problems (IVP's). The process can be simplified by constructing ROW methods around good Runge-Kutta (RK) methods. The formulation of a new, simple, embedded, third-order, ROW method demonstrates this design approach.
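The abstract gives no formulas; as an illustration of the ROW idea only (a one-stage linearly implicit Euler scheme, far simpler than the embedded third-order method the paper formulates), one linear solve with the Jacobian per step buys stability on stiff IVPs that defeats explicit RK methods:

```python
import numpy as np

def row1_step(f, J, y, h, gamma=1.0):
    """One step of the simplest ROW method (linearly implicit Euler):
    solve (I - h*gamma*J(y)) k = f(y), then advance y_new = y + h*k."""
    I = np.eye(y.size)
    k = np.linalg.solve(I - h * gamma * J(y), f(y))
    return y + h * k

# Stiff linear test problem y' = -1000 y; the exact solution decays to 0.
lam = 1000.0
f = lambda y: -lam * y
J = lambda y: np.array([[-lam]])

y = np.array([1.0])
for _ in range(50):
    y = row1_step(f, J, y, h=0.1)   # h*lam = 100: explicit Euler would blow up
```

Each step here multiplies y by 1/(1 + h*lam), so the iterate decays monotonically even at step sizes far beyond the explicit stability limit; higher-order ROW methods keep this property while reusing RK-style stage structure.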
Brightness temperature - obtaining the physical properties of a non-equipartition plasma
NASA Astrophysics Data System (ADS)
Nokhrina, E. E.
2017-06-01
The limit on the intrinsic brightness temperature, attributed to the `Compton catastrophe', has been established to be 10^12 K. A somewhat lower limit, of the order of 10^11.5 K, is implied if we assume that the radiating plasma is in equipartition with the magnetic field - the idea that explained why the observed cores of active galactic nuclei (AGNs) sustained a limit lower than the `Compton catastrophe'. Recent observations with unprecedented high resolution by RadioAstron have revealed a systematic excess over this limit in the observed brightness temperatures. We propose means of estimating the degree of the non-equipartition regime in AGN cores. Coupled with core-shift measurements, the method allows us to independently estimate the magnetic field strength and the particle number density at the core. We show that the ratio of magnetic energy to radiating plasma energy is of the order of 10^-5, which means the flow in the core is dominated by the particle energy. We show that the magnetic field obtained from brightness temperature measurements may be underestimated. For relativistic jets with small viewing angles we propose a non-uniform magnetohydrodynamic model and obtain a magnetic field amplitude about two orders of magnitude higher than that for the uniform model. These magnetic field amplitudes are consistent with the limiting magnetic field suggested by the `magnetically arrested disc' model.
RADIUS CONSTRAINTS AND MINIMAL EQUIPARTITION ENERGY OF RELATIVISTICALLY MOVING SYNCHROTRON SOURCES
Barniol Duran, Rodolfo; Piran, Tsvi; Nakar, Ehud
2013-07-20
A measurement of the synchrotron self-absorption flux and frequency provides tight constraints on the physical size of the source and a robust lower limit on its energy. This lower limit is also a good estimate of the magnetic field and electrons' energy, if the two components are at equipartition. This well-known method was used for decades to study numerous astrophysical sources moving at non-relativistic (Newtonian) speeds. Here, we generalize the Newtonian equipartition theory to sources moving at relativistic speeds including the effect of deviation from spherical symmetry expected in such sources. As in the Newtonian case, minimization of the energy provides an excellent estimate of the emission radius and yields a useful lower limit on the energy. We find that the application of the Newtonian formalism to a relativistic source would yield a smaller emission radius, and would generally yield a larger lower limit on the energy (within the observed region). For sources where the synchrotron-self-Compton component can be identified, the minimization of the total energy is not necessary and we present an unambiguous solution for the parameters of the system.
Kinetic theory of binary particles with unequal mean velocities and non-equipartition energies
NASA Astrophysics Data System (ADS)
Chen, Yanpei; Mei, Yifeng; Wang, Wei
2017-03-01
The hydrodynamic conservation equations and constitutive relations for a binary granular mixture composed of smooth, nearly elastic spheres with non-equipartition energies and different mean velocities are derived. This research aims to build a three-dimensional kinetic theory to characterize the behavior of two species of particles subject to different forces. The standard Enskog method is employed, assuming a Maxwell velocity distribution for each species of particles. The collision components of the stress tensor and the other parameters are calculated from the zeroth- and first-order approximations. Our results demonstrate that three factors, namely the differences between the two granular masses, temperatures and mean velocities, all play important roles in the stress-strain relation of the binary mixture, indicating that the assumption of energy equipartition and the same mean velocity may not be acceptable. The collision frequency and the solid viscosity increase monotonically with each granular temperature. The zeroth-order approximation to the energy dissipation varies greatly with the mean velocities of both species of spheres, reaching its peak value at the maximum of their relative velocity.
Do open star clusters evolve towards energy equipartition?
NASA Astrophysics Data System (ADS)
Spera, Mario; Mapelli, Michela; Jeffries, Robin D.
2016-07-01
We investigate whether open clusters (OCs) tend to energy equipartition, by means of direct N-body simulations with a broken power-law mass function. We find that the simulated OCs become strongly mass segregated, but the local velocity dispersion does not depend on the stellar mass for most of the mass range: the curve of the velocity dispersion as a function of mass is nearly flat even after several half-mass relaxation times, regardless of the adopted stellar evolution recipes and Galactic tidal field model. This result holds both if we start from virialized King models and if we use clumpy sub-virial initial conditions. The velocity dispersion of the most massive stars and stellar remnants tends to be higher than the velocity dispersion of the lighter stars. This trend is particularly evident in simulations without stellar evolution. We interpret this result as a consequence of the strong mass segregation, which leads to Spitzer's instability. Stellar winds delay the onset of the instability. Our simulations strongly support the result that OCs do not attain equipartition, for a wide range of initial conditions.
Accretion in Radiative Equipartition (AiRE) Disks
NASA Astrophysics Data System (ADS)
Yazdi, Yasaman K.; Afshordi, Niayesh
2017-07-01
Standard accretion disk theory predicts that the total pressure in disks at typical (sub-)Eddington accretion rates becomes radiation pressure dominated. However, radiation pressure dominated disks are thermally unstable. Since these disks are observed in approximate steady state over the instability timescale, our accretion models in the radiation-pressure-dominated regime (i.e., inner disk) need to be modified. Here, we present a modification to the Shakura & Sunyaev model, where the radiation pressure is in equipartition with the gas pressure in the inner region. We call these flows accretion in radiative equipartition (AiRE) disks. We introduce the basic features of AiRE disks and show how they modify disk properties such as the Toomre parameter and the central temperature. We then show that the accretion rate of AiRE disks is limited from above and below, by Toomre and nodal sonic point instabilities, respectively. The former leads to a strict upper limit on the mass of supermassive black holes as a function of cosmic time (and spin), while the latter could explain the transition between hard and soft states of X-ray binaries.
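The equipartition condition of the inner region can be sketched by equating the two pressures, P_rad = aT⁴/3 and an ideal-gas P_gas = ρ k_B T/(μ m_p), which gives T = (3 ρ k_B / (μ m_p a))^(1/3) in closed form (the μ and ρ values below are illustrative assumptions, not numbers from the paper):

```python
a_rad = 7.5657e-16     # radiation constant, J m^-3 K^-4
kB = 1.380649e-23      # Boltzmann constant, J/K
m_p = 1.6726e-27       # proton mass, kg
mu = 0.6               # mean molecular weight (illustrative, ionized gas)

def T_equipartition(rho):
    """Temperature at which P_rad = a T^4 / 3 equals P_gas = rho kB T / (mu m_p)."""
    return (3.0 * rho * kB / (mu * m_p * a_rad)) ** (1.0 / 3.0)

rho = 1e-2             # kg/m^3, illustrative inner-disk density
T = T_equipartition(rho)

# sanity check: the two pressures agree at this temperature
P_rad = a_rad * T ** 4 / 3.0
P_gas = rho * kB * T / (mu * m_p)
```

Because T scales only as ρ^(1/3), pinning the radiation pressure to the gas pressure damps the steep T⁴ sensitivity that makes the standard radiation-dominated solution thermally unstable, which is the motivation for the AiRE prescription.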
Turbulent equipartition pinch of toroidal momentum in spherical torus
NASA Astrophysics Data System (ADS)
Hahm, T. S.; Lee, J.; Wang, W. X.; Diamond, P. H.; Choi, G. J.; Na, D. H.; Na, Y. S.; Chung, K. J.; Hwang, Y. S.
2014-12-01
We present a new analytic expression for turbulent equipartition (TEP) pinch of toroidal angular momentum originating from magnetic field inhomogeneity of spherical torus (ST) plasmas. Starting from a conservative modern nonlinear gyrokinetic equation (Hahm et al 1988 Phys. Fluids 31 2670), we derive an expression for pinch to momentum diffusivity ratio without using a usual tokamak approximation of B ∝ 1/R which has been previously employed for TEP momentum pinch derivation in tokamaks (Hahm et al 2007 Phys. Plasmas 14 072302). Our new formula is evaluated for model equilibria of National Spherical Torus eXperiment (NSTX) (Ono et al 2001 Nucl. Fusion 41 1435) and Versatile Experiment Spherical Torus (VEST) (Chung et al 2013 Plasma Sci. Technol. 15 244) plasmas. Our result predicts stronger inward pinch for both cases, as compared to the prediction based on the tokamak formula.
The Generalized Asymptotic Equipartition Property: Necessary and Sufficient Conditions
Harrison, Matthew T.
2011-01-01
Suppose a string X_1^n = (X_1, X_2, …, X_n) generated by a memoryless source (X_n)_{n≥1} with distribution P is to be compressed with distortion no greater than D ≥ 0, using a memoryless random codebook with distribution Q. The compression performance is determined by the “generalized asymptotic equipartition property” (AEP), which states that the probability of finding a D-close match between X_1^n and any given codeword Y_1^n is approximately 2^{−nR(P, Q, D)}, where the rate function R(P, Q, D) can be expressed as an infimum of relative entropies. The main purpose here is to remove various restrictive assumptions on the validity of this result that have appeared in the recent literature. Necessary and sufficient conditions for the generalized AEP are provided in the general setting of abstract alphabets and unbounded distortion measures. All possible distortion levels D ≥ 0 are considered; the source (X_n)_{n≥1} can be stationary and ergodic; and the codebook distribution can have memory. Moreover, the behavior of the matching probability is precisely characterized, even when the generalized AEP is not valid. Natural characterizations of the rate function R(P, Q, D) are established under equally general conditions. PMID:21614133
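A minimal Monte Carlo sketch of the matching probability in the simplest setting: a hypothetical memoryless binary source P and codebook Q with exact matching (D = 0), where the generalized-AEP exponent has a closed form. All parameter values are chosen for illustration only:

```python
import math
import random

# Monte Carlo sketch of the matching probability for a hypothetical
# memoryless binary source P and codebook Q with exact matching (D = 0).

random.seed(0)

p, q, n, trials = 0.3, 0.5, 12, 200000

# For D = 0 the matching probability is (sum_x P(x)Q(x))^n exactly,
# i.e. 2^(-n*R) with R = -log2(sum_x P(x)Q(x)).
overlap = p * q + (1 - p) * (1 - q)
match_prob_exact = overlap ** n
rate = -math.log2(overlap)

hits = 0
for _ in range(trials):
    x = [random.random() < p for _ in range(n)]  # source string
    y = [random.random() < q for _ in range(n)]  # random codeword
    hits += (x == y)

estimate = hits / trials
```

The empirical frequency of exact matches agrees with the 2^{−nR} prediction, with R playing the role of the rate function at D = 0.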
1989-03-01
In this thesis a control systems analysis package is developed using parameter plane methods. The package is interactive: the designer enters parameter values at prompts, guided by a parameter table, and is able to choose values of the parameters which provide a good compromise between cost and dynamic behavior.
Wilson, David G [Tijeras, NM; Robinett, III, Rush D.
2012-02-21
A control system design method and concomitant control system comprising representing a physical apparatus to be controlled as a Hamiltonian system, determining elements of the Hamiltonian system representation which are power generators, power dissipators, and power storage devices, analyzing stability and performance of the Hamiltonian system based on the results of the determining step and determining necessary and sufficient conditions for stability of the Hamiltonian system, creating a stable control system based on the results of the analyzing step, and employing the resulting control system to control the physical apparatus.
1993-08-01
This report considers the desirability of a rotation of a design as a function of its set of planar angles, together with criteria for the symmetry of the design (such as the same set of factor levels for each factor). There is no theoretical problem in obtaining rotations of a design; there are only the practical questions of why and how to rotate a design. The designs considered include star points, which can be represented in a shorthand notation by the permutations of (±1, 0, …, 0), and factorial points, which form a two-level factorial.
On the Equipartition of Kinetic Energy in an Ideal Gas Mixture
ERIC Educational Resources Information Center
Peliti, L.
2007-01-01
A refinement of an argument due to Maxwell for the equipartition of translational kinetic energy in a mixture of ideal gases with different masses is proposed. The argument is elementary, yet it may work as an illustration of the role of symmetry and independence postulates in kinetic theory. (Contains 1 figure.)
Aircraft digital control design methods
NASA Technical Reports Server (NTRS)
Powell, J. D.; Parsons, E.; Tashker, M. G.
1976-01-01
Variations in design methods for aircraft digital flight control are evaluated and compared. The methods fall into two categories: those where the design is done in the continuous domain (or s plane) and those where the design is done in the discrete domain (or z plane). Design method fidelity is evaluated by examining closed-loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps, except the uncompensated s plane design method, which was acceptable only above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
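The gap between continuous-domain and discrete-domain designs at slow sample rates can be sketched by comparing the exact sampled-pole mapping z = e^{sT} with the bilinear (Tustin) approximation. The pole below is a hypothetical example, not taken from the paper's aircraft models:

```python
import cmath

# Toy comparison of two s-to-z mappings for one continuous-domain pole,
# illustrating why discrete approximations degrade at slow sample rates
# (hypothetical pole, not the paper's aircraft dynamics).

def exact_pole(s, T):
    """Exact sampled-pole mapping z = exp(s*T)."""
    return cmath.exp(s * T)

def tustin_pole(s, T):
    """Bilinear (Tustin) approximation z = (1 + sT/2)/(1 - sT/2)."""
    return (1.0 + s * T / 2.0) / (1.0 - s * T / 2.0)

s = complex(-1.0, 3.0)  # a lightly damped continuous pole
errs = {}
for rate in (40.0, 10.0, 2.5):  # sample rates in samples per second
    T = 1.0 / rate
    errs[rate] = abs(exact_pole(s, T) - tustin_pole(s, T))
```

The mapping error grows as the sample rate drops, mirroring the finding that uncompensated continuous-domain designs need faster sampling to remain acceptable.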
D'Silva, S.
1992-01-01
Joy's law states that the line joining the two poles of a bipolar magnetic region (BMR) makes an angle with the latitudinal line, called the tilt, which increases with latitude. If the solar dynamo operates at the bottom of the convection zone and the BMRs on the surface are produced by the fields generated there, then they should obey Joy's law. We give a theoretical model for these tilts, and show that the observations severely constrain the field strength at the bottom of the convection zone to between 60 and 160 kG. For fields stronger than 160 kG, magnetic buoyancy dominates over the Coriolis force and the tilts produced are very small compared to those observed. For fields weaker than 60 kG, the Coriolis force dominates over buoyancy and makes them emerge at very high latitudes, well above the typical sunspot latitudes. Fields above 60 kG are an order of magnitude stronger than the fields that can be in energy equipartition with the velocity fields at the bottom of the convection zone. Such strong fields would severely inhibit dynamo action, and it is not known how a dynamo could produce such a strong field. We propose two mechanisms by which equipartition fields could produce BMRs with the observed tilts: (a) giant cells, if they exist, can dominate over the Coriolis force and drag these equipartition fields in their updraughts; (b) small-scale turbulence can interact with the flux tube and exchange momentum with it, thus suppressing the Coriolis force and making the tubes emerge at the sunspot latitudes. We show that these two mechanisms can make equipartition fields emerge at the sunspot latitudes with the proper tilts, provided their sizes are smaller than a couple of hundred kilometers. We also show that special anchoring mechanisms have to be invoked for equipartition fields of any size to produce BMRs with the observed tilts.
NASA Astrophysics Data System (ADS)
Webb, Jeremy J.; Vesperini, Enrico
2017-01-01
We make use of N-body simulations to determine the relationship between two observable parameters that are used to quantify mass segregation and energy equipartition in star clusters. Mass segregation can be quantified by measuring how the slope of a cluster's stellar mass function α changes with clustercentric distance r, and then calculating δ_α = dα(r)/d ln(r/r_m), where r_m is the cluster's half-mass radius. The degree of energy equipartition in a cluster is quantified by η, which is a measure of how stellar velocity dispersion σ depends on stellar mass m via σ(m) ∝ m^{-η}. Through a suite of N-body star cluster simulations with a range of initial sizes, binary fractions, orbits, black hole retention fractions, and initial mass functions, we present the co-evolution of δ_α and η. We find that measurements of the global η are strongly affected by the radial dependence of σ and mean stellar mass, and that the relationship between η and δ_α depends mainly on the cluster's initial conditions and the tidal field. Within r_m, where these effects are minimized, we find that η and δ_α initially share a linear relationship. However, once the degree of mass segregation increases such that the radial dependence of σ and mean stellar mass becomes a factor within r_m, or the cluster undergoes core collapse, the relationship breaks down. We propose a method for determining η within r_m from an observational measurement of δ_α. In cases where η and δ_α can be measured independently, this new method offers a way of measuring the cluster's dynamical state.
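The measurement of η described here amounts to a log-log fit of σ against m. A minimal sketch on synthetic mock data (the index, normalization, and noise level are assumed for illustration, not taken from the paper's simulations):

```python
import math
import random

# Recovering the equipartition index eta from sigma(m) ~ m^(-eta) by a
# log-log least-squares fit (synthetic mock data with assumed values,
# not the paper's N-body measurements).

random.seed(1)

ETA_TRUE, SIGMA0 = 0.35, 8.0  # assumed index and normalization (km/s)

masses = [0.1 + 0.05 * i for i in range(30)]  # stellar masses (M_sun)
sigmas = [SIGMA0 * m ** (-ETA_TRUE) * (1.0 + 0.01 * random.gauss(0.0, 1.0))
          for m in masses]                    # dispersions with 1% scatter

# The slope of log(sigma) against log(m) is -eta.
xs = [math.log(m) for m in masses]
ys = [math.log(s) for s in sigmas]
npts = len(xs)
xbar = sum(xs) / npts
ybar = sum(ys) / npts
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
eta_fit = -slope
```

With realistic measurement scatter, the fitted index recovers the input value; in real clusters the radial dependence of σ and mean mass bias this global fit, which is why the paper restricts the measurement to within r_m.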
Asymptotic equipartition and long time behavior of solutions of a thin-film equation
NASA Astrophysics Data System (ADS)
Carlen, Eric A.; Ulusoy, Süleyman
We investigate the large-time behavior of classical solutions to the thin-film type equation u_t = −(u u_{xxx})_x. It was shown in previous work of Carrillo and Toscani that for non-negative initial data u_0 that belongs to H^1(R) and also has finite mass and second moment, the strong solutions relax in the L^1(R) norm at an explicit rate to the unique self-similar source-type solution with the same mass. The equation itself is a gradient flow for an energy functional that controls the H^1(R) norm, and so it is natural to expect that one should also have convergence in this norm. Carrillo and Toscani raised this question, but their methods, which use a different Lyapunov functional arising in the theory of the porous medium equation, do not directly address it, since their Lyapunov functional does not involve derivatives of u. Here we show that the solutions do indeed converge in the H^1(R) norm at an explicit, but slow, rate. The key to establishing this convergence is an asymptotic equipartition of the excess energy. Roughly speaking, the energy functional whose dissipation drives the evolution through gradient flow consists of two parts: one involving derivatives of u, and one that does not. We show that these must decay at related rates, due to the asymptotic equipartition, and then use the results of Carrillo and Toscani to control the rate for the part that does not depend on derivatives. From this, one gets a rate on the dissipation of all of the excess energy.
Tsai, V.C.
2010-01-01
Recent derivations have shown that when noise in a physical system has its energy equipartitioned into the modes of the system, there is a convenient relationship between the cross correlation of time-series recorded at two points and the Green's function of the system. Here, we show that even when energy is not fully equipartitioned and modes are allowed to be degenerate, a similar (though less general) property holds for equations with wave equation structure. This property can be used to understand why certain seismic noise correlation measurements are successful despite known degeneracy and lack of equipartition on the Earth.
Modelling the structure of molecular clouds - I. A multiscale energy equipartition
NASA Astrophysics Data System (ADS)
Veltchev, Todor V.; Donkov, Sava; Klessen, Ralf S.
2016-07-01
We present a model for describing the general structure of molecular clouds (MCs) at early evolutionary stages in terms of their mass-size relationship. Sizes are defined through threshold levels at which equipartitions between gravitational, turbulent and thermal energy |W| ≈ f(E_kin + E_th) take place, adopting interdependent scaling relations of velocity dispersion and density and assuming a lognormal density distribution at each scale. Variations of the equipartition coefficient 1 ≤ f ≤ 4 allow for modelling of star-forming regions at scales within the size range of typical MCs (≳4 pc). Best fits are obtained for regions with low or no star formation (Pipe, Polaris) as well as for regions with star-forming activity but a nearly lognormal distribution of column density (Rosette). An additional numerical test of the model suggests its applicability to cloud evolutionary times prior to the formation of the first stars.
Stochastic Methods for Aircraft Design
NASA Technical Reports Server (NTRS)
Pelz, Richard B.; Ogot, Madara
1998-01-01
The global stochastic optimization method simulated annealing (SA) was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed to reach an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex-lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is now the method of choice for optimization problems at Northrop Grumman.
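A minimal simulated-annealing sketch on a rough one-dimensional objective with many local minima, standing in for the numerically generated nonlinear objectives described above (a toy problem with assumed parameters, not the paper's CFD/MDO cases):

```python
import math
import random

# Minimal simulated annealing on a quadratic bowl plus sinusoidal
# ripple, which creates many local minima (toy stand-in for rough,
# numerically generated objective functions).

random.seed(2)

def objective(x):
    # Global minimum at x = 0; the cosine term adds local minima.
    return x * x + 10.0 * (1.0 - math.cos(3.0 * x))

def anneal(x0, t0=15.0, cooling=0.99, steps=3000):
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        cand = x + random.gauss(0.0, 0.5)
        fc = objective(cand)
        # Always accept downhill; accept uphill with Boltzmann probability.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# A few restarts, keeping the best design found.
best_x, best_f = min((anneal(8.0) for _ in range(5)), key=lambda r: r[1])
```

The uphill-acceptance rule lets the search escape the ripple's local minima early on, while the cooling schedule freezes it into a good basin; restarts are a cheap hedge, in the spirit of reducing total objective evaluations.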
Design method of supercavitating pumps
NASA Astrophysics Data System (ADS)
Kulagin, V.; Likhachev, D.; Li, F. C.
2016-05-01
The problem of designing an effective supercavitating (SC) pump is solved, and the optimum load distribution along the radius of the blade is found, taking into account clearance, degree of cavitation development, influence of the finite number of blades, and centrifugal forces. Sufficient accuracy can be obtained using the equivalent flat SC-grid for the design of any SC-mechanism, applying the “grid effect” coefficient and substituting the skewed flow calculated for grids of flat plates with infinite attached cavitation caverns. This article gives the universal design method and provides an example of SC-pump design.
Energy equipartitioning in the classical time-dependent Hartree approximation
NASA Astrophysics Data System (ADS)
Straub, John E.; Karplus, Martin
1991-05-01
In the classical time-dependent Hartree approximation (TDH), the dynamics of a single molecule is approximated by that of a "field" (each field being N "copies" of the molecule which are transparent to one another while interacting with the system via a scaled force). It is shown that when some molecules are represented by a field of copies, while other molecules are represented normally, the average kinetic energy of the system increases linearly with the number of copies and diverges in the limit of large N. Nevertheless, the TDH method with appropriate energy scaling can serve as a useful means of enhancing the configurational sampling for problems involving coupled systems with disparate numbers of degrees of freedom.
The properties of jet in luminous blazars under the equipartition condition
NASA Astrophysics Data System (ADS)
Hu, Wen; Dai, Ben-Zhong; Zeng, Wei; Fan, Zhong-Hui; Zhang, Li
2017-04-01
In this work, we study the physical properties of the high-energy (HE) emission region by modeling the quasi-simultaneous multi-wavelength (MWL) spectral energy distributions (SEDs) of 27 Fermi-LAT detected low-synchrotron-peaked (LSP) blazars. We model the jets' MWL SEDs in the framework of a well-accepted single-zone leptonic model, including synchrotron self-Compton and external Compton (EC) processes, for jets in a state of equipartition between particle and magnetic field energy densities. In the model the GeV γ-ray spectrum is modeled by a combination of two different external Compton-scattered components: (i) EC scattering of photons coming from the disk and broad line region (BLR), and (ii) EC scattering of photons originating from the dusty torus (DT) and BLR. We find that the SEDs can be well reproduced by the equipartition model for the majority of the sources, and the results are in agreement with many recent studies. Our results suggest that SED modelling alone may not provide a significant constraint on the location of the HE emission region if we do not know enough about the physical properties of the external environment.
Two perspectives on equipartition in diffuse elastic fields in three dimensions.
Perton, M; Sánchez-Sesma, F J; Rodríguez-Castellanos, A; Campillo, M; Weaver, R L
2009-09-01
The elastodynamic Green function can be retrieved from the cross correlations of the motions of a diffuse field. To extract the exact Green function, perfect diffuseness of the illuminating field is required. However, the diffuseness of a field relies on the equipartition of energy, which is usually described in terms of the distribution of wave intensity in direction and polarization. In a full three dimensional (3D) elastic space, the transverse and longitudinal waves have energy densities in fixed proportions. On the other hand, there is an alternative point of view that associates equal energies with the independent modes of vibration. These two approaches are equivalent and describe at least two ways in which equipartition occurs. The authors gather theoretical results for diffuse elastic fields in a 3D full-space and extend them to the half-space problem. In that case, the energies undergo conspicuous fluctuations as a function of depth within about one Rayleigh wavelength. The authors derive diffuse energy densities from both approaches and find they are equal. The results derived here are benchmarks, where perfect diffuseness of the illuminating field was assumed. Some practical implications for the normalization of correlations for Green function retrieval arise and they have some bearing for medium imaging.
Phenomenology treatment of magnetohydrodynamic turbulence with non-equipartition and anisotropy
Zhou, Y; Matthaeus, W H
2005-02-07
Magnetohydrodynamic (MHD) turbulence theory, often employed satisfactorily in astrophysical applications, has typically focused on parameter ranges that imply nearly equal values of kinetic and magnetic energies and length scales. However, an MHD flow may have a magnetic Prandtl number far from unity, dissimilar kinetic and magnetic Reynolds numbers, different kinetic and magnetic outer length scales, and strong anisotropy. Here a phenomenology for such ''non-equipartitioned'' MHD flow is discussed. Two conditions are proposed for an MHD flow to transition to strong turbulence, extensions of (1) Taylor's constant flux in an inertial range, and (2) Kolmogorov's scale separation between the large- and small-scale boundaries of an inertial range. For this analysis, detailed information on the turbulence structure is not needed. These two conditions for MHD transition are expected to provide consistent predictions and should be applicable to anisotropic MHD flows after the length scales are replaced by their corresponding perpendicular components. Second, it is stressed that the dynamics and anisotropy of MHD fluctuations are controlled by the relative strength between the straining effects of eddies of similar size and the sweeping action by the large eddies, or the propagation effect of the large-scale magnetic fields, on the small scales; analysis of this balance in principle also requires consideration of non-equipartition effects.
Cosmological model from the holographic equipartition law with a modified Rényi entropy
NASA Astrophysics Data System (ADS)
Komatsu, Nobuyoshi
2017-04-01
Cosmological equations were recently derived by Padmanabhan from the expansion of cosmic space due to the difference between the degrees of freedom on the surface and in the bulk in a region of space. In this study, a modified Rényi entropy is applied to Padmanabhan's `holographic equipartition law', by regarding the Bekenstein-Hawking entropy as a nonextensive Tsallis entropy and using a logarithmic formula of the original Rényi entropy. Consequently, an acceleration equation including an extra driving term (such as a time-varying cosmological term) can be derived in a homogeneous, isotropic, and spatially flat universe. When a specific condition is mathematically satisfied, the extra driving term is found to be constant-like, as if it were a cosmological constant. Interestingly, the order of the constant-like term is naturally consistent with the order of the cosmological constant measured by observations, because the specific condition constrains the value of the constant-like term.
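The entropy substitution described in this abstract can be sketched in formulas (a reading of the abstract with assumed notation, taking k_B = 1 and λ ≡ 1 − q as the nonextensivity parameter; not necessarily the paper's exact expressions):

```latex
% Bekenstein--Hawking entropy regarded as a nonextensive Tsallis entropy,
% then mapped to a Renyi-type entropy via the logarithmic formula.
S_{\mathrm{BH}} = \frac{A}{4 L_{\mathrm{P}}^{2}}, \qquad
S_{R} = \frac{1}{\lambda}\,\ln\!\bigl(1 + \lambda\, S_{\mathrm{BH}}\bigr),
\qquad \lambda \equiv 1 - q .
```

In the limit λ → 0 the Rényi form reduces to S_BH, recovering the standard holographic equipartition law.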
Surface density of spacetime degrees of freedom from equipartition law in theories of gravity
Padmanabhan, T.
2010-06-15
I show that the principle of equipartition, applied to area elements of a surface ∂V which are in equilibrium at the local Davies-Unruh temperature, allows one to determine the surface number density of the microscopic spacetime degrees of freedom in any diffeomorphism invariant theory of gravity. The entropy associated with these degrees of freedom matches with the Wald entropy for the theory. This result also allows one to attribute an entropy density to the spacetime in a natural manner. The field equations of the theory can then be obtained by extremizing this entropy. Moreover, when the microscopic degrees of freedom are in local thermal equilibrium, the spacetime entropy of a bulk region resides on its boundary.
Equipartition of Rotational and Translational Energy in a Dense Granular Gas
NASA Astrophysics Data System (ADS)
Nichol, Kiri; Daniels, Karen E.
2012-01-01
Experiments quantifying the rotational and translational motion of particles in a dense, driven, 2D granular gas floating on an air table reveal that kinetic energy is divided equally between the two translational and one rotational degrees of freedom. This equipartition persists when the particle properties, confining pressure, packing density, or spatial ordering are changed. While the translational velocity distributions are the same for both large and small particles, the angular velocity distributions scale with the particle radius. The probability distributions of all particle velocities have approximately exponential tails. Additionally, we find that the system can be described with a granular Boyle’s law with a van der Waals-like equation of state. These results demonstrate ways in which conventional statistical mechanics can unexpectedly apply to nonequilibrium systems.
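The equipartition statement above can be checked on mock data: the mean kinetic energy per degree of freedom (two translational, one rotational) should agree. The velocities below are synthetic, drawn at a single assumed "granular temperature", so this illustrates the bookkeeping rather than the experiment:

```python
import math
import random

# Synthetic equipartition check: mean kinetic energy per degree of
# freedom (x, y translation and rotation) should all be close to KT/2
# (mock data at one temperature, not the experimental measurements).

random.seed(3)

KT = 2.0                 # assumed granular temperature scale
M, R = 1.0, 0.5          # particle mass and radius (arbitrary units)
I = 0.5 * M * R * R      # moment of inertia of a uniform disc

N = 50000
vx = [random.gauss(0.0, math.sqrt(KT / M)) for _ in range(N)]
vy = [random.gauss(0.0, math.sqrt(KT / M)) for _ in range(N)]
w = [random.gauss(0.0, math.sqrt(KT / I)) for _ in range(N)]

e_x = sum(0.5 * M * v * v for v in vx) / N    # mean energy, x translation
e_y = sum(0.5 * M * v * v for v in vy) / N    # mean energy, y translation
e_rot = sum(0.5 * I * o * o for o in w) / N   # mean energy, rotation
```

Note that the angular-velocity spread enters through I ∝ R², consistent with the observation that angular velocity distributions scale with particle radius while translational distributions do not.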
Experimental design methods for bioengineering applications.
Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri
2016-01-01
Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess in question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
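The simplest of the listed methods, a two-level full factorial design, can be generated in a few lines (the factors and levels below are hypothetical, chosen only for illustration):

```python
from itertools import product

# Minimal generator for a full factorial design: every combination of
# the supplied factor levels (hypothetical bioprocess factors).

def full_factorial(levels_per_factor):
    """Return every combination of factor levels as a list of tuples."""
    return list(product(*levels_per_factor))

# Example: pH, temperature (deg C) and substrate concentration (g/L),
# each at two levels -> 2^3 = 8 runs.
design = full_factorial([(5.0, 7.0), (30, 37), (10, 20)])
```

Fractional factorial and Plackett-Burman designs then amount to selecting structured subsets of these runs when the full 2^k grid is too expensive.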
Review of freeform TIR collimator design methods
NASA Astrophysics Data System (ADS)
Talpur, Taimoor; Herkommer, Alois
2016-04-01
Total internal reflection (TIR) collimators are essential illumination components providing high efficiency and uniformity in a compact geometry. Various illumination design methods have been developed for designing such collimators, including tailoring methods, design via optimization, the mapping and feedback method, and the simultaneous multiple surface (SMS) method. This paper provides an overview of the different methods and compares the performance of the methods along with their advantages and their limitations.
Spacesuit Radiation Shield Design Methods
NASA Technical Reports Server (NTRS)
Wilson, John W.; Anderson, Brooke M.; Cucinotta, Francis A.; Ware, J.; Zeitlin, Cary J.
2006-01-01
Meeting radiation protection requirements during EVA is predominantly an operational issue with some potential considerations for temporary shelter. The issue of spacesuit shielding is mainly guided by the potential of accidental exposure when operational and temporary shelter considerations fail to maintain exposures within operational limits. In this case, very high exposure levels are possible which could result in observable health effects and even be life threatening. Under these assumptions, potential spacesuit radiation exposures have been studied using known historical solar particle events to gain insight on the usefulness of modification of spacesuit design in which the control of skin exposure is a critical design issue and reduction of blood forming organ exposure is desirable. Transition to a new spacesuit design including soft upper-torso and reconfigured life support hardware gives an opportunity to optimize the next generation spacesuit for reduced potential health effects during an accidental exposure.
Directory of Design Support Methods
2005-08-01
This directory catalogues design support methods and tools. Topics covered include comparing a design's resulting unique risk character against either another competing system or a 100-point worst-case model, and tools (such as ADVISOR) for determining the resources required to run one or multiple training programs, the skills and competencies required, and where the money is being spent, so that resources can be allocated for greatest impact.
FEEDBACK DESIGN METHOD REVIEW AND COMPARISON.
ONILLON,E.
1999-03-29
Different methods for feedback design are compared. These include classical proportional-integral-derivative (PID) control, state-variable-based methods like pole placement, the Linear Quadratic Regulator (LQR), H-infinity, and μ-analysis. These methods are then applied to the design and analysis of the RHIC phase and radial loops, yielding a comparison of performance, stability and robustness.
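A minimal discrete PID loop on a first-order plant illustrates the classical method listed first (the gains and plant model are illustrative assumptions, not the RHIC phase and radial loop designs):

```python
# Minimal discrete PID loop driving a first-order plant toward a
# setpoint (illustrative gains and plant, not the RHIC loop designs).

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """Drive a first-order plant dy/dt = (u - y)/tau to the setpoint."""
    tau = 0.5
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv  # PID control law
        prev_err = err
        y += (u - y) / tau * dt  # forward-Euler step of the plant
    return y

final = simulate_pid(kp=2.0, ki=1.0, kd=0.05)
```

The integral term removes the steady-state error, the proportional term sets the speed of response, and the derivative term adds damping; the state-variable and robust methods in the paper trade this hand-tuning for systematic pole placement or optimization.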
Design for validation, based on formal methods
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1990-01-01
Validation of ultra-reliable systems decomposes into two subproblems: (1) quantification of the probability of system failure due to physical failure; (2) establishing that design errors are not present. Methods of design, testing, and analysis of ultra-reliable software are discussed. It is concluded that a design-for-validation approach based on formal methods is needed for the digital flight control systems problem, and also that formal methods will play a major role in the development of future high-reliability digital systems.
Design Methods for Clinical Systems
Blum, B.I.
1986-01-01
This paper presents a brief introduction to the techniques, methods and tools used to implement clinical systems. It begins with a taxonomy of software systems, describes the classic approach to development, provides some guidelines for the planning and management of software projects, and finishes with a guide to further reading. The conclusions are that there is no single right way to develop software, that most decisions are based upon judgment built from experience, and that there are tools that can automate some of the better understood tasks.
Guided Design as a Women's Studies Method.
ERIC Educational Resources Information Center
Trobian, Helen R.
Guided Design has great potential as a teaching/learning method for Women's Studies courses. The Guided Design process is organized around the learner's efforts to come up with solutions to a series of carefully designed, open-ended problems. The problems are selected by the teacher according to the skills and subject matter to be learned. The…
Methods for combinatorial and parallel library design.
Schnur, Dora M; Beno, Brett R; Tebben, Andrew J; Cavallaro, Cullen
2011-01-01
Diversity has historically played a critical role in the design of combinatorial libraries, screening sets and corporate collections for lead discovery. Large library design dominated the field in the 1990s, with methods ranging from purely arbitrary through property-based reagent selection to product-based approaches. In recent years, however, there has been a downward trend in library size. This was due to increased information about the desirable targets gleaned from the genomics revolution and to the ever-growing availability of target protein structures from crystallography and homology modeling. Creation of libraries directed toward families of receptors such as GPCRs, kinases, nuclear hormone receptors, proteases, etc., replaced the generation of libraries based primarily on diversity, while single-target-focused library design has remained an important objective. Concurrently, computing grids and CPU clusters have facilitated the development of structure-based tools that screen hundreds of thousands of molecules. Smaller, "smarter" combinatorial and focused parallel libraries replaced those early unfocused large libraries in the twenty-first-century drug design paradigm. While diversity still plays a role in lead discovery, the focus of current library design methods has shifted to receptor-based methods, scaffold hopping/bio-isostere searching, and a much-needed emphasis on synthetic feasibility. Methods such as "privileged substructures based design" and pharmacophore-based design are still important methods for parallel and small combinatorial library design. This chapter discusses some of the possible design methods and presents examples where they are available.
Product Development by Design Navigation Method
NASA Astrophysics Data System (ADS)
Nakazawa, Hiromu
Manufacturers must be able to develop new products within a specified time period. This paper discusses a method for developing high-performance products from a limited number of experiments, utilizing the concept of “function error”. Unlike conventional methods, where the sequence of design, prototyping and experiment must be repeated several times, the proposed method can determine optimal design values directly from experimental data obtained from the first prototype. The theoretical basis of the method is presented, and its effectiveness is then proven by applying it to the design of an extrusion machine and a CNC lathe.
Mixed Method Designs in Implementation Research
Aarons, Gregory A.; Horwitz, Sarah; Chamberlain, Patricia; Hurlburt, Michael; Landsverk, John
2010-01-01
This paper describes the application of mixed method designs in implementation research in 22 mental health services research studies published in peer-reviewed journals over the last 5 years. Our analyses revealed 7 different structural arrangements of qualitative and quantitative methods, 5 different functions of mixed methods, and 3 different ways of linking quantitative and qualitative data together. Complexity of design was associated with number of aims or objectives, study context, and phase of implementation examined. The findings provide suggestions for the use of mixed method designs in implementation research. PMID:20967495
Mixed method designs in implementation research.
Palinkas, Lawrence A; Aarons, Gregory A; Horwitz, Sarah; Chamberlain, Patricia; Hurlburt, Michael; Landsverk, John
2011-01-01
This paper describes the application of mixed method designs in implementation research in 22 mental health services research studies published in peer-reviewed journals over the last 5 years. Our analyses revealed 7 different structural arrangements of qualitative and quantitative methods, 5 different functions of mixed methods, and 3 different ways of linking quantitative and qualitative data together. Complexity of design was associated with number of aims or objectives, study context, and phase of implementation examined. The findings provide suggestions for the use of mixed method designs in implementation research.
LCR method: road map for passive design
Morris, W.S.
1983-05-01
Choosing a design tool to estimate the performance of passive solar houses is discussed. One technique is the Load Collector Ratio (LCR) method. This method allows the solar designer to get quick performance estimates plus a feeling for the results that would be obtained by taking a different approach. How to use the LCR method and the results to be obtained from using it are discussed.
Applications of a transonic wing design method
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Smith, Leigh A.
1989-01-01
A method for designing wings and airfoils at transonic speeds using a predictor/corrector approach was developed. The procedure iterates between an aerodynamic code, which predicts the flow about a given geometry, and the design module, which compares the calculated and target pressure distributions and modifies the geometry using an algorithm that relates differences in pressure to a change in surface curvature. The modular nature of the design method makes it relatively simple to couple it to any analysis method. The iterative approach allows the design process and aerodynamic analysis to converge in parallel, significantly reducing the time required to reach a final design. Viscous and static aeroelastic effects can also be accounted for during the design or as a post-design correction. Results from several pilot design codes indicated that the method accurately reproduced pressure distributions as well as the coordinates of a given airfoil or wing by modifying an initial contour. The codes were applied to supercritical as well as conventional airfoils, forward- and aft-swept transport wings, and moderate-to-highly swept fighter wings. The design method was found to be robust and efficient, even for cases having fairly strong shocks.
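The predictor/corrector loop described in the abstract above can be sketched in miniature. The "analysis code" below is a stand-in (a one-parameter algebraic model linking a camber parameter to a surface pressure coefficient); the actual method couples a full aerodynamic analysis to the same corrector step, which nudges the geometry in proportion to the pressure mismatch. All numbers are illustrative assumptions.

```python
def analysis(camber):
    """Stand-in 'aerodynamic code': maps a camber parameter to a pressure
    coefficient. A hypothetical linear model, not a real flow solver."""
    return -0.2 - 1.5 * camber

def design_iterate(cp_target, camber=0.0, relax=0.5, tol=1e-8, max_iter=100):
    """Predictor/corrector loop: analyze, compare to target, correct geometry."""
    for _ in range(max_iter):
        cp = analysis(camber)
        dcp = cp_target - cp
        if abs(dcp) < tol:
            break
        # corrector step: pressure difference -> change in surface shape
        camber += relax * (-dcp / 1.5)
    return camber, analysis(camber)

camber, cp = design_iterate(cp_target=-0.8)
```

With the toy model the loop converges geometrically; in the real method the analysis and the design converge in parallel, which is where the time saving comes from.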
Impeller blade design method for centrifugal compressors
NASA Technical Reports Server (NTRS)
Jansen, W.; Kirschner, A. M.
1974-01-01
The design of a centrifugal impeller with blades that are aerodynamically efficient, easy to manufacture, and mechanically sound is discussed. The blade design method described here satisfies the first two criteria and with a judicious choice of certain variables will also satisfy stress considerations. The blade shape is generated by specifying surface velocity distributions and consists of straight-line elements that connect points at hub and shroud. The method may be used to design radially elemented and backward-swept blades. The background, a brief account of the theory, and a sample design are described.
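The straight-line-element construction mentioned above reduces to a linear blend between corresponding hub and shroud points, which is what makes the blade easy to manufacture. A minimal sketch, with made-up hub/shroud coordinates:

```python
def blade_point(hub_pt, shroud_pt, v):
    """Point on a straight-line element at spanwise fraction v (0=hub, 1=shroud)."""
    return tuple((1.0 - v) * h + v * s for h, s in zip(hub_pt, shroud_pt))

# illustrative (x, r, theta) points along hub and shroud contours
hub =    [(0.00, 0.10, 0.0), (0.50, 0.12, 0.1), (1.00, 0.15, 0.3)]
shroud = [(0.00, 0.30, 0.0), (0.50, 0.32, 0.2), (1.00, 0.35, 0.5)]

# ruled blade surface: each element connects one hub point to one shroud point
surface = [[blade_point(h, s, v) for v in (0.0, 0.5, 1.0)]
           for h, s in zip(hub, shroud)]
```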
Model reduction methods for control design
NASA Technical Reports Server (NTRS)
Dunipace, K. R.
1988-01-01
Several different model reduction methods are developed and detailed implementation information is provided for those methods. Command files to implement the model reduction methods in a proprietary control law analysis and design package are presented. A comparison and discussion of the various reduction techniques is included.
NASA Astrophysics Data System (ADS)
Dermer, Charles D.; Yan, Dahai; Zhang, Li; Finke, Justin D.; Lott, Benoit
2015-08-01
Fermi-LAT analyses show that the γ-ray photon spectral indices Γ_γ of a large sample of blazars correlate with the νF_ν peak synchrotron frequency ν_s according to the relation Γ_γ = d − k log ν_s. The same function, with different constants d and k, also describes the relationship between Γ_γ and the peak Compton frequency ν_C. This behavior is derived analytically using an equipartition blazar model with a log-parabola description of the electron energy distribution (EED). In the Thomson regime, k = k_EC = 3b/4 for external Compton (EC) processes and k = k_SSC = 9b/16 for synchrotron self-Compton (SSC) processes, where b is the log-parabola width parameter of the EED. The BL Lac object Mrk 501 is fit with a synchrotron/SSC model given by the log-parabola EED, and is best fit away from equipartition. Corrections are made to the spectral-index diagrams for a low-energy power-law EED and departures from equipartition, as constrained by absolute jet power. Analytic expressions are compared with numerical values derived from self-Compton and EC scattered γ-ray spectra from Lyα broad-line region and IR target photons. The Γ_γ versus ν_s behavior in the model depends strongly on b, with progressively and predictably weaker dependences on γ-ray detection range, variability time, and isotropic γ-ray luminosity. Implications for blazar unification and blazars as ultra-high energy cosmic-ray sources are discussed. Arguments by Ghisellini et al. that the jet power exceeds the accretion luminosity depend on the doubtful assumption that we are viewing at the Doppler angle.
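The quoted relation Γ_γ = d − k log ν_s, with its Thomson-regime slopes k_EC = 3b/4 and k_SSC = 9b/16, is simple enough to encode directly. The offset d is population-dependent; the value used below is purely illustrative.

```python
def photon_index(log10_nu_s, b, d, process="EC"):
    """Gamma-ray photon spectral index from the log-linear relation
    Gamma_gamma = d - k * log10(nu_s), with Thomson-regime slopes
    k_EC = 3b/4 (external Compton) and k_SSC = 9b/16 (synchrotron self-Compton).
    b is the log-parabola width parameter of the electron energy distribution."""
    k = 0.75 * b if process == "EC" else (9.0 / 16.0) * b
    return d - k * log10_nu_s

# illustrative evaluation at log10(nu_s) = 16 with b = 1 and an assumed d
gamma_ec = photon_index(16.0, b=1.0, d=14.0, process="EC")
gamma_ssc = photon_index(16.0, b=1.0, d=14.0, process="SSC")
```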
Mixed Methods Research Designs in Counseling Psychology
ERIC Educational Resources Information Center
Hanson, William E.; Creswell, John W.; Clark, Vicki L. Plano; Petska, Kelly S.; Creswell, David J.
2005-01-01
With the increased popularity of qualitative research, researchers in counseling psychology are expanding their methodologies to include mixed methods designs. These designs involve the collection, analysis, and integration of quantitative and qualitative data in a single or multiphase study. This article presents an overview of mixed methods…
Airbreathing hypersonic vehicle design and analysis methods
NASA Technical Reports Server (NTRS)
Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.
1996-01-01
The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.
Iterative methods for design sensitivity analysis
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Yoon, B. G.
1989-01-01
A numerical method is presented for design sensitivity analysis, using an iterative-method reanalysis of the structure generated by a small perturbation in the design variable; a forward-difference scheme is then employed to obtain the approximate sensitivity. Algorithms are developed for displacement and stress sensitivity, as well as for eigenvalue and eigenvector sensitivity, and the iterative schemes are modified so that the coefficient matrices are constant and therefore decomposed only once.
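The idea above can be shown on a one-degree-of-freedom example: keep the "factorization" of the original stiffness K0 and treat the design perturbation dK as a pseudo-load, iterating u ← K0⁻¹(f − dK·u), then form the forward-difference sensitivity. A scalar spring (K = x, u = f/x, exact du/dx = −f/x²) keeps the algebra transparent; this is a sketch of the scheme, not the paper's implementation.

```python
def reanalysis(K0, dK, f, iters=50):
    """Iterative reanalysis reusing the original operator K0: converges to
    f / (K0 + dK) when |dK / K0| < 1, without 're-decomposing' K0 + dK."""
    u = f / K0                      # "decompose once", reuse thereafter
    for _ in range(iters):
        u = (f - dK * u) / K0       # perturbation moved to the load side
    return u

x, f, h = 2.0, 10.0, 1e-6
u0 = reanalysis(x, 0.0, f)          # displacement at design x
u1 = reanalysis(x, h, f)            # displacement at perturbed design x + h
sens = (u1 - u0) / h                # forward-difference sensitivity
# exact value here is du/dx = -f / x**2 = -2.5
```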
A Method for Designing Conforming Folding Propellers
NASA Technical Reports Server (NTRS)
Litherland, Brandon L.; Patterson, Michael D.; Derlaga, Joseph M.; Borer, Nicholas K.
2017-01-01
As the aviation vehicle design environment expands due to the influx of new technologies, new methods of conceptual design and modeling are required in order to meet the customer's needs. In the case of distributed electric propulsion (DEP), the use of high-lift propellers upstream of the wing leading edge augments lift at low speeds, enabling smaller wings with sufficient takeoff and landing performance. During cruise, however, these devices would normally contribute significant drag if left in a fixed or windmilling arrangement. Therefore, a design that stows the propeller blades is desirable. In this paper, we present a method for designing folding-blade configurations that conform to the nacelle surface when stowed. These folded designs maintain performance nearly identical to their straight, non-folding blade counterparts.
Development of a hydraulic turbine design method
NASA Astrophysics Data System (ADS)
Kassanos, Ioannis; Anagnostopoulos, John; Papantonis, Dimitris
2013-10-01
In this paper a hydraulic turbine parametric design method is presented which is based on the combination of traditional methods and parametric surface modeling techniques. The blade of the turbine runner is described using Bezier surfaces for the definition of the meridional plane as well as the blade angle distribution, and a thickness distribution applied normal to the mean blade surface. In this way, it is possible to define parametrically the whole runner using a relatively small number of design parameters, compared to conventional methods. The above definition is then combined with a commercial CFD software and a stochastic optimization algorithm towards the development of an automated design optimization procedure. The process is demonstrated with the design of a Francis turbine runner.
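Bezier surfaces of the kind mentioned above are built from Bezier curves, which de Casteljau's algorithm evaluates with nothing but repeated linear interpolation. A minimal sketch with an invented meridional hub contour:

```python
def bezier(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] via de Casteljau's
    algorithm: repeatedly lerp adjacent control points until one remains."""
    pts = [tuple(p) for p in ctrl]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# illustrative meridional hub contour, four (z, r) control points
hub = [(0.0, 1.0), (0.4, 1.0), (0.8, 0.6), (1.0, 0.5)]
mid = bezier(hub, 0.5)
```

Because the whole contour is carried by four control points, an optimizer can vary a handful of parameters instead of dozens of raw coordinates, which is the point made in the abstract.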
Preliminary aerothermodynamic design method for hypersonic vehicles
NASA Technical Reports Server (NTRS)
Harloff, G. J.; Petrie, S. L.
1987-01-01
Preliminary design methods are presented for vehicle aerothermodynamics. Predictions are made for Shuttle orbiter, a Mach 6 transport vehicle and a high-speed missile configuration. Rapid and accurate methods are discussed for obtaining aerodynamic coefficients and heat transfer rates for laminar and turbulent flows for vehicles at high angles of attack and hypersonic Mach numbers.
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Multidisciplinary Optimization Methods for Preliminary Design
NASA Technical Reports Server (NTRS)
Korte, J. J.; Weston, R. P.; Zang, T. A.
1997-01-01
An overview of multidisciplinary optimization (MDO) methodology and two applications of this methodology to the preliminary design phase are presented. These applications are being undertaken to improve, develop, validate and demonstrate MDO methods. Each is presented to illustrate different aspects of this methodology. The first application is an MDO preliminary design problem for defining the geometry and structure of an aerospike nozzle of a linear aerospike rocket engine. The second application demonstrates the use of the Framework for Interdisciplinary Design Optimization (FIDO), which is a computational environment system, by solving a preliminary design problem for a High-Speed Civil Transport (HSCT). The two sample problems illustrate the advantages to performing preliminary design with an MDO process.
Analysis Method for Quantifying Vehicle Design Goals
NASA Technical Reports Server (NTRS)
Fimognari, Peter; Eskridge, Richard; Martin, Adam; Lee, Michael
2007-01-01
A document discusses a method for using Design Structure Matrices (DSM), coupled with high-level tools representing important life-cycle parameters, to comprehensively conceptualize a flight/ground space transportation system design by dealing with such variables as performance, up-front costs, downstream operations costs, and reliability. This approach also weighs operational approaches based on their effect on upstream design variables, so that linkages between operations and these upstream variables can be readily, yet defensibly, established. To avoid the large range of problems that have defeated previous methods of dealing with the complex problems of transportation design, and to cut down on the inefficient use of resources, the method identifies those areas that are of sufficient promise to warrant a higher grade of analysis, as well as the linkages at issue between operations and other factors. Ultimately, the method is designed to save resources and time, and allows for the evolution of operable space transportation system technology and of design and conceptual system approach targets.
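One standard DSM operation behind such linkage analysis is finding coupled (mutually dependent) blocks of tasks, i.e. tasks joined by a dependency cycle. A sketch on a made-up four-task matrix (not the document's actual DSM), where dsm[i][j] = 1 means task i needs output from task j:

```python
def coupled_blocks(dsm):
    """Group tasks into coupled blocks using the transitive closure of the
    dependency matrix (Floyd-Warshall style reachability)."""
    n = len(dsm)
    reach = [[bool(dsm[i][j]) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    blocks, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        # tasks that reach i and are reached by i form one iterative block
        block = {i} | {j for j in range(n) if reach[i][j] and reach[j][i]}
        seen |= block
        blocks.append(sorted(block))
    return blocks

# hypothetical tasks: 0 performance, 1 up-front cost, 2 operations cost, 3 reliability
dsm = [[0, 1, 0, 0],
       [1, 0, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 1, 0]]
```

Coupled blocks are the places where design and operations variables must be iterated together rather than solved in sequence.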
Axisymmetric inlet minimum weight design method
NASA Technical Reports Server (NTRS)
Nadell, Shari-Beth
1995-01-01
An analytical method for determining the minimum weight design of an axisymmetric supersonic inlet has been developed. The goal of this method development project was to improve the ability to predict the weight of high-speed inlets in conceptual and preliminary design. The initial model was developed using information that was available from inlet conceptual design tools (e.g., the inlet internal and external geometries and pressure distributions). Stiffened shell construction was assumed. Mass properties were computed by analyzing a parametric cubic curve representation of the inlet geometry. Design loads and stresses were developed at analysis stations along the length of the inlet. The equivalent minimum structural thicknesses for both shell and frame structures required to support the maximum loads produced by various load conditions were then determined. Preliminary results indicated that inlet hammershock pressures produced the critical design load condition for a significant portion of the inlet. By improving the accuracy of inlet weight predictions, the method will improve the fidelity of propulsion and vehicle design studies and increase the accuracy of weight versus cost studies.
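A back-of-envelope version of the station-by-station sizing step reads: at each analysis station, size a thin shell for hoop stress under the local design pressure (hammershock pressure is often critical, per the abstract), floor it at a minimum gauge, and integrate the mass. All numbers below are illustrative assumptions, not values from the NASA model.

```python
import math

def shell_thickness(p, r, sigma_allow, t_min=5e-4):
    """Thin-shell hoop-stress sizing, t = p*r/sigma, floored at a gauge minimum."""
    return max(p * r / sigma_allow, t_min)

def inlet_shell_mass(stations, sigma_allow, rho):
    """stations: list of (dx, radius, design_pressure) along the inlet axis."""
    mass = 0.0
    for dx, r, p in stations:
        t = shell_thickness(p, r, sigma_allow)
        mass += rho * 2.0 * math.pi * r * t * dx   # one ring of shell
    return mass

# hypothetical stations (dx [m], r [m], p [Pa]); aluminum-like material
stations = [(0.5, 0.8, 4.0e5), (0.5, 0.9, 6.0e5), (0.5, 1.0, 3.0e5)]
mass = inlet_shell_mass(stations, sigma_allow=2.0e8, rho=2700.0)
```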
Optimization methods applied to hybrid vehicle design
NASA Technical Reports Server (NTRS)
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
Novel Methods for Electromagnetic Simulation and Design
2016-08-03
AFRL-AFOSR-VA-TR-2016-0272: Novel Methods for Electromagnetic Simulation and Design. Leslie Greengard, New York University, 70 Washington Square S, New… Grant FA9550-10-1-0180, program element 61102F. …electromagnetic scattering in realistic environments involving complex geometry. During the six-year performance period (including a one-year no-cost extension)…
Computer-Aided Drug Design Methods.
Yu, Wenbo; MacKerell, Alexander D
2017-01-01
Computational approaches are useful tools to interpret and guide experiments to expedite the antibiotic drug design process. Structure-based drug design (SBDD) and ligand-based drug design (LBDD) are the two general types of computer-aided drug design (CADD) approaches in existence. SBDD methods analyze macromolecular target 3-dimensional structural information, typically of proteins or RNA, to identify key sites and interactions that are important for their respective biological functions. Such information can then be utilized to design antibiotic drugs that can compete with essential interactions involving the target and thus interrupt the biological pathways essential for survival of the microorganism(s). LBDD methods focus on known antibiotic ligands for a target to establish a relationship between their physiochemical properties and antibiotic activities, referred to as a structure-activity relationship (SAR), information that can be used for optimization of known drugs or guide the design of new drugs with improved activity. In this chapter, standard CADD protocols for both SBDD and LBDD will be presented with a special focus on methodologies and targets routinely studied in our laboratory for antibiotic drug discoveries.
Standardized Radiation Shield Design Methods: 2005 HZETRN
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.
2006-01-01
Research conducted by the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented to capture a well-defined algorithm for the engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code to be available for these early design processes. In this paper, we review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.
MAST Propellant and Delivery System Design Methods
NASA Technical Reports Server (NTRS)
Nadeem, Uzair; Mc Cleskey, Carey M.
2015-01-01
A Mars Aerospace Taxi (MAST) concept and propellant storage and delivery case study is undergoing investigation by NASA's Element Design and Architectural Impact (EDAI) design and analysis forum. The MAST lander concept envisions landing with its ascent propellant storage tanks empty and supplying these reusable Mars landers with propellant that is generated and transferred while on the Mars surface. The report provides an overview of data derived from modeling different methods of propellant line routing (or "lining") and differentiates the resulting design and operations complexity of fluid and gaseous paths based on a given set of fluid sources and destinations. The EDAI team desires a rough-order-of-magnitude algorithm for estimating the lining characteristics (i.e., the plumbing mass and complexity) associated with different numbers of vehicle propellant sources and destinations. This paper explores the feasibility of preparing a mathematically sound algorithm for this purpose, and offers a method for the EDAI team to implement.
Case study: design? Method? Or comprehensive strategy?
Jones, Colin; Lyons, Christina
2004-01-01
As the case study approach gains popularity in nursing research, questions arise with regard to what exactly it is, and where it appears to fit paradigmatically. Is it a method, a design, and are such distinctions important? Colin Jones and Christina Lyons review some of the key issues, with specific emphasis on the use of case study within an interpretivist philosophy.
Financial methods for waterflooding injectate design
Heneman, Helmuth J.; Brady, Patrick V.
2017-08-08
A method of selecting an injectate for recovering liquid hydrocarbons from a reservoir includes designing a plurality of injectates, calculating a net present value of each injectate, and selecting a candidate injectate based on the net present value. For example, the candidate injectate may be selected to maximize the net present value of a waterflooding operation.
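The selection step described in the patent abstract reduces to computing a net present value for each candidate injectate's projected cash flows and taking the argmax. Cash-flow figures and the discount rate below are invented for illustration only.

```python
def npv(cash_flows, rate):
    """Net present value; cash_flows[t] is the net cash flow in year t
    (t = 0 is the up-front cost, entered as a negative number)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# hypothetical candidate injectates with projected cash flows ($MM per year)
candidates = {
    "low-salinity": [-5.0, 3.0, 3.0, 2.0],
    "surfactant":   [-8.0, 4.0, 4.0, 3.0],
    "plain-brine":  [-1.0, 1.0, 1.0, 0.5],
}
best = max(candidates, key=lambda name: npv(candidates[name], rate=0.10))
```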
Statistical Methods in Algorithm Design and Analysis.
ERIC Educational Resources Information Center
Weide, Bruce W.
The use of statistical methods in the design and analysis of discrete algorithms is explored. The introductory chapter contains a literature survey and background material on probability theory. In Chapter 2, probabilistic approximation algorithms are discussed with the goal of exposing and correcting some oversights in previous work. Chapter 3…
An optimisation method for complex product design
NASA Astrophysics Data System (ADS)
Li, Ni; Yi, Wenqing; Bi, Zhuming; Kong, Haipeng; Gong, Guanghong
2013-11-01
Designing a complex product such as an aircraft usually requires both qualitative and quantitative data and reasoning. To assist the design process, a critical issue is how to represent qualitative data and utilise it in the optimisation. In this study, a new method is proposed for the optimal design of complex products: to make full use of available data, information and knowledge, qualitative reasoning is integrated into the optimisation process. The transformation and fusion of qualitative and quantitative data are achieved via fuzzy sets theory and a cloud model. To shorten the design process, parallel computing is implemented to solve the formulated optimisation problems, and a parallel adaptive hybrid algorithm (PAHA) has been proposed. The performance of the new algorithm has been verified by a comparison with the results from two other existing algorithms. Further, PAHA has been applied to determine the shape parameters of an aircraft model for aerodynamic optimisation purposes.
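One common way to turn a qualitative rating into a number, in the spirit of the fuzzy-set transformation mentioned above, is triangular membership functions for linguistic terms plus centroid defuzzification. The term definitions below are illustrative assumptions, not the paper's cloud model.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# hypothetical linguistic terms on a normalized [0, 1] scale
TERMS = {"low": (0.0, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.0)}

def defuzzify(term, n=1000):
    """Crisp value for a term: centroid of its membership function on [0, 1]."""
    xs = [i / n for i in range(n + 1)]
    ws = [tri(x, *TERMS[term]) for x in xs]
    return sum(x * w for x, w in zip(xs, ws)) / sum(ws)
```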
Acoustic Treatment Design Scaling Methods. Phase 2
NASA Technical Reports Server (NTRS)
Clark, L. (Technical Monitor); Parrott, T. (Technical Monitor); Jones, M. (Technical Monitor); Kraft, R. E.; Yu, J.; Kwan, H. W.; Beer, B.; Seybert, A. F.; Tathavadekar, P.
2003-01-01
The ability to design, build and test miniaturized acoustic treatment panels on scale model fan rigs representative of full scale engines provides not only cost savings, but also an opportunity to optimize the treatment by allowing multiple tests. To use scale model treatment as a design tool, the impedance of the sub-scale liner must be known with confidence. This study was aimed at developing impedance measurement methods for high frequencies. A normal incidence impedance tube method that extends the upper frequency range to 25,000 Hz without grazing flow effects was evaluated. The free field method was investigated as a potential high frequency technique. The potential of the two-microphone in-situ impedance measurement method was evaluated in the presence of grazing flow. Difficulties in achieving the high frequency goals were encountered in all methods. Results of developing a time-domain finite difference resonator impedance model indicated that a re-interpretation of the empirical fluid mechanical models used in the frequency-domain model for nonlinear resistance and mass reactance may be required. A scale model treatment design that could be tested on the Universal Propulsion Simulator vehicle was proposed.
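The two-microphone idea referenced above can be sketched with a plane-wave tube model: write the field as incident plus reflected waves, p(x) = e^(−jkx) + R·e^(jkx) with x measured from the sample face, measure the transfer function H12 = p(x2)/p(x1) between two microphones, and solve for the reflection coefficient R and normalized impedance. This is a self-consistent sketch of the principle (mic positions and R are invented), not the study's hardware procedure.

```python
import cmath, math

def reflection_from_H12(H12, k, x1, x2):
    """Invert p(x) = e^{-jkx} + R e^{jkx} for R, given H12 = p(x2)/p(x1)."""
    num = cmath.exp(-1j * k * x2) - H12 * cmath.exp(-1j * k * x1)
    den = H12 * cmath.exp(1j * k * x1) - cmath.exp(1j * k * x2)
    return num / den

def normalized_impedance(R):
    """Surface impedance normalized by rho*c, from the reflection coefficient."""
    return (1 + R) / (1 - R)

# synthetic round trip: build the field for a known R, then recover it
R_true = 0.6 * cmath.exp(0.4j)
k = 2 * math.pi * 2000.0 / 343.0          # 2 kHz tone in air
x1, x2 = 0.10, 0.07                        # mic distances from the sample [m]
p = lambda x: cmath.exp(-1j * k * x) + R_true * cmath.exp(1j * k * x)
R_est = reflection_from_H12(p(x2) / p(x1), k, x1, x2)
```

The known failure mode at high frequency, microphone spacing approaching a half wavelength, shows up here as the denominator tending to zero.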
3.6 Simplified methods for design
Nickell, R.E.; Yahr, G.T.
1981-01-01
Simplified design analysis methods for elevated temperature construction are classified and reviewed. Because the major impetus for developing elevated temperature design methodology during the past ten years has been the LMFBR program, considerable emphasis is placed upon results from this source. The operating characteristics of the LMFBR are such that cycles of severe transient thermal stresses can be interspersed with normal elevated temperature operational periods of significant duration, leading to a combination of plastic and creep deformation. The various simplified methods are organized into two general categories, depending upon whether it is the material, or constitutive, model that is reduced, or the geometric modeling that is simplified. Because the elastic representation of material behavior is so prevalent, an entire section is devoted to elastic analysis methods. Finally, the validation of the simplified procedures is discussed.
Reliability Methods for Shield Design Process
NASA Technical Reports Server (NTRS)
Tripathi, R. K.; Wilson, J. W.
2002-01-01
Providing protection against the hazards of space radiation is a major challenge to the exploration and development of space. The great cost of added radiation shielding is a potential limiting factor in deep space operations. In this enabling technology, we have developed methods for optimized shield design over multi-segmented missions involving multiple work and living areas in the transport and duty phase of space missions. The total shield mass over all pieces of equipment and habitats is optimized subject to career dose and dose rate constraints. An important component of this technology is the estimation of two most commonly identified uncertainties in radiation shield design, the shielding properties of materials used and the understanding of the biological response of the astronaut to the radiation leaking through the materials into the living space. The largest uncertainty, of course, is in the biological response to especially high charge and energy (HZE) ions of the galactic cosmic rays. These uncertainties are blended with the optimization design procedure to formulate reliability-based methods for shield design processes. The details of the methods will be discussed.
A novel method to design flexible URAs
NASA Astrophysics Data System (ADS)
Lang, Haitao; Liu, Liren; Yang, Qingguo
2007-05-01
Aperture patterns play a vital role in coded aperture imaging (CAI) applications. In recent years, many approaches were presented to design optimum or near-optimum aperture patterns. Uniformly redundant arrays (URAs) are, undoubtedly, the most successful, owing to the constant sidelobe of their periodic autocorrelation function. Unfortunately, the existing methods can only be used to design URAs with a limited number of array sizes and fixed autocorrelation sidelobe-to-peak ratios. In this paper, we present a novel method to design more flexible URAs. Our approach is based on a searching program driven by DIRECT, a global optimization algorithm. We transform the design question into a mathematical model, based on the DIRECT algorithm, which is advantageous for computer implementation. By changing determinative conditions, we obtain two types of URAs: the filled URAs, which can be constructed by existing methods, and the sparse URAs, which have never been mentioned by other authors as far as we know. Finally, we carry out an experiment to demonstrate the imaging performance of the sparse URAs.
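The defining URA property, a flat periodic-autocorrelation sidelobe, is easy to check numerically. Below it is verified on a 1-D length-7 pseudo-noise sequence (2-D URAs used in coded-aperture imaging satisfy the same two-valued PACF condition); any search-based design method needs exactly this kind of evaluation inside its objective function.

```python
def pacf(seq):
    """Periodic (circular) autocorrelation of a 0/1 aperture sequence."""
    n = len(seq)
    return [sum(seq[i] * seq[(i + tau) % n] for i in range(n))
            for tau in range(n)]

pn = [1, 0, 0, 1, 0, 1, 1]    # length-7 m-sequence (open/closed aperture cells)
acf = pacf(pn)                 # -> peak 4 at lag 0, constant sidelobe 2 elsewhere
```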
Optimum Design Methods for Structural Sandwich Panels
1988-01-01
Optimum Design Methods for Structural Sandwich Panels. Gibson, Lorna J. …The largest value of GrE, for the 320 kg/m³ foam for which the crack propagated through the adhesive, corresponds to the surface energy of the… Introduction: The goal of this part of the project is to find the minimum-weight design of a foam core sandwich beam for a given strength. The optimum value…
Optimization methods for alternative energy system design
NASA Astrophysics Data System (ADS)
Reinhardt, Michael Henry
An electric vehicle heating system and a solar thermal coffee dryer are presented as case studies in alternative energy system design optimization. Design optimization tools are compared using these case studies, including linear programming, integer programming, and fuzzy integer programming. Although most decision variables in the designs of alternative energy systems are generally discrete (e.g., numbers of photovoltaic modules, thermal panels, layers of glazing in windows), the literature shows that the optimization methods used historically for design utilize continuous decision variables. Integer programming, used to find the optimal investment in conservation measures as a function of life cycle cost of an electric vehicle heating system, is compared to linear programming, demonstrating the importance of accounting for the discrete nature of design variables. The electric vehicle study shows that conservation methods similar to those used in building design, which reduce the overall UA of a 22 ft electric shuttle bus from 488 to 202 Btu/hr-°F, can eliminate the need for fossil fuel heating systems when operating in the northeast United States. Fuzzy integer programming is presented as a means of accounting for imprecise design constraints, such as being environmentally friendly, in the optimization process. The solar thermal coffee dryer study focuses on a deep-bed design using unglazed thermal collectors (UTC). Experimental data from parchment coffee drying are gathered, including drying constants and equilibrium moisture. In this case, fuzzy linear programming is presented as a means of optimizing experimental procedures to produce the most information under imprecise constraints. Graphical optimization is used to show that for every 1 m² of deep-bed dryer, of 0.4 m depth, a UTC array consisting of five 1.1 m² panels and a photovoltaic array consisting of one 0.25 m² panel produces the most dry coffee per dollar invested in the system. In general this study
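The integer-programming flavor of the panel-sizing question can be shown with exhaustive search over whole panel counts, which is exact for small design spaces. Every cost and yield figure below is an invented placeholder, not data from the study.

```python
# hypothetical per-panel costs ($) and yields (kg dry coffee/day)
UTC_COST, PV_COST = 120.0, 200.0
UTC_YIELD, PV_YIELD = 3.0, 2.0
BUDGET = 1000.0

# exhaustive integer search: at least one PV panel (assumed needed for fans),
# total cost within budget, maximize yield
best = max(
    ((n_utc, n_pv)
     for n_utc in range(9)          # 0..8 collector panels
     for n_pv in range(1, 6)        # 1..5 PV panels
     if n_utc * UTC_COST + n_pv * PV_COST <= BUDGET),
    key=lambda d: d[0] * UTC_YIELD + d[1] * PV_YIELD,
)
```

Rounding a continuous (linear-programming) optimum would not, in general, land on this integer optimum, which is the study's point about discrete design variables.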
Waterflooding injectate design systems and methods
Brady, Patrick V.; Krumhansl, James L.
2014-08-19
A method of designing an injectate to be used in a waterflooding operation is disclosed. One aspect includes specifying data representative of chemical characteristics of a liquid hydrocarbon, a connate, and a reservoir rock, of a subterranean reservoir. Charged species at an interface of the liquid hydrocarbon are determined based on the specified data by evaluating at least one chemical reaction. Charged species at an interface of the reservoir rock are determined based on the specified data by evaluating at least one chemical reaction. An extent of surface complexation between the charged species at the interfaces of the liquid hydrocarbon and the reservoir rock is determined by evaluating at least one surface complexation reaction. The injectate is designed and is operable to decrease the extent of surface complexation between the charged species at interfaces of the liquid hydrocarbon and the reservoir rock. Other methods, apparatus, and systems are disclosed.
Evolutionary optimization methods for accelerator design
NASA Astrophysics Data System (ADS)
Poklonskiy, Alexey A.
Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as: ease of implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in exquisite detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained
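The general EA-with-repair idea discussed above (though not the REPA algorithm itself) can be sketched in a few lines: an evolutionary loop in which offspring that leave the feasible box are pulled back by a repair operator before evaluation. Population sizes, bounds, and the test function are illustrative choices.

```python
import random

def repair(x, lo=-5.0, hi=5.0):
    """Repair operator: clamp an infeasible candidate back into the box."""
    return [min(max(v, lo), hi) for v in x]

def evolve(f, dim=2, pop_size=20, gens=100, sigma=0.3, seed=1):
    """Minimal (mu, lambda)-style evolutionary loop with truncation selection,
    Gaussian mutation, and repair of out-of-bounds offspring."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        parents = sorted(pop, key=f)[:pop_size // 2]
        pop = [repair([v + rng.gauss(0, sigma) for v in p])
               for p in parents for _ in (0, 1)]   # two offspring per parent
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)   # standard test function
best = evolve(sphere)
```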
Quality by design compliant analytical method validation.
Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph
2012-01-03
The concept of quality by design (QbD) has recently been adopted for the development of pharmaceutical processes to ensure a predefined product quality. Focus on applying the QbD concept to analytical methods has increased as it is fully integrated within pharmaceutical processes and especially in the process control strategy. In addition, there is the need to switch from the traditional checklist implementation of method validation requirements to a method validation approach that should provide a high level of assurance of method reliability in order to adequately measure the critical quality attributes (CQAs) of the drug product. The intended purpose of analytical methods is directly related to the final decision that will be made with the results generated by these methods under study. The final aim for quantitative impurity assays is to correctly declare a substance or a product as compliant with respect to the corresponding product specifications. For content assays, the aim is similar: making the correct decision about product compliance with respect to their specification limits. It is for these reasons that the fitness of these methods should be defined, as they are key elements of the analytical target profile (ATP). Therefore, validation criteria, corresponding acceptance limits, and method validation decision approaches should be settled in accordance with the final use of these analytical procedures. This work proposes a general methodology to achieve this in order to align method validation within the QbD framework and philosophy. β-Expectation tolerance intervals are implemented to decide about the validity of analytical methods. The proposed methodology is also applied to the validation of analytical procedures dedicated to the quantification of impurities or active product ingredients (API) in drug substances or drug products, and its applicability is illustrated with two case studies.
Methods for structural design at elevated temperatures
NASA Technical Reports Server (NTRS)
Ellison, A. M.; Jones, W. E., Jr.; Leimbach, K. R.
1973-01-01
A procedure which can be used to design elevated temperature structures is discussed. The desired goal is to have the same confidence in the structural integrity at elevated temperature as the factor of safety gives on mechanical loads at room temperature. Methods of design and analysis for creep, creep rupture, and creep buckling are presented. Example problems are included to illustrate the analytical methods. Creep data for some common structural materials are presented. Appendix B is a description, user's manual, and listing for the creep analysis program. The program predicts time to a given creep or to creep rupture for a material subjected to a specified stress-temperature-time spectrum. Fatigue at elevated temperature is discussed. Methods of analysis for high stress-low cycle fatigue, fatigue below the creep range, and fatigue in the creep range are included. The interaction of thermal fatigue and mechanical loads is considered, and a detailed approach to fatigue analysis is given for structures operating below the creep range.
Design analysis, robust methods, and stress classification
Bees, W.J.
1993-01-01
This special edition publication volume is comprised of papers presented at the 1993 ASME Pressure Vessels and Piping Conference, July 25--29, 1993 in Denver, Colorado. The papers were prepared for presentations in technical sessions developed under the auspices of the PVPD Committees on Computer Technology, Design and Analysis, Operations Applications and Components. The topics included are: Analysis of Pressure Vessels and Components; Expansion Joints; Robust Methods; Stress Classification; and Non-Linear Analysis. Individual papers have been processed separately for inclusion in the appropriate data bases.
Block designs in method transfer experiments.
Altan, Stan; Shoung, Jyh-Ming
2008-01-01
Method transfer is a part of the pharmaceutical development process in which an analytical (chemical) procedure developed in one laboratory (typically the research laboratory) is about to be adopted by one or more recipient laboratories (production or commercial operations). The objective is to show that the recipient laboratory is capable of performing the procedure in an acceptable manner. In the course of carrying out a method transfer, other questions may arise related to fixed or random factors of interest, such as analyst, apparatus, batch, supplier of analytical reagents, and so forth. Estimates of reproducibility and repeatability may also be of interest. This article focuses on the application of various block designs that have been found useful in the comprehensive study of method transfer beyond the laboratory effect alone. An equivalence approach to the comparison of laboratories can still be carried out on either the least squares means or subject-specific means of the laboratories to justify a method transfer or to compare analytical methods.
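The equivalence approach to comparing laboratory means mentioned above can be sketched as a two one-sided tests (TOST) procedure. The data, equivalence margin, and normal approximation below are illustrative assumptions, not the article's actual design (a t-based version would be used in practice for small samples):

```python
from statistics import NormalDist, mean, stdev

def tost_equivalence(x, y, margin, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of two lab means,
    using a normal approximation to the sampling distribution."""
    nx, ny = len(x), len(y)
    diff = mean(x) - mean(y)
    se = (stdev(x) ** 2 / nx + stdev(y) ** 2 / ny) ** 0.5
    z = NormalDist()
    # Equivalence is concluded when BOTH one-sided nulls are rejected:
    #   H0a: true difference <= -margin, and H0b: true difference >= +margin.
    p_lower = 1 - z.cdf((diff + margin) / se)   # tests diff <= -margin
    p_upper = z.cdf((diff - margin) / se)       # tests diff >= +margin
    return max(p_lower, p_upper) < alpha

# Usage: hypothetical assay results (% of label claim) from two laboratories.
lab_a = [99.8, 100.1, 99.9, 100.2, 100.0, 99.7]
lab_b = [100.0, 99.9, 100.3, 100.1, 99.8, 100.2]
equivalent = tost_equivalence(lab_a, lab_b, margin=2.0)
```

With a generous 2% margin the two hypothetical labs pass; shrinking the margin below the observed precision makes the test fail, which is the intended asymmetry of equivalence testing versus a plain significance test.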
Computational and design methods for advanced imaging
NASA Astrophysics Data System (ADS)
Birch, Gabriel C.
This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. It is divided into two parts: the first discusses a new active-illumination depth sensing modality, while the second discusses a passive-illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage of this method is that it permits the illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.
A structural design decomposition method utilizing substructuring
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1994-01-01
A new method of design decomposition for structural analysis and optimization is described. For this method, the structure is divided into substructures where each substructure has its structural response described by a structural-response subproblem, and its structural sizing determined from a structural-sizing subproblem. The structural responses of substructures that have rigid body modes when separated from the remainder of the structure are further decomposed into displacements that have no rigid body components, and a set of rigid body modes. The structural-response subproblems are linked together through forces determined within a structural-sizing coordination subproblem which also determines the magnitude of any rigid body displacements. Structural-sizing subproblems having constraints local to the substructures are linked together through penalty terms that are determined by a structural-sizing coordination subproblem. All the substructure structural-response subproblems are totally decoupled from each other, as are all the substructure structural-sizing subproblems, thus there is significant potential for use of parallel solution methods for these subproblems.
Neural method of spatiotemporal filter design
NASA Astrophysics Data System (ADS)
Szostakowski, Jaroslaw
1997-10-01
There are many applications in medical imaging, computer vision, and communications where video processing is critical. Although many techniques have been successfully developed for filtering still images, significantly fewer techniques have been proposed for filtering noisy image sequences. In this paper a novel approach to spatio-temporal filter design is proposed. Multilayer perceptrons and functional-link nets are used for the 3D filtering. The spatio-temporal patterns are created from real motion video images, and the neural networks learn these patterns. Perceptrons with different numbers of layers and neurons in each layer are tested, and different input functions in the functional-link net are also explored. Practical examples of the filtering are shown and compared with traditional (non-neural) spatio-temporal methods. The results are very interesting, and the neural spatio-temporal filters seem to be a very efficient tool for video noise reduction.
Method for designing gas tag compositions
Gross, K.C.
1995-04-11
For use in the manufacture of gas tags such as employed in a nuclear reactor gas tagging failure detection system, a method for designing gas tagging compositions utilizes an analytical approach wherein the final composition of a first canister of tag gas as measured by a mass spectrometer is designated as node No. 1. Lattice locations of tag nodes in multi-dimensional space are then used in calculating the compositions of a node No. 2 and each subsequent node so as to maximize the distance of each node from any combination of tag components which might be indistinguishable from another tag composition in a reactor fuel assembly. Alternatively, the measured compositions of tag gas numbers 1 and 2 may be used to fix the locations of nodes 1 and 2, with the locations of nodes 3-N then calculated for optimum tag gas composition. A single sphere defining the lattice locations of the tag nodes may be used to define approximately 20 tag nodes, while concentric spheres can extend the number of tag nodes to several hundred. 5 figures.
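The core idea above, placing each new tag node to maximize its distance from all existing nodes so that compositions remain distinguishable, can be sketched as a greedy farthest-point selection. The simplex coordinates and candidate sampling below are hypothetical stand-ins, not the patented lattice construction:

```python
import random

def design_tag_nodes(n_nodes, dim=3, n_candidates=2000, seed=1):
    """Greedy farthest-point sketch of maximin tag-node placement.
    Node 1 plays the role of the first measured composition; each later
    node maximizes its minimum distance to all nodes chosen so far."""
    rng = random.Random(seed)

    def random_composition():
        # A point on the unit simplex: component fractions summing to 1.
        raw = [rng.random() for _ in range(dim)]
        s = sum(raw)
        return tuple(v / s for v in raw)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    candidates = [random_composition() for _ in range(n_candidates)]
    nodes = [candidates[0]]                     # node 1: the measured tag
    while len(nodes) < n_nodes:
        # Pick the candidate farthest from its nearest existing node.
        best = max(candidates, key=lambda c: min(dist(c, n) for n in nodes))
        nodes.append(best)
    return nodes

nodes = design_tag_nodes(20)   # roughly the ~20 nodes a single sphere yields
```

The maximin criterion is what keeps any two tag compositions from becoming indistinguishable after mixing in a fuel assembly; extending the candidate set (the concentric-spheres idea) raises the node count at the cost of smaller separations.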
Method for designing gas tag compositions
Gross, Kenny C.
1995-01-01
For use in the manufacture of gas tags such as employed in a nuclear reactor gas tagging failure detection system, a method for designing gas tagging compositions utilizes an analytical approach wherein the final composition of a first canister of tag gas as measured by a mass spectrometer is designated as node #1. Lattice locations of tag nodes in multi-dimensional space are then used in calculating the compositions of a node #2 and each subsequent node so as to maximize the distance of each node from any combination of tag components which might be indistinguishable from another tag composition in a reactor fuel assembly. Alternatively, the measured compositions of tag gas numbers 1 and 2 may be used to fix the locations of nodes 1 and 2, with the locations of nodes 3-N then calculated for optimum tag gas composition. A single sphere defining the lattice locations of the tag nodes may be used to define approximately 20 tag nodes, while concentric spheres can extend the number of tag nodes to several hundred.
Research and Design of Rootkit Detection Method
NASA Astrophysics Data System (ADS)
Liu, Leian; Yin, Zuanxing; Shen, Yuli; Lin, Haitao; Wang, Hongjiang
Rootkits are among the most important security issues in network communication systems, affecting the security and privacy of Internet users. Because of back doors in the operating system, a hacker can use a rootkit to attack and invade other people's computers and thus easily capture passwords and message traffic to and from these computers. With the development of rootkit technology, its applications are more and more extensive and it becomes increasingly difficult to detect. In addition, for various reasons such as trade secrets and the difficulty of development, rootkit detection information and effective tools are still relatively scarce. In this paper, based on an in-depth analysis of rootkit detection technology, a new rootkit detection structure is designed and a new method (software), X-Anti, is proposed. Test results show that software designed on the proposed structure is much more efficient than other rootkit detection software.
Geometric methods for optimal sensor design.
Belabbas, M-A
2016-01-01
The Kalman-Bucy filter is the optimal estimator of the state of a linear dynamical system from sensor measurements. Because its performance is limited by the sensors to which it is paired, it is natural to seek optimal sensors. The resulting optimization problem is however non-convex. Therefore, many ad hoc methods have been used over the years to design sensors in fields ranging from engineering to biology to economics. We show in this paper how to obtain optimal sensors for the Kalman filter. Precisely, we provide a structural equation that characterizes optimal sensors. We furthermore provide a gradient algorithm and prove its convergence to the optimal sensor. This optimal sensor yields the lowest possible estimation error for measurements with a fixed signal-to-noise ratio. The results of the paper are proved by reducing the optimal sensor problem to an optimization problem on a Grassmannian manifold and proving that the function to be minimized is a Morse function with a unique minimum. The results presented here also apply to the dual problem of optimal actuator design.
Adjoint methods for aerodynamic wing design
NASA Technical Reports Server (NTRS)
Grossman, Bernard
1993-01-01
A model inverse design problem is used to investigate the effect of flow discontinuities on the optimization process. The optimization involves finding the cross-sectional area distribution of a duct that produces velocities that closely match a targeted velocity distribution. Quasi-one-dimensional flow theory is used, and the target is chosen to have a shock wave in its distribution. The objective function which quantifies the difference between the targeted and calculated velocity distributions may become non-smooth due to the interaction between the shock and the discretization of the flowfield. This paper offers two techniques to resolve the resulting problems for the optimization algorithms. The first, shock-fitting, involves careful integration of the objective function through the shock wave. The second, coordinate straining with shock penalty, uses a coordinate transformation to align the calculated shock with the target and then adds a penalty proportional to the square of the distance between the shocks. The techniques are tested using several popular sensitivity and optimization methods, including finite-differences, and direct and adjoint discrete sensitivity methods. Two optimization strategies, Gauss-Newton and sequential quadratic programming (SQP), are used to drive the objective function to a minimum.
Educating Instructional Designers: Different Methods for Different Outcomes.
ERIC Educational Resources Information Center
Rowland, Gordon; And Others
1994-01-01
Suggests new methods of teaching instructional design based on literature reviews of other design fields including engineering, architecture, interior design, media design, and medicine. Methods discussed include public presentations, visiting experts, competitions, artifacts, case studies, design studios, and internships and apprenticeships.…
Game Methodology for Design Methods and Tools Selection
ERIC Educational Resources Information Center
Ahmad, Rafiq; Lahonde, Nathalie; Omhover, Jean-françois
2014-01-01
Design process optimisation and intelligence are the key words of today's scientific community. A proliferation of methods has made design a convoluted area. Designers are usually afraid of selecting one method/tool over another and even expert designers may not necessarily know which method is the best to use in which circumstances. This…
Translating Vision into Design: A Method for Conceptual Design Development
NASA Technical Reports Server (NTRS)
Carpenter, Joyce E.
2003-01-01
One of the most challenging tasks for engineers is the definition of design solutions that will satisfy high-level strategic visions and objectives. Even more challenging is the need to demonstrate how a particular design solution supports the high-level vision. This paper describes a process and set of system engineering tools that have been used at the Johnson Space Center to analyze and decompose high-level objectives for future human missions into design requirements that can be used to develop alternative concepts for vehicles, habitats, and other systems. Analysis and design studies of alternative concepts and approaches are used to develop recommendations for strategic investments in research and technology that support the NASA Integrated Space Plan. In addition to a description of system engineering tools, this paper includes a discussion of collaborative design practices for human exploration mission architecture studies used at the Johnson Space Center.
Using Software Design Methods in CALL
ERIC Educational Resources Information Center
Ward, Monica
2006-01-01
The phrase "software design" is not one that arouses the interest of many CALL practitioners, particularly those from a humanities background. However, software design essentials are simply logical ways of going about designing a system. The fundamentals include modularity, anticipation of change, generality and an incremental approach. While CALL…
Global optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Arora, Jasbir S.
1990-01-01
The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality. However, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, in view of the fact that no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy the inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution, which reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods. More testing is needed, and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations, and since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.
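The zooming idea, an exhaustive coarse search that repeatedly narrows the design space around the incumbent best point, might be sketched in one dimension as follows. This is illustrative only, not the IDESIGN/zooming algorithm of the abstract, and the test function and shrink factor are assumptions:

```python
import math

def global_minimize(f, lo, hi, coarse=50, zoom_rounds=6, shrink=0.2):
    """Zooming-style global search sketch: scan the whole interval on a
    coarse grid, then repeatedly shrink the window around the best point."""
    for _ in range(zoom_rounds):
        step = (hi - lo) / coarse
        xs = [lo + i * step for i in range(coarse + 1)]
        x_best = min(xs, key=f)
        # Shrink the search window around the best grid point and repeat.
        half = shrink * (hi - lo) / 2
        lo, hi = x_best - half, x_best + half
    return x_best, f(x_best)

# Usage: a multimodal function whose global minimum lies near x = -0.29.
x, fx = global_minimize(lambda x: x * x + math.sin(5 * x), -4.0, 4.0)
```

The first coarse pass is the "exhaustive" component that locates the right basin among several local minima; the subsequent zooms act as the local minimizer the abstract says the scheme requires.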
An Efficient Inverse Aerodynamic Design Method For Subsonic Flows
NASA Technical Reports Server (NTRS)
Milholen, William E., II
2000-01-01
Computational Fluid Dynamics based design methods are maturing to the point that they are beginning to be used in the aircraft design process. Many design methods however have demonstrated deficiencies in the leading edge region of airfoil sections. The objective of the present research is to develop an efficient inverse design method which is valid in the leading edge region. The new design method is a streamline curvature method, and a new technique is presented for modeling the variation of the streamline curvature normal to the surface. The new design method allows the surface coordinates to move normal to the surface, and has been incorporated into the Constrained Direct Iterative Surface Curvature (CDISC) design method. The accuracy and efficiency of the design method is demonstrated using both two-dimensional and three-dimensional design cases.
Design optimization method for Francis turbine
NASA Astrophysics Data System (ADS)
Kawajiri, H.; Enomoto, Y.; Kurosawa, S.
2014-03-01
This paper presents a design optimization system coupled with CFD. The optimization algorithm of the system employs particle swarm optimization (PSO). Blade shape design is carried out with a NURBS curve defined by a series of control points. The system was applied to designing the stationary vanes and the runner of a higher-specific-speed Francis turbine. As the first step, single-objective optimization was performed on the stay vane profile; the second step was multi-objective optimization of the runner over a wide operating range. As a result, it was confirmed that the design system is useful for developing hydro turbines.
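A minimal particle swarm optimizer of the kind such a design system employs can be sketched as follows. In the paper the objective would be a CFD evaluation of a blade shape; here a simple test function stands in, and the inertia/acceleration coefficients are conventional illustrative choices:

```python
import random

def pso(objective, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization sketch."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's own best
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's global best
    for _ in range(iters):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(bounds):
                # Inertia + cognitive pull (own best) + social pull (swarm best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize a shifted sphere function; the optimum is at (1, 1).
best, val = pso(lambda x: sum((v - 1) ** 2 for v in x), [(-5.0, 5.0)] * 2)
```

Because each evaluation of a real turbine objective is an expensive CFD run, the swarm size and iteration count would in practice be far smaller than these toy values, often combined with surrogate models.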
Alternative methods for the design of jet engine control systems
NASA Technical Reports Server (NTRS)
Sain, M. K.; Leake, R. J.; Basso, R.; Gejji, R.; Maloney, A.; Seshadri, V.
1976-01-01
Various alternatives to linear quadratic design methods for jet engine control systems are discussed. The main alternatives are classified into two broad categories: nonlinear global mathematical programming methods and linear local multivariable frequency domain methods. Specific studies within these categories include model reduction, the eigenvalue locus method, the inverse Nyquist method, polynomial design, dynamic programming, and conjugate gradient approaches.
Demystifying Mixed Methods Research Design: A Review of the Literature
ERIC Educational Resources Information Center
Caruth, Gail D.
2013-01-01
Mixed methods research evolved in response to the observed limitations of both quantitative and qualitative designs and is a more complex method. The purpose of this paper was to examine mixed methods research in an attempt to demystify the design thereby allowing those less familiar with its design an opportunity to utilize it in future research.…
Computational Methods Applied to Rational Drug Design.
Ramírez, David
2016-01-01
Due to the synergistic relationship between medicinal chemistry, bioinformatics and molecular simulation, the development of new accurate computational tools for small-molecule drug design has been rising over the last years. The main result is the increased number of publications where computational techniques such as molecular docking, de novo design and virtual screening have been used to estimate the binding mode, site and energy of novel small molecules. In this work I review some tools which enable the study of biological systems at the atomistic level, providing relevant information and thereby enhancing the process of rational drug design.
The Work Design Method for Human Friendly
NASA Astrophysics Data System (ADS)
Harada, Narumi; Sasaki, Masatoshi; Ichikawa, Masami
In order to realize "the product life cycle with respect for human nature," work design ought to configure the work environment to be sound in mind and body, with due consideration of not only physical but also mental factors from the viewpoint of workers. The former include excessively heavy work, unreasonable working postures, local fatigue of the body, safety, and working comfort; the latter include work motivation, work worthiness, stress, etc. For the purpose of evaluating the degree of working comfort and safety at human-oriented production lines, we confirmed, for work design, the effectiveness of a work design technique that duly considers working-time variation. We also formulated a model for a mental factor experienced by workers based on the degree of working delays. This study covers a work design technique we developed that uses the effect of this factor as the evaluation value.
NASA Astrophysics Data System (ADS)
Murphy, G. C.; Dieckmann, M. E.; Bret, A.; Drury, L. O'c.
2010-12-01
Context. The prompt emissions of gamma-ray bursts (GRBs) are seeded by radiating ultrarelativistic electrons. Kinetic-energy-dominated internal shocks propagating through a jet launched by a stellar implosion are expected to both amplify the magnetic field and accelerate electrons. Aims: We explore the effects of density asymmetry and of a quasi-parallel magnetic field on the collision of two plasma clouds. Methods: A two-dimensional relativistic particle-in-cell (PIC) simulation models the collision, at a speed of 0.9c, of two plasma clouds in the presence of a quasi-parallel magnetic field. The cloud density ratio is 10. The densities of ions and electrons and the temperature of 131 keV are equal in each cloud, and the mass ratio is 250. The peak Lorentz factor of the electrons is determined, along with the orientation and the strength of the magnetic field at the cloud collision boundary. Results: The magnetic field component orthogonal to the initial plasma flow direction is amplified to values that exceed those expected from the shock compression by over an order of magnitude. The forming shock is quasi-perpendicular due to this amplification, caused by a current sheet which develops in response to the differing deflection of the upstream electrons and ions incident on the magnetised shock transition layer. The electron deflection implies a charge separation of the upstream electrons and ions; the resulting electric field drags the electrons through the magnetic field, whereupon they acquire a relativistic mass comparable to that of the ions. We demonstrate how a magnetic field structure resembling the cross section of a flux tube grows self-consistently in the current sheet of the shock transition layer. Plasma filamentation develops behind the shock front, as well as signatures of orthogonal magnetic field striping, indicative of the filamentation instability. These magnetic fields convect away from the shock boundary and their energy density exceeds by far the
A Method of Integrated Description of Design Information for Reusability
NASA Astrophysics Data System (ADS)
Tsumaya, Akira; Nagae, Masao; Wakamatsu, Hidefumi; Shirase, Keiichi; Arai, Eiji
Much of product design is executed concurrently these days. For such concurrent design, a method that can share and reuse various kinds of design information among designers is needed. However, complete understanding of design information among designers has been a difficult issue. In this paper, a design process model that uses designers' intentions is proposed, together with a method to combine design process information and design object information. We introduce how to describe designers' intentions by providing some databases. The Keyword Database consists of ontological data related to design objects and activities; designers select suitable keyword(s) from it and explain the reasons and ideas behind their design activities in descriptions built from those keyword(s). We also developed an integrated design information management system architecture using this method of integrated description with designers' intentions. The system connects information related to the design process with information related to the design object through designers' intentions, so that designers can communicate with each other and understand how others make decisions in design. Designers can also reuse both design process information and design object information through the database management sub-system.
Supersonic biplane design via adjoint method
NASA Astrophysics Data System (ADS)
Hu, Rui
In developing the next generation supersonic transport airplane, two major challenges must be resolved. The fuel efficiency must be significantly improved, and the sonic boom propagating to the ground must be dramatically reduced. Both of these objectives can be achieved by reducing the shockwaves formed in supersonic flight. The Busemann biplane is famous for using favorable shockwave interaction to achieve nearly shock-free supersonic flight at its design Mach number. Its performance at off-design Mach numbers, however, can be very poor. This dissertation studies the performance of supersonic biplane airfoils at design and off-design conditions. The choked-flow and flow-hysteresis phenomena of these biplanes are studied. These effects are due to the finite thickness of the airfoils and the non-uniqueness of the solution to the Euler equations, creating over an order of magnitude more wave drag than that predicted by supersonic thin airfoil theory. As a result, the off-design performance is the major barrier to the practical use of supersonic biplanes. The main contribution of this work is to drastically improve the off-design performance of supersonic biplanes by using an adjoint-based aerodynamic optimization technique. The Busemann biplane is used as the baseline design, and its shape is altered to achieve optimal wave drag over a series of Mach numbers ranging from 1.1 to 1.7, during both acceleration and deceleration conditions. The optimized biplane airfoils dramatically reduce the effects of the choked-flow and flow-hysteresis phenomena, while maintaining a certain degree of favorable shockwave interaction effects at the design Mach number. Compared to a diamond-shaped single airfoil of the same total thickness, the wave drag of our optimized biplane is lower at almost all Mach numbers, and is significantly lower at the design Mach number. In addition, by performing a Navier-Stokes solution for the optimized airfoil, it is verified that the optimized biplane improves
JASMINE design and method of data reduction
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Niwa, Yoshito
2008-07-01
Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with 10 μarcsec accuracy. We use a z-band CCD to avoid dust absorption, and observe an area of about 10 × 20 degrees around the Galactic bulge region. Because the stellar density is very high, the individual fields of view can be combined with high accuracy. With 5 years of observation, we will construct a 10 μarcsec accuracy map. In this poster, I show the observation strategy, the design of the JASMINE hardware, the reduction scheme, and the error budget. We have also constructed simulation software named the JASMINE Simulator, and we show the simulation results and the design of the software.
Designing a mixed methods study in pediatric oncology nursing research.
Wilkins, Krista; Woodgate, Roberta
2008-01-01
Despite the appeal of discovering the different strengths of various research methods, mixed methods research remains elusive in pediatric oncology nursing research. If pediatric oncology nurses are to succeed in mixing quantitative and qualitative methods, they need practical guidelines for managing the complex data and analyses of mixed methods research. This article discusses mixed methods terminology, designs, and key design features. Specific areas addressed include the myths about mixed methods research, types of mixed methods research designs, steps involved in developing a mixed methods research study, and the benefits and challenges of using mixed methods designs in pediatric oncology research. Examples of recent research studies that have combined quantitative and qualitative research methods are provided. The term mixed methods research is used throughout this article to reflect the use of both quantitative and qualitative methods within one study rather than the use of these methods in separate studies concerning the same research problem.
A method for nonlinear optimization with discrete design variables
NASA Technical Reports Server (NTRS)
Olsen, Gregory R.; Vanderplaats, Garret N.
1987-01-01
A numerical method is presented for the solution of nonlinear discrete optimization problems. The applicability of discrete optimization to engineering design is discussed, and several standard structural optimization problems are solved using discrete design variables. The method uses approximation techniques to create subproblems suitable for linear mixed-integer programming methods. The method employs existing software for continuous optimization and integer programming.
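The abstract's loop (approximate the nonlinear problem, solve a discrete linear subproblem, repeat) can be sketched on a toy sizing problem. Everything below is a hypothetical stand-in: a two-bar weight-minimization problem with a reciprocal "flexibility" constraint, a made-up size catalog, and a brute-force enumeration standing in for the linear mixed-integer programming solver a real implementation would call.

```python
import itertools

# Toy problem: minimize weight w(a) = a1 + a2 subject to the nonlinear
# constraint g(a) = 1/a1 + 1/a2 - 2.5 <= 0, with each bar area restricted
# to a discrete catalog of available sizes (all values hypothetical).
CATALOG = (0.5, 0.75, 1.0, 1.25, 1.5, 2.0)
MOVE_LIMIT = 0.5  # trust region for each linearized subproblem

def weight(a):
    return sum(a)

def g(a):  # nonlinear constraint; g(a) <= 0 is required for feasibility
    return 1.0 / a[0] + 1.0 / a[1] - 2.5

def g_linearized(a0):
    """First-order Taylor expansion of g about the current design a0."""
    g0 = g(a0)
    grad = [-1.0 / ai ** 2 for ai in a0]  # analytic d(1/a)/da
    return lambda a: g0 + sum(gi * (ai - a0i)
                              for gi, ai, a0i in zip(grad, a, a0))

def solve_subproblem(a0):
    """Minimize the (already linear) weight over catalog designs that satisfy
    the linearized constraint within the move limit. Enumeration stands in
    for a mixed-integer LP solver; assumes at least one candidate is feasible."""
    gl = g_linearized(a0)
    candidates = itertools.product(*[
        [c for c in CATALOG if abs(c - ai) <= MOVE_LIMIT] for ai in a0])
    return min((c for c in candidates if gl(c) <= 0), key=weight)

# Sequential approximate optimization: relinearize at each new design point,
# keeping the best design that is feasible for the *true* constraint.
a, best, seen = (2.0, 2.0), None, set()
while a not in seen:
    seen.add(a)
    if g(a) <= 0 and (best is None or weight(a) < weight(best)):
        best = a
    a = solve_subproblem(a)

print(best, weight(best))  # -> (1.0, 1.0) 2.0
```

The move limit plays the role of the safeguards such methods need: the linearization is only trusted near the expansion point, and the loop stops once a design repeats.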
Lithography aware overlay metrology target design method
NASA Astrophysics Data System (ADS)
Lee, Myungjun; Smith, Mark D.; Lee, Joonseuk; Jung, Mirim; Lee, Honggoo; Kim, Youngsik; Han, Sangjun; Adel, Michael E.; Lee, Kangsan; Lee, Dohwa; Choi, Dongsub; Liu, Zephyr; Itzkovich, Tal; Levinski, Vladimir; Levy, Ady
2016-03-01
We present a metrology target design (MTD) framework based on co-optimizing lithography and metrology performance. The overlay metrology performance is strongly related to the target design, and optimizing the target under different process variations in a high-NA optical lithography tool and different measurement conditions in a metrology tool becomes critical for sub-20nm nodes. The lithography performance can be quantified by device matching and printability metrics, while accuracy and precision metrics are used to quantify the metrology performance. Using these metrics, we demonstrate how the optimized target can improve printability while maintaining good metrology performance for rotated dipole illumination used to print a sub-100nm diagonal feature in a memory active layer. The remaining challenges and the existing tradeoff between metrology and lithography performance are explored from the metrology target designer's perspective. The proposed target design framework is completely general and can be used to optimize targets for different lithography conditions. The results from our analysis are both physically sensible and in good agreement with experimental results.
Participatory design methods in telemedicine research.
Clemensen, Jane; Rothmann, Mette J; Smith, Anthony C; Caffery, Liam J; Danbjorg, Dorthe B
2016-01-01
Healthcare systems require a paradigm shift in the way healthcare services are delivered to counteract demographic changes in patient populations, expanding technological developments and the increasing complexity of healthcare. Participatory design (PD) is a methodology that promotes the participation of users in the design process of potential telehealth applications. A PD project can be divided into four phases: the identification and analysis of participant needs; the generation of ideas and development of prototypes; testing and further development of prototypes; and evaluation. PD is an iterative process where each phase is planned by reflecting on the results from the previous phase with respect to the participants' contribution. Key activities of a PD project include fieldwork, literature reviewing, and development and testing. All activities must be applied with a participatory mindset that will ensure genuine participation throughout the project. Challenges associated with the use of PD include: the time required to properly engage with participants; language and culture barriers amongst participants; the selection of participants to ensure good representation of the user group; and empowerment. PD is an important process, which is complemented by other evaluation strategies that assess organisational requirements, clinical safety, and clinical and cost effectiveness. PD is a methodology which encourages genuine involvement, where participants have an opportunity to identify practical problems and to design and test technology. The process engages participants in storytelling, future planning and design. PD is a multifaceted assessment tool that helps explore clinical requirements and patient perspectives in telehealth more accurately.
The application of mixed methods designs to trauma research.
Creswell, John W; Zhang, Wanqing
2009-12-01
Despite the use of quantitative and qualitative data in trauma research and therapy, mixed methods studies in this field have not been analyzed to help researchers designing investigations. This discussion begins by reviewing four core characteristics of mixed methods research in the social and human sciences. Combining these characteristics, the authors focus on four select mixed methods designs that are applicable in trauma research. These designs are defined and their essential elements noted. Applying these designs to trauma research, a search was conducted to locate mixed methods trauma studies. From this search, one sample study was selected, and its characteristics of mixed methods procedures noted. Finally, drawing on other mixed methods designs available, several follow-up mixed methods studies were described for this sample study, enabling trauma researchers to view design options for applying mixed methods research in trauma investigations.
Methods for library-scale computational protein design.
Johnson, Lucas B; Huber, Thaddaus R; Snow, Christopher D
2014-01-01
Faced with a protein engineering challenge, a contemporary researcher can choose from myriad design strategies. Library-scale computational protein design (LCPD) is a hybrid method suitable for the engineering of improved protein variants with diverse sequences. This chapter discusses the background and merits of several practical LCPD techniques. First, LCPD methods suitable for delocalized protein design are presented in the context of example design calculations for cellobiohydrolase II. Second, localized design methods are discussed in the context of an example design calculation intended to shift the substrate specificity of a ketol-acid reductoisomerase Rossmann domain from NADPH to NADH.
Probabilistic Methods for Structural Design and Reliability
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Whitlow, Woodrow, Jr. (Technical Monitor)
2002-01-01
This report describes a formal method to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level, where primitive variables with their respective scatter ranges are used to describe behavior. Computational simulation is then used to propagate the uncertainties to the structural scale, where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from this method demonstrate that it is mature and that it can be used to probabilistically evaluate turbine engine structural components. It may be inferred from the results that the method is suitable for probabilistically predicting the remaining life in aging or deteriorating structures, for making strategic projections and plans, and for achieving better, cheaper, faster products that give competitive advantages in world markets.
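The core idea of propagating scatter in primitive variables up to a structural reliability can be illustrated with a plain Monte Carlo sketch. This is not the report's actual computational simulation; the normal distributions and all numbers below are hypothetical.

```python
import random

# Hedged sketch: propagate scatter in two "primitive" variables (applied
# stress and material strength, both hypothetical normals) to a component
# failure probability by Monte Carlo sampling.
random.seed(1)

N = 200_000
failures = 0
for _ in range(N):
    load = random.gauss(mu=100.0, sigma=15.0)      # applied stress, MPa
    strength = random.gauss(mu=160.0, sigma=12.0)  # material strength, MPa
    if load > strength:                            # failure: demand exceeds capacity
        failures += 1

p_fail = failures / N
print(f"estimated failure probability: {p_fail:.4f}")
```

For independent normals the margin strength − load is itself normal, so this toy case can be checked analytically (here the exact answer is roughly 9 × 10⁻⁴); the sampling approach is what generalizes to the nonlinear propagation the report describes.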
A comparison of digital flight control design methods
NASA Technical Reports Server (NTRS)
Powell, J. D.; Parsons, E.; Tashker, M. G.
1976-01-01
Many variations in design methods for aircraft digital flight control have been proposed in the literature. In general, the methods fall into two categories: those where the design is done in the continuous domain (or s-plane), and those where the design is done in the discrete domain (or z-plane). This paper evaluates several variations of each category and compares them for various flight control modes of the Langley TCV Boeing 737 aircraft. Design method fidelity is evaluated by examining closed-loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps, except the 'uncompensated s-plane design' method, which was acceptable above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
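The s-plane ("emulation") route the paper evaluates can be sketched in a few lines: design a continuous compensator, discretize it with the Tustin (bilinear) transform, and watch the discrete pole approach the ideal mapping exp(−aT) only as the sample rate rises. The first-order compensator and the sample rates below are hypothetical, chosen only to illustrate why fidelity degrades at slow sample rates.

```python
import math

def tustin_first_order(a, fs):
    """Discretize G(s) = a / (s + a) via the Tustin map s ~ (2/T)(z-1)/(z+1).
    Returns (b0, b1, a1) for the difference equation
    y[k] = b0*u[k] + b1*u[k-1] - a1*y[k-1]."""
    T = 1.0 / fs
    k = 2.0 / T
    den = k + a
    return a / den, a / den, (a - k) / den

for fs in (5.0, 10.0, 40.0):              # sample rates ("cps")
    b0, b1, a1 = tustin_first_order(a=2.0, fs=fs)
    z_pole = -a1                           # pole of the discrete compensator
    ideal = math.exp(-2.0 / fs)            # exact pole mapping exp(-aT)
    print(f"{fs:5.1f} cps: Tustin pole z = {z_pole:.4f}, ideal exp(-aT) = {ideal:.4f}")
```

At 40 cps the two poles agree to four decimals, while at 5 cps they visibly differ, mirroring the paper's finding that emulation methods need sufficiently fast sampling.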
Improved hybrid SMS-DSF method of nonimaging optical design
NASA Astrophysics Data System (ADS)
Bortz, John; Shatz, Narkis
2011-10-01
The hybrid SMS-DSF method of nonimaging optical design combines the discrete simultaneous multiple surface (SMS) method with the dual-surface functional (DSF) method to obtain improved optical performance relative to the discrete SMS method alone. In this contribution we present a new extension of the hybrid SMS-DSF method that uses differential ray tracing to produce designs having significantly improved performance relative to the original hybrid SMS-DSF method.
Computational Methods for Design, Control and Optimization
2007-10-01
..."scenario" that applies to channel flows (Poiseuille flow, Couette flow) and pipe flows. Over the past 75 years many complex "transition theories" have... Simulation of Turbulent Flows, Springer Verlag, 2005. Additional Publications Supported by this Grant: 1. J. Borggaard and T. Iliescu, Approximate Deconvolution... rigorous analysis of design algorithms that combine numerical simulation codes, approximate sensitivity calculations and optimization codes. The fundamental...
Soft computing methods in design of superalloys
NASA Technical Reports Server (NTRS)
Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.
1995-01-01
Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modeled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.
Soft Computing Methods in Design of Superalloys
NASA Technical Reports Server (NTRS)
Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.
1996-01-01
Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.
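The two-stage scheme the two records above describe (a trained model of K_a searched by a genetic algorithm) might be sketched as follows. The analytic surrogate, the three-element composition variables, and the GA settings are invented stand-ins for the paper's neural network and superalloy chemistry; only the structure of the approach is illustrated.

```python
import random

random.seed(0)

def ka_surrogate(x):
    """Stand-in for the trained neural-network model of the cyclic oxidation
    attack parameter Ka (lower is better). Purely hypothetical."""
    cr, al, ti = x
    return (cr - 0.3) ** 2 + 2.0 * (al - 0.2) ** 2 + 0.5 * (ti - 0.1) ** 2

def random_composition():
    """Random 3-element composition normalized to sum to 1."""
    w = [random.random() for _ in range(3)]
    s = sum(w)
    return tuple(wi / s for wi in w)

def crossover(p1, p2):
    child = tuple((a + b) / 2.0 for a, b in zip(p1, p2))  # blend crossover
    s = sum(child)
    return tuple(c / s for c in child)

def mutate(x, rate=0.1):
    w = [max(1e-6, xi + random.gauss(0.0, rate)) for xi in x]
    s = sum(w)
    return tuple(wi / s for wi in w)  # renormalize to keep sum == 1

# Genetic algorithm: truncation selection with elitism.
pop = [random_composition() for _ in range(40)]
for gen in range(60):
    pop.sort(key=ka_surrogate)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(30)]

best = min(pop, key=ka_surrogate)
print(best, ka_surrogate(best))
```

In the paper's setup the surrogate would be the network trained on NASA Lewis oxidation test data, and the design vector would include the full alloy chemistry and test temperature.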
A comparison of methods currently used in inclusive design.
Goodman-Deane, Joy; Ward, James; Hosking, Ian; Clarkson, P John
2014-07-01
Inclusive design has unique challenges because it aims to improve usability for a wide range of users. This typically includes people with lower levels of ability, as well as mainstream users. This paper examines the effectiveness of two methods that are used in inclusive design: user trials and exclusion calculations (an inclusive design inspection method). A study examined three autoinjectors using both methods (n=30 for the user trials). The usability issues identified by each method are compared and the effectiveness of the methods is discussed. The study found that each method identified different kinds of issues, all of which are important for inclusive design. We therefore conclude that a combination of methods should be used in inclusive design rather than relying on a single method. Recommendations are also given for how the individual methods can be used more effectively in this context.
Waterflooding injectate design systems and methods
Brady, Patrick V.; Krumhansl, James L.
2016-12-13
A method of recovering a liquid hydrocarbon using an injectate includes recovering the liquid hydrocarbon through primary extraction. Physico-chemical data representative of electrostatic interactions between the liquid hydrocarbon and the reservoir rock are measured. At least one additive of the injectate is selected based on the physico-chemical data. The method includes recovering the liquid hydrocarbon from the reservoir rock through secondary extraction using the injectate.
An overview of very high level software design methods
NASA Technical Reports Server (NTRS)
Asdjodi, Maryam; Hooper, James W.
1988-01-01
Very high level design methods emphasize the automatic transfer of requirements to formal design specifications, and/or may concentrate on the automatic transformation of formal design specifications that include some semantic information of the system into machine-executable form. Very high level design methods range from general domain-independent methods to approaches implementable for specific applications or domains. Different approaches to higher-level software design are being developed by applying AI techniques, abstract programming methods, domain heuristics, software engineering tools, library-based programming, and other methods. Though a given approach does not always fall exactly into any specific class, this paper provides a classification for very high level design methods, including examples for each class. These methods are analyzed and compared based on their basic approaches, strengths, and feasibility for future expansion toward the automatic development of software systems.
Comparison of four nonstationary hydrologic design methods for changing environment
NASA Astrophysics Data System (ADS)
Yan, Lei; Xiong, Lihua; Guo, Shenglian; Xu, Chong-Yu; Xia, Jun; Du, Tao
2017-08-01
The hydrologic design of nonstationary flood extremes is an emerging field that is essential for water resources management and hydrologic engineering design to cope with a changing environment. This paper aims to investigate and compare the capability of four nonstationary hydrologic design strategies: the expected number of exceedances (ENE), design life level (DLL), equivalent reliability (ER), and average design life level (ADLL), with the last three methods taking into consideration the design life of the project. The confidence intervals of the calculated design floods were also estimated using the nonstationary bootstrap approach. A comparison of these four methods was performed using the annual maximum flood series (AMFS) of the Weihe River basin, Jinghe River basin, and Assunpink Creek basin. The results indicated that ENE, ER, and ADLL yielded the same or very similar design values and confidence intervals for both the increasing and decreasing trends of AMFS considered. DLL also yields similar design values if the relationship between the DLL and ER/ADLL return periods is considered. Both ER and ADLL are recommended for practical use, as they associate design floods with the design life period of projects and yield reasonable design quantiles and confidence intervals. Furthermore, by assuming that the design results using either a stationary or nonstationary hydrologic design strategy should have the same reliability, the ER method enables us to solve nonstationary hydrologic design problems by adopting the stationary design reliability, thus bridging the gap between stationary and nonstationary design criteria.
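As an illustration of one of the four strategies, the ENE rule picks the design value whose expected number of annual exceedances over the next m years equals one. Below is a minimal sketch under a hypothetical Gumbel model whose location parameter drifts linearly in time; the distribution, trend, and all parameter values are invented for illustration and are not the paper's fitted models.

```python
import math

def gumbel_cdf(z, loc, scale):
    """Gumbel (EV1) cumulative distribution function."""
    return math.exp(-math.exp(-(z - loc) / scale))

def ene_design_value(m, loc0, trend, scale):
    """Solve sum_{t=1..m} [1 - F_t(z)] = 1 for z by bisection, where the
    Gumbel location parameter drifts as loc_t = loc0 + trend * t."""
    def expected_exceedances(z):
        return sum(1.0 - gumbel_cdf(z, loc0 + trend * t, scale)
                   for t in range(1, m + 1))
    lo, hi = loc0, loc0 + trend * m + 20.0 * scale  # brackets the root
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if expected_exceedances(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 100-year design flood with an upward trend vs. the stationary equivalent
# (hypothetical units, e.g. m^3/s):
z_nonstat = ene_design_value(m=100, loc0=1000.0, trend=2.0, scale=200.0)
z_stat = ene_design_value(m=100, loc0=1000.0, trend=0.0, scale=200.0)
print(f"stationary: {z_stat:.0f}, nonstationary: {z_nonstat:.0f}")
```

With trend = 0 the rule reduces to the usual stationary quantile (F(z) = 0.99 for m = 100), so the stationary result can be checked in closed form; the upward trend pushes the design value higher, which is the behavior the four methods quantify in different ways.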
The Triton: Design concepts and methods
NASA Technical Reports Server (NTRS)
Meholic, Greg; Singer, Michael; Vanryn, Percy; Brown, Rhonda; Tella, Gustavo; Harvey, Bob
1992-01-01
During the design of the C & P Aerospace Triton, a few problems were encountered that necessitated changes in the configuration. After the initial concept phase, the aspect ratio was increased from 7 to 7.6 to produce a greater lift to drag ratio (L/D = 13) which satisfied the horsepower requirements (118 hp using the Lycoming O-235 engine). The initial concept had a wing planform area of 134 sq. ft. Detailed wing sizing analysis enlarged the planform area to 150 sq. ft., without changing its layout or location. The most significant changes, however, were made just prior to inboard profile design. The fuselage external diameter was reduced from 54 to 50 inches to reduce drag to meet the desired cruise speed of 120 knots. Also, the nose was extended 6 inches to accommodate landing gear placement. Without the extension, the nosewheel received an unacceptable percentage (25 percent) of the landing weight. The final change in the configuration was made in accordance with the stability and control analysis. In order to reduce the static margin from 20 to 13 percent, the horizontal tail area was reduced from 32.02 to 25.0 sq. ft. The Triton meets all the specifications set forth in the design criteria. If time permitted another iteration of the calculations, two significant changes would be made. The vertical stabilizer area would be reduced to decrease the aircraft lateral stability slope since the current value was too high in relation to the directional stability slope. Also, the aileron size would be decreased to reduce the roll rate below the current 106 deg/second. Doing so would allow greater flap area (increasing CL(sub max)) and thus reduce the overall wing area. C & P would also recalculate the horsepower and drag values to further validate the 120 knot cruising speed.
Equipartition gamma-ray blazars and the location of the gamma-ray emission site in 3C 279
Dermer, Charles D.; Cerruti, Matteo; Lott, Benoit
2014-02-20
Blazar spectral models generally have numerous unconstrained parameters, leading to ambiguous values for physical properties like the Doppler factor δ_D or fluid magnetic field B′. To help remedy this problem, a few modifications of the standard leptonic blazar jet scenario are considered. First, a log-parabola function for the electron distribution is used. Second, analytic expressions relating energy loss and kinematics to blazar luminosity and variability, written in terms of equipartition parameters, imply δ_D, B′, and the peak electron Lorentz factor γ′_pk. The external radiation field in a blazar is approximated by Lyα radiation from the broad-line region (BLR) and ≈0.1 eV infrared radiation from a dusty torus. When used to model 3C 279 spectral energy distributions from 2008 and 2009 reported by Hayashida et al., we derive δ_D ~ 20-30, B′ ~ a few G, and total (IR + BLR) external radiation field energy densities u ~ 10^-2-10^-3 erg cm^-3, implying an origin of the γ-ray emission site in 3C 279 at the outer edges of the BLR. This is consistent with the γ-ray emission site being located at a distance R ≲ Γ^2 c t_var ~ 0.1 (Γ/30)^2 (t_var/10^4 s) pc from the black hole powering 3C 279's jets, where t_var is the variability timescale of the radiation in the source frame, and at farther distances for narrow-jet and magnetic-reconnection models. Excess ≳5 GeV γ-ray emission observed with the Fermi LAT from 3C 279 challenges the model, opening the possibility of a second leptonic component or a hadronic origin of the emission. For low hadronic content, absolute jet powers of ≈10% of the Eddington luminosity are calculated.
Research and Methods for Simulation Design: State of the Art
1990-09-01
...designers. Designers may use this review to identify methods to aid the training-device design process and individuals who manage research programs... maximum training effectiveness at a given cost. The methods should apply to the concept-formulation phase of the training-device development process... design process. Finally, individuals who manage research programs may use this information to set priorities for future research efforts.
How to Construct a Mixed Methods Research Design.
Schoonenboom, Judith; Johnson, R Burke
2017-01-01
This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.
Design Features of Explicit Values Clarification Methods: A Systematic Review.
Witteman, Holly O; Scherer, Laura D; Gavaruzzi, Teresa; Pieterse, Arwen H; Fuhrel-Forbis, Andrea; Chipenda Dansokho, Selma; Exe, Nicole; Kahn, Valerie C; Feldman-Stewart, Deb; Col, Nananda F; Turgeon, Alexis F; Fagerlin, Angela
2016-05-01
Background: Values clarification is a recommended element of patient decision aids. Many different values clarification methods exist, but there is little evidence synthesis available to guide design decisions. Purpose: To describe practices in the field of explicit values clarification methods according to a taxonomy of design features. Data sources: MEDLINE, all EBM Reviews, CINAHL, EMBASE, Google Scholar, manual search of reference lists, and expert contacts. Study selection: Articles were included if they described 1 or more explicit values clarification methods. Data extraction: We extracted data about decisions addressed; use of theories, frameworks, and guidelines; and 12 design features. Results: We identified 110 articles describing 98 explicit values clarification methods. Most of these addressed decisions in cancer or reproductive health, and half addressed a decision between just 2 options. Most used neither theory nor guidelines to structure their design. "Pros and cons" was the most common type of values clarification method. Most methods did not allow users to add their own concerns. Few methods explicitly presented tradeoffs inherent in the decision, supported an iterative process of values exploration, or showed how different options aligned with users' values. Limitations: Study selection criteria and choice of elements for the taxonomy may have excluded values clarification methods or design features. Conclusions: Explicit values clarification methods have diverse designs but can be systematically cataloged within the structure of a taxonomy. Developers of values clarification methods should carefully consider each of the design features in this taxonomy and publish adequate descriptions of their designs. More research is needed to study the effects of different design features. © The Author(s) 2016.
A survey on methods of design features identification
NASA Astrophysics Data System (ADS)
Grabowik, C.; Kalinowski, K.; Paprocka, I.; Kempa, W.
2015-11-01
It is widely accepted that design features are one of the most attractive means of integrating most fields of engineering activity, such as design modelling, process planning or production scheduling. One of the most important tasks realized in the process of integrating design and planning functions is design translation, understood as the mapping of design data into data that are important from the point of view of process planning needs, i.e. manufacturing data. A design geometrical shape translation process can be realized with one of the following strategies: (i) designing with a previously prepared design features library, also known as the DBF (design by feature) method, (ii) interactive design features recognition (IFR), (iii) automatic design features recognition (AFR). In the DBF method, the design geometrical shape is created with design features. There are two basic approaches to design modelling in the DBF method: the classic approach, in which a part design is modelled from beginning to end with design features previously stored in a design features database, and the hybrid approach, in which the part is partially created with standard predefined CAD system tools and the rest with suitable design features. Automatic feature recognition consists in an autonomous search of a product model, represented with a specific design representation method, in order to find those model features which might potentially be recognized as design features, manufacturing features, etc. This approach requires a searching algorithm to be prepared. The searching algorithm should allow the whole recognition process to be carried out without user supervision. Currently there are many AFR methods. These methods most often need the product model to be represented with B-Rep representation, rarely CSG, very rarely wireframe. In the IFR method potential features are recognized by a user. This process is most often realized by a user who points out those surfaces which seem to belong to a
Design Methods and Optimization for Morphing Aircraft
NASA Technical Reports Server (NTRS)
Crossley, William A.
2005-01-01
This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area, titled 'morphing as an independent variable', formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.
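The first accomplishment rests on extracting the Pareto-optimal subset of fixed-geometry designs under two competing objectives. A minimal sketch of that extraction, with hypothetical design names and objective values (both objectives to be minimized, e.g. weight and fuel burn), might look like this:

```python
def pareto_front(designs):
    """Return the designs not dominated by any other design, where each
    design is (name, f1, f2) and both objectives are minimized. A design
    is dominated if another is at least as good in both objectives and
    not identical in objective space."""
    front = []
    for name, f1, f2 in designs:
        dominated = any(o1 <= f1 and o2 <= f2 and (o1, o2) != (f1, f2)
                        for _, o1, o2 in designs)
        if not dominated:
            front.append((name, f1, f2))
    return front

# Hypothetical fixed-geometry candidates scored on two competing objectives:
designs = [("A", 1.0, 5.0), ("B", 2.0, 3.0), ("C", 3.0, 1.0),
           ("D", 2.5, 3.5), ("E", 1.5, 4.0)]
print(pareto_front(designs))
```

Design D is dominated by B (worse in both objectives) and drops out; the surviving front traces the tradeoff curve whose spread is what suggests where a morphing feature could recover performance lost to fixed geometry.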
Preliminary design method for deployable spacecraft beams
NASA Technical Reports Server (NTRS)
Mikulas, Martin M., Jr.; Cassapakis, Costas
1995-01-01
There is currently considerable interest in low-cost, lightweight, compactly packageable deployable elements for various future missions involving small spacecraft. These elements must also have a simple and reliable deployment scheme and possess zero or very small free-play. Although most small spacecraft do not experience large disturbances, very low stiffness appendages or free-play can couple with even small disturbances and lead to unacceptably large attitude errors which may involve the introduction of a flexible-body control system. A class of structures referred to as 'rigidized structures' offers significant promise in providing deployable elements that will meet these needs for small spacecraft. The purpose of this paper is to introduce several rigidizable concepts and to develop a design methodology which permits a rational comparison of these elements to be made with alternate concepts.
Method for designing and controlling compliant gripper
NASA Astrophysics Data System (ADS)
Spanu, A. R.; Besnea, D.; Avram, M.; Ciobanu, R.
2016-08-01
Compliant grippers are useful for high-accuracy grasping of small objects, with adaptive control of the contact points along the active surfaces of the fingers. Spatial trajectories of the elements become a must, due to the development of MEMS. The paper presents the solution for the compliant gripper designed by the authors, so both planar and spatial movements are discussed. At the beginning of the process, the gripper can work as a passive one, just until it reaches the object surface. The forces provided by the elements have to avoid damaging the object. As part of the system, a camera takes a picture of the object in order to facilitate the positioning of the system. When contact is established, the mechanism acts as an active gripper by using an electric stepper motor with controlled movement.
A flexible layout design method for passive micromixers.
Deng, Yongbo; Liu, Zhenyu; Zhang, Ping; Liu, Yongshun; Gao, Qingyong; Wu, Yihui
2012-10-01
This paper discusses a flexible layout design method for passive micromixers based on the topology optimization of fluidic flows. Unlike the trial-and-error method, this method obtains the detailed layout of a passive micromixer according to the desired mixing performance by solving a topology optimization problem. Therefore, the dependence on the experience of the designer is weakened when this method is used to design a passive micromixer with acceptable mixing performance. Several design disciplines for passive micromixers are considered to demonstrate the flexibility of the layout design method. These design disciplines include the approximation of the real 3D micromixer, manufacturing feasibility, the spatial periodic design, and the effects of the Péclet number and Reynolds number on the obtained designs. The capability of this design method is validated by several comparisons performed between the obtained layouts and the optimized designs in recently published literature, where the value of the mixing measurement is improved by up to 40.4% for one cycle of the micromixer.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the design optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, constraints are imposed on failure modes, and an optimization algorithm generates the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters; the solution is obtained by solving a design optimization problem for a specified reliability. All three methods produced acceptable solutions. The variation in the weights calculated by the methods was modest, and some variation was noticed in the designs, which may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved by using simplified sensitivities of the behavior constraints; such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph, whose center corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, while weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.
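The stochastic design concept can be illustrated with a minimal Monte Carlo sketch: size a bar in tension so that its failure probability meets a specified reliability. The load and strength statistics, the bisection scheme, and all numbers are illustrative assumptions, not the paper's structures:

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_rate(area, n=100_000):
    """Monte Carlo failure probability of a bar in tension.
    Load and yield strength are random; the numbers are invented."""
    load = rng.normal(100e3, 15e3, n)      # axial load [N]
    strength = rng.normal(250e6, 20e6, n)  # yield strength [Pa]
    return float(np.mean(load / area > strength))

def design_for_reliability(target_pf):
    """Bisect on the cross-sectional area until the failure probability
    drops to the specified level -- the stochastic design concept."""
    lo, hi = 1e-5, 1e-2  # area bounds [m^2]
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if failure_rate(mid) > target_pf:
            lo = mid
        else:
            hi = mid
    return hi

a_mean = design_for_reliability(0.5)   # mean-valued design (center of the S-curve)
a_safe = design_for_reliability(1e-3)  # high-reliability design
print(a_safe > a_mean)  # True: more reliability costs more material (weight)
```

Sweeping `target_pf` from near 1 to near 0 traces out the weight-versus-reliability curve the abstract describes, with weight growing without bound as the failure rate approaches zero.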
NASA Astrophysics Data System (ADS)
Kawai, Toshiyuki; Rinoie, Kenichi
The aircraft conceptual design method currently used in university design education mainly utilises empirical values based on statistical databases to determine the main design parameters. It is therefore often difficult for students to understand the effects of aerodynamic parameters such as the wing aspect ratio and taper ratio during the design process. In this paper, a conceptual design method that incorporates a boundary element method is discussed, so that aerodynamic characteristics can be estimated and students can easily comprehend the effects of aerodynamic parameters while designing the airplane. A single-engine light airplane has been designed by the present conceptual design method. The results obtained by the present method and those by the conventional method are compared and discussed.
Conceptual design of clean processes: Tools and methods
Hurme, M.
1996-12-31
Design tools available for implementing clean design in practice are discussed, together with their application areas and methods for comparing clean process alternatives. Environmental principles are becoming increasingly important over the whole life cycle of products, from design, manufacturing and marketing to disposal. The obstacle to implementing clean technology in design has been the necessity to apply it in all phases of design, starting from the very beginning, since it concerns the major selections made in conceptual process design. Therefore both a modified design approach and new tools are needed to make the application of clean technology practical. The first item, extended process design methodologies, has been presented by Hurme, Douglas, Rossiter, and Klee, Hilaly and Sikdar. The aim of this paper is to discuss the latter topic: the process design tools that assist in implementing clean principles in process design. 22 refs., 2 tabs.
Analytical techniques for instrument design - matrix methods
Robinson, R.A.
1997-09-01
We take the traditional Cooper-Nathans approach, as has been applied for many years to steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ and two dummy variables)), integration over one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the Gaussian approximation. We argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
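Two of the toolbox operations, inversion to a variance-covariance matrix and integration over dummy variables, can be checked numerically. The sketch below uses an arbitrary positive-definite matrix (not a real spectrometer's) and the standard Gaussian identity that integrating out variables corresponds to a Schur complement of the quadratic-form matrix, or equivalently to deleting rows and columns of the covariance:

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric positive-definite 6x6 "resolution matrix" M, standing in for
# the Cooper-Nathans matrix (illustrative numbers only).
A = rng.normal(size=(6, 6))
M = A @ A.T + 6 * np.eye(6)

# Inversion gives the variance-covariance matrix of the resolution function.
cov = np.linalg.inv(M)

# Integrating exp(-x^T M x / 2) over the last two ("dummy") variables is
# equivalent to taking the Schur complement of M ...
keep, drop = slice(0, 4), slice(4, 6)
M_kk, M_kd, M_dd = M[keep, keep], M[keep, drop], M[drop, drop]
M_marginal = M_kk - M_kd @ np.linalg.inv(M_dd) @ M_kd.T

# ... while in covariance space it is just deleting rows and columns.
assert np.allclose(np.linalg.inv(M_marginal), cov[keep, keep])
print("marginalisation via Schur complement matches the covariance sub-block")
```

This is why the toolbox can freely mix operations in "matrix space" and "covariance space": each integration or constraint has an exact counterpart on the inverse matrix.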
HEALTHY study rationale, design and methods
2009-01-01
The HEALTHY primary prevention trial was designed and implemented in response to the growing numbers of children and adolescents being diagnosed with type 2 diabetes. The objective was to moderate risk factors for type 2 diabetes. Modifiable risk factors measured were indicators of adiposity and glycemic dysregulation: body mass index ≥85th percentile, fasting glucose ≥5.55 mmol l-1 (100 mg per 100 ml) and fasting insulin ≥180 pmol l-1 (30 μU ml-1). A series of pilot studies established the feasibility of performing data collection procedures and tested the development of an intervention consisting of four integrated components: (1) changes in the quantity and nutritional quality of food and beverage offerings throughout the total school food environment; (2) physical education class lesson plans and accompanying equipment to increase both participation and number of minutes spent in moderate-to-vigorous physical activity; (3) brief classroom activities and family outreach vehicles to increase knowledge, enhance decision-making skills and support and reinforce youth in accomplishing goals; and (4) communications and social marketing strategies to enhance and promote changes through messages, images, events and activities. Expert study staff provided training, assistance, materials and guidance for school faculty and staff to implement the intervention components. A cohort of students was enrolled in sixth grade and followed to the end of eighth grade. They attended a health screening data collection at baseline and end of study that involved measurement of height, weight, blood pressure, waist circumference and a fasting blood draw. Height and weight were also collected at the end of the seventh grade. The study was conducted in 42 middle schools, six at each of seven locations across the country, with 21 schools randomized to receive the intervention and 21 to act as controls (data collection activities only). Middle school was the unit of sample size and
System and method of designing models in a feedback loop
Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.
2017-02-14
A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
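A minimal sketch of the feedback idea, with a hypothetical aggregate (the median) and made-up model outputs, since the patent abstract does not fix either:

```python
import statistics

def feedback_round(predictions):
    """One round of the feedback loop: aggregate the models' outputs and
    report each model's deviation from the ensemble consensus.
    (Illustrative sketch; the patent does not prescribe a specific aggregate.)"""
    consensus = statistics.median(predictions.values())
    deviations = {name: p - consensus for name, p in predictions.items()}
    return consensus, deviations

# Hypothetical outputs of three models for a common event of interest.
models = {"m1": 10.2, "m2": 9.8, "m3": 14.0}
consensus, dev = feedback_round(models)
print(consensus)             # 10.2
print(round(dev["m3"], 2))   # 3.8 -- m3 deviates most and is flagged for re-design
```

Iterating such rounds, with the comparative information fed back into each model's design, is the loop the claim describes.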
What Can Mixed Methods Designs Offer Professional Development Program Evaluators?
ERIC Educational Resources Information Center
Giordano, Victoria; Nevin, Ann
2007-01-01
In this paper, the authors describe the benefits and pitfalls of mixed methods designs. They argue that mixed methods designs may be preferred when evaluating professional development programs for p-K-12 education given the new call for accountability in making data-driven decisions. They summarize and critique the studies in terms of limitations…
14 CFR 161.9 - Designation of noise description methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Designation of noise description methods... TRANSPORTATION (CONTINUED) AIRPORTS NOTICE AND APPROVAL OF AIRPORT NOISE AND ACCESS RESTRICTIONS General Provisions § 161.9 Designation of noise description methods. For purposes of this part, the following...
14 CFR 161.9 - Designation of noise description methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Designation of noise description methods... TRANSPORTATION (CONTINUED) AIRPORTS NOTICE AND APPROVAL OF AIRPORT NOISE AND ACCESS RESTRICTIONS General Provisions § 161.9 Designation of noise description methods. For purposes of this part, the following...
14 CFR 161.9 - Designation of noise description methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Designation of noise description methods... TRANSPORTATION (CONTINUED) AIRPORTS NOTICE AND APPROVAL OF AIRPORT NOISE AND ACCESS RESTRICTIONS General Provisions § 161.9 Designation of noise description methods. For purposes of this part, the following...
14 CFR 161.9 - Designation of noise description methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Designation of noise description methods... TRANSPORTATION (CONTINUED) AIRPORTS NOTICE AND APPROVAL OF AIRPORT NOISE AND ACCESS RESTRICTIONS General Provisions § 161.9 Designation of noise description methods. For purposes of this part, the following...
14 CFR 161.9 - Designation of noise description methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Designation of noise description methods... TRANSPORTATION (CONTINUED) AIRPORTS NOTICE AND APPROVAL OF AIRPORT NOISE AND ACCESS RESTRICTIONS General Provisions § 161.9 Designation of noise description methods. For purposes of this part, the following...
A design method of divertor in tokamak reactors
NASA Astrophysics Data System (ADS)
Ueda, N.; Itoh, S.-I.; Tanaka, M.; Itoh, K.
1990-08-01
A computational method to design an efficient divertor configuration in a tokamak reactor is presented. A two-dimensional code was developed to analyze the distributions of plasma and neutral particles for realistic configurations. Using this code, a method to design an efficient divertor configuration is developed. An example of a new divertor, which consists of baffle and fin plates, is analyzed.
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
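The first two propagation methods can be sketched on a toy weight model; the function and all numbers are invented for illustration (the paper uses an aircraft conceptual design program, not this formula):

```python
import numpy as np

def wing_weight(span, chord):
    """Toy structural weight model (illustrative; not the paper's design code)."""
    return 4.5 * span * chord + 0.02 * (span * chord) ** 2

mu = np.array([30.0, 4.0])    # means of the two configuration design variables
sigma = np.array([0.5, 0.1])  # small input uncertainties

# Method of moments: first-order mean and variance from central differences.
eps = 1e-6
grad = np.array([
    (wing_weight(mu[0] + eps, mu[1]) - wing_weight(mu[0] - eps, mu[1])) / (2 * eps),
    (wing_weight(mu[0], mu[1] + eps) - wing_weight(mu[0], mu[1] - eps)) / (2 * eps),
])
mom_mean = wing_weight(*mu)
mom_std = float(np.sqrt(np.sum((grad * sigma) ** 2)))

# Monte Carlo simulation: sample the inputs and push them through the model.
rng = np.random.default_rng(2)
mc = wing_weight(rng.normal(mu[0], sigma[0], 100_000),
                 rng.normal(mu[1], sigma[1], 100_000))
print(mom_mean)           # 828.0
print(mom_std, mc.std())  # the two estimates agree for small input uncertainties
```

The gradient-based method of moments breaks down when the design space is discontinuous, which is exactly why the paper highlights the implementation features needed to push small uncertainties through such a program.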
Expanding color design methods for architecture and allied disciplines
NASA Astrophysics Data System (ADS)
Linton, Harold E.
2002-06-01
The color design processes of the visual artists, architects, designers, and theoreticians included in this presentation reflect the practical role of color in architecture. What the color design professional brings to the architectural design team is an expertise and rich sensibility made up of a broad awareness and a finely tuned visual perception. This includes a knowledge of design and its history, expertise with industrial color materials and their methods of application, an awareness of design context and cultural identity, a background in physiology and psychology as they relate to human welfare, and an ability to problem-solve and respond creatively to design concepts with innovative ideas. Broadening the definition of the colorist's role in architectural design provides architects, artists, and designers with significant opportunities for continued professional and educational development.
Designing Adaptive Intensive Interventions Using Methods from Engineering
Lagoa, Constantino M.; Bekiroglu, Korkut; Lanza, Stephanie T.; Murphy, Susan A.
2014-01-01
Objective Adaptive intensive interventions are introduced and new methods from the field of control engineering for use in their design are illustrated. Method A detailed step-by-step explanation of how control engineering methods can be used with intensive longitudinal data to design an adaptive intensive intervention is provided. The methods are evaluated via simulation. Results Simulation results illustrate how the designed adaptive intensive intervention can result in improved outcomes with less treatment by providing treatment only when it is needed. Furthermore, the methods are robust to model misspecification as well as the influence of unobserved causes. Conclusions These new methods can be used to design adaptive interventions that are effective yet reduce participant burden. PMID:25244394
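A minimal simulation in the spirit of the abstract: a toy first-order outcome model in which a controller delivers treatment only when the outcome drifts below a threshold. The dynamics, threshold, and dose effect are invented for illustration; the paper designs its controllers from intensive longitudinal data:

```python
def simulate(always_treat):
    """Toy first-order outcome model: x decays toward 0 without treatment,
    and a dose u gives a fixed boost. All constants are illustrative."""
    x, doses, history = 5.0, 0.0, []
    for _ in range(50):
        u = 1.0 if (always_treat or x < 4.0) else 0.0  # treat only when needed
        doses += u
        x = 0.9 * x + 2.0 * u
        history.append(x)
    return doses, min(history[10:])

always_doses, always_low = simulate(True)
adaptive_doses, adaptive_low = simulate(False)
print(adaptive_doses < always_doses)  # True: less treatment overall ...
print(adaptive_low >= 3.6)            # True: ... while the outcome stays in range
```

The adaptive policy reduces participant burden (far fewer doses) while keeping the outcome from collapsing, which is the qualitative result the simulations in the paper demonstrate with engineered controllers.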
A simple inverse design method for pump turbine
NASA Astrophysics Data System (ADS)
Yin, Junlian; Li, Jingjing; Wang, Dezhong; Wei, Xianzhu
2014-03-01
In this paper, a simple inverse design method is proposed for pump turbines. The main point of this method is that the blade loading distribution is first extracted from an existing model and then applied in the new design. As an example, the blade loading distribution of a runner designed for a 200 m head was analyzed. Then, the combination of the extracted blade loading and a meridional passage suitable for a 500 m head was applied to design a new runner. CFD analysis and model testing show that the new runner performs very well in terms of efficiency and cavitation. Therefore, as an alternative, the inverse design method can be extended to other design applications.
Design methods for fault-tolerant finite state machines
NASA Technical Reports Server (NTRS)
Niranjan, Shailesh; Frenzel, James F.
1993-01-01
VLSI electronic circuits are increasingly being used in space-borne applications where high levels of radiation may induce faults, known as single event upsets. In this paper we review the classical methods of designing fault tolerant digital systems, with an emphasis on those methods which are particularly suitable for VLSI-implementation of finite state machines. Four methods are presented and will be compared in terms of design complexity, circuit size, and estimated circuit delay.
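One classical technique of the kind reviewed can be sketched concretely: assigning state codes with a minimum Hamming distance of 2, so that any single event upset produces an invalid code and can be detected. The state names and 4-bit codes below are illustrative, not from the paper:

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two state codes."""
    return bin(a ^ b).count("1")

# Even-parity state assignment: every pair of codes differs in >= 2 bits,
# so one flipped bit can never turn a valid state into another valid state.
STATES = {"IDLE": 0b0000, "LOAD": 0b0011, "RUN": 0b0101, "DONE": 0b0110}
codes = set(STATES.values())
assert all(hamming(a, b) >= 2 for a, b in combinations(codes, 2))

def check_state(code):
    """Detect a single event upset: a flipped bit yields an invalid code."""
    return code in codes

upset = STATES["RUN"] ^ 0b0100  # a single event upset flips one bit
print(check_state(upset))       # False: the fault is detected
```

The cost of such schemes is exactly the trade-off the paper compares: extra state bits and checking logic (circuit size, design complexity) against fault coverage and circuit delay.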
Review of design optimization methods for turbomachinery aerodynamics
NASA Astrophysics Data System (ADS)
Li, Zhihui; Zheng, Xinqian
2017-08-01
In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and "greener" but also need to be developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. The review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods; (2) stochastic optimization combined with blade parameterization methods and design-of-experiment methods; (3) gradient-based optimization methods for compressors and turbines; and (4) data mining techniques for Pareto fronts. We also present our own insights regarding the current research trends and the future optimization of turbomachinery designs.
Inviscid transonic wing design using inverse methods in curvilinear coordinates
NASA Technical Reports Server (NTRS)
Gally, Thomas A.; Carlson, Leland A.
1987-01-01
An inverse wing design method has been developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
System Design Support by Optimization Method Using Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
We propose a new optimization method based on a stochastic process. Its characteristic is that the approximate optimum solution is obtained as an expected value. In the numerical calculation, a kind of Monte Carlo method is used to obtain the solution because of the stochastic process. The method also yields the probability distribution of each design variable, because design variables are generated with probability proportional to the evaluation function value. This probability distribution shows the influence of the design variables on the evaluation function value and is therefore very useful information for system design. In this paper, the proposed method is shown to be useful not only for optimization but also for system design. The flight trajectory optimization problem for a hang-glider is presented as an example of the numerical calculation.
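The core idea, generating design variables with probability proportional to the evaluation function value and reading the optimum off as an expected value, can be sketched as follows. One design variable, an invented merit function, and grid-weighted sampling stand in for the paper's Monte Carlo scheme:

```python
import numpy as np

def merit(x):
    """Evaluation function to be maximised (illustrative one-variable example)."""
    return np.exp(-4.0 * (x - 1.3) ** 2)

# Generate design variables "in proportion to the evaluation function value".
# Here this is done by weighted sampling on a grid; in general a Monte Carlo
# (e.g. Metropolis-style) scheme plays the same role.
grid = np.linspace(-2.0, 3.0, 2001)
weights = merit(grid)
rng = np.random.default_rng(3)
samples = rng.choice(grid, size=50_000, p=weights / weights.sum())

# The expected value approximates the optimum, and the spread of the
# distribution shows how strongly the design variable influences the merit.
print(samples.mean())  # close to the optimum at x = 1.3
print(samples.std())   # sensitivity: a narrow spread means a strong influence
```

The by-product distribution, not just its mean, is what the abstract argues is valuable for system design.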
Tabu search method with random moves for globally optimal design
NASA Astrophysics Data System (ADS)
Hu, Nanfang
1992-09-01
Optimum engineering design problems are usually formulated as non-convex optimization problems in continuous variables. Because of the absence of a convexity structure, they can have multiple minima, and global optimization becomes difficult. Traditional optimization methods, such as penalty methods, can often be trapped at a local optimum. A tabu search method with random moves for solving these problems approximately is introduced. Its reliability and efficiency are examined with the help of standard test functions. Analysis of the implementations shows that this method is easy to use and requires no derivative information. It outperforms the random search method and a composite genetic algorithm. In particular, it is applied to minimum-weight design examples of a three-bar truss, coil springs, a Z-section, and a channel section. For the channel section, the optimal design obtained by the tabu search method with random moves saved 26.14 percent of weight relative to the SUMT method.
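A coarse sketch of tabu search with random moves on a multimodal test function; the neighborhood size, tabu radius, and test function are assumptions, not the paper's settings:

```python
import math
import random

def tabu_search(f, x0, step=1.5, iters=400, tabu_len=20, seed=4):
    """Tabu search with random moves for a single continuous variable.
    No derivatives are used; the tabu list forbids revisiting recent points,
    which forces the search out of local minima."""
    rng = random.Random(seed)
    x, best, best_f = x0, x0, f(x0)
    tabu = []
    for _ in range(iters):
        # propose random moves around the current point
        cands = [x + rng.uniform(-step, step) for _ in range(10)]
        # discard moves that fall too close to recently visited (tabu) points
        cands = [c for c in cands if all(abs(c - t) > 0.05 for t in tabu)] or [x]
        x = min(cands, key=f)            # accept the best move, even if uphill
        tabu = (tabu + [x])[-tabu_len:]  # fixed-length tabu memory
        if f(x) < best_f:
            best, best_f = x, f(x)
    return best, best_f

# Multimodal test function: local minima near x = 2 + k*pi/3, global minimum at x = 2.
f = lambda x: (x - 2.0) ** 2 + 1.0 - math.cos(6.0 * (x - 2.0))
best, best_f = tabu_search(f, x0=-3.0)
print(best)  # close to 2.0
```

Because moves are accepted even when they worsen the objective, and recently visited regions are forbidden, the search can climb out of the local basins where a penalty method would stall.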
Designing adaptive intensive interventions using methods from engineering.
Lagoa, Constantino M; Bekiroglu, Korkut; Lanza, Stephanie T; Murphy, Susan A
2014-10-01
Adaptive intensive interventions are introduced, and new methods from the field of control engineering for use in their design are illustrated. A detailed step-by-step explanation of how control engineering methods can be used with intensive longitudinal data to design an adaptive intensive intervention is provided. The methods are evaluated via simulation. Simulation results illustrate how the designed adaptive intensive intervention can result in improved outcomes with less treatment by providing treatment only when it is needed. Furthermore, the methods are robust to model misspecification as well as the influence of unobserved causes. These new methods can be used to design adaptive interventions that are effective yet reduce participant burden. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Robust Multivariable Controller Design via Implicit Model-Following Methods.
1983-12-01
Robust Multivariable Controller Design via Implicit Model-Following Methods. Thesis, AFIT/GE/EE/83D-48, William G. Miller, Capt USAF, Air Force Institute of Technology, Wright-Patterson AFB, OH. Approved for public release; distribution unlimited.
An inverse method with regularity condition for transonic airfoil design
NASA Technical Reports Server (NTRS)
Zhu, Ziqiang; Xia, Zhixun; Wu, Liyi
1991-01-01
It is known from Lighthill's exact solution of the incompressible inverse problem that in the inverse design problem, the surface pressure distribution and the free stream speed cannot both be prescribed independently. This implies the existence of a constraint on the prescribed pressure distribution. The same constraint exists at compressible speeds. Presented here is an inverse design method for transonic airfoils. In this method, the target pressure distribution contains a free parameter that is adjusted during the computation to satisfy the regularity condition. Some design results are presented in order to demonstrate the capabilities of the method.
Design of freeform unobscured reflective imaging systems using CI method
NASA Astrophysics Data System (ADS)
Yang, Tong; Hou, Wei; Wu, Xiaofei; Jin, Guofan; Zhu, Jun
2016-10-01
In this paper, we demonstrate a design method for freeform unobscured reflective imaging systems using the point-by-point Construction-Iteration (CI) method. Compared with other point-by-point design methods, light rays of multiple fields and different pupil coordinates are employed in the design. The whole design process starts from a simple initial system consisting of decentered and tilted planes. In the preliminary surface-construction stage, the coordinates as well as the surface normals of the feature data points on each freeform surface are calculated point by point, directly from the given object-image relationships. The freeform surfaces are then generated through a novel surface fitting method that considers both the coordinates and the surface normals of the data points. Next, an iterative process is employed to significantly improve the image quality. In this way, an unobscured design with freeform surfaces can be obtained directly and taken as a good starting point for further optimization. The benefit and feasibility of this design method are demonstrated by two design examples of high-performance freeform unobscured imaging systems. Both systems show good imaging performance after the final design.
Design Method for EPS Control System Based on KANSEI Structure
NASA Astrophysics Data System (ADS)
Saitoh, Yumi; Itoh, Hideaki; Ozaki, Fuminori; Nakamura, Takenobu; Kawaji, Shigeyasu
Recently, it has been recognized that KANSEI engineering plays an important role in functional design for highly sophisticated products. In practical development, however, products are designed and optimised by trial and error, which means the outcome depends on the skills of experts. In this paper, we focus on an automobile electric power steering (EPS) system, for which a functional design is required. First, the KANSEI structure is determined on the basis of the steering feel of an experienced driver, and an EPS control design based on this KANSEI structure is proposed. Then, the EPS control parameters are adjusted in accordance with the KANSEI index. Finally, by assessing the experimental results obtained from the driver, the effectiveness of the proposed design method is verified.
Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective
ERIC Educational Resources Information Center
Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith
2010-01-01
The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…
An artificial viscosity method for the design of supercritical airfoils
NASA Technical Reports Server (NTRS)
Mcfadden, G. B.
1979-01-01
A numerical technique is presented for the design of two-dimensional supercritical wing sections with low wave drag. The method is a design mode of the analysis code H, which gives excellent agreement with experimental results and is widely used in the aircraft industry. Topics covered include the partial differential equations of transonic flow, the computational procedure and results, the design procedure, a convergence theorem, and a description of the code.
Stabilizing State-Feedback Design via the Moving Horizon Method.
1982-01-01
Keywords: stabilizing control design; linear time-varying systems; fixed-depth horizon; index optimization methods; dual system. A stabilizing control design for general linear time-varying systems through fixed-depth horizon index optimization is presented. Approved for public release; distribution unlimited.
Numerical methods for aerothermodynamic design of hypersonic space transport vehicles
NASA Astrophysics Data System (ADS)
Wanie, K. M.; Brenneis, A.; Eberle, A.; Heiss, S.
1993-04-01
The design of hypersonic vehicles requires prediction of the flow past entire configurations with wings, fins, flaps, and propulsion system, which represents one of the major challenges for aerothermodynamics. In this context, computational fluid dynamics has emerged as a powerful tool to support experimental work. Several numerical methods developed at MBB to meet the needs of the design process are described. The governing equations and fundamental details of the solution methods are briefly reviewed. Results are given both for geometrically simple test cases and for realistic hypersonic configurations. Since there is still a considerable lack of experience with hypersonic flow calculations, extensive testing and verification is essential; this verification is done by comparing results with experimental data and with other numerical methods. The results presented prove that the methods used are robust, flexible, and accurate enough to meet the strong demands of the design process.
Working stress design method for reinforced soil walls
Ehrlich, M. ); Mitchell, J.K. )
1994-04-01
A method for the internal design of reinforced soil walls based on working stresses is developed and evaluated using measurements from five full-scale structures containing a range of reinforcement types. It is shown that, in general, the stiffer the reinforcement system and the higher the stresses induced during compaction, the higher are the tensile stresses that must be resisted by the reinforcements. Unique features of this method, compared to currently used reinforced soil wall design methods, are that it can be applied to all types of reinforcement systems, reinforcement and soil stiffness properties are considered, and backfill compaction stresses are taken explicitly into account. The method can be applied either analytically or using design charts. A design example is included.
Two-Method Planned Missing Designs for Longitudinal Research
ERIC Educational Resources Information Center
Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D.
2014-01-01
We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…
New directions for Artificial Intelligence (AI) methods in optimum design
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1989-01-01
Developments and applications of artificial intelligence (AI) methods in the design of structural systems is reviewed. Principal shortcomings in the current approach are emphasized, and the need for some degree of formalism in the development environment for such design tools is underscored. Emphasis is placed on efforts to integrate algorithmic computations in expert systems.
Investigating the Use of Design Methods by Capstone Design Students at Clemson University
ERIC Educational Resources Information Center
Miller, W. Stuart; Summers, Joshua D.
2013-01-01
The authors describe a preliminary study of engineering students' attitudes regarding the use of design methods in projects, undertaken to identify the factors affecting or influencing the use of these methods by novice engineers. A senior undergraduate capstone design course at Clemson University, consisting of approximately fifty…
Approximate method of designing a two-element airfoil
NASA Astrophysics Data System (ADS)
Abzalilov, D. F.; Mardanov, R. F.
2011-09-01
An approximate method is proposed for designing a two-element airfoil. The method is based on reducing an inverse boundary-value problem in a doubly connected domain to a problem in a singly connected domain located on a multisheet Riemann surface. The essence of the method is replacement of channels between the airfoil elements by channels of flow suction and blowing. The shape of these channels asymptotically tends to the annular shape of channels passing to infinity on the second sheet of the Riemann surface. The proposed method can be extended to designing multielement airfoils.
New knowledge network evaluation method for design rationale management
NASA Astrophysics Data System (ADS)
Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao
2015-01-01
Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives: design rationale structure scale, association knowledge and reasoning ability, degree of design justification support, and degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the method, different styles of DR knowledge network and the performance of the proposed measure are discussed. The evaluation method has been applied to two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that provides an objective metric and a selection basis for DR knowledge reuse during the product design process. In addition, the method offers more effective guidance and support for the application and management of DR knowledge.
Design method for four-reflector type beam waveguide systems
NASA Technical Reports Server (NTRS)
Betsudan, S.; Katagi, T.; Urasaki, S.
1986-01-01
A method is discussed for the design of four-reflector beam waveguide feed systems, composed of a conical horn and four focused reflectors, which are widely used as the primary reflector systems for communications satellite Earth station antennas. The design parameters for these systems are clarified, the relations between the parameters are brought out based on the beam mode development, and the independent design parameters are specified. The characteristics of these systems, namely spillover loss, cross-polarization components, and frequency characteristics, and their relation to the design parameters, are also shown. It is further indicated that the design parameters which determine the dimensions of the conical horn and the shape of the focused reflectors can be established unambiguously once (1) the design standard for the system has been selected as either minimizing the cross-polarization component while keeping the spillover loss within acceptable limits, or minimizing the spillover loss while keeping the cross-polarization components below an acceptable level, and (2) the independent design parameters, such as the respective sizes of the focused reflectors and the distances between them, have been established according to mechanical restrictions. A sample design is also shown. In addition to clarifying the effects of each design parameter on the system and improving insight into these systems, this design method also increases the efficiency of the design process.
Epidemiological designs for vaccine safety assessment: methods and pitfalls.
Andrews, Nick
2012-09-01
Three commonly used designs for vaccine safety assessment post licensure are cohort, case-control and self-controlled case series. These methods are often used with routine health databases and immunisation registries. This paper considers the issues that may arise when designing an epidemiological study, such as understanding the vaccine safety question, case definition and finding, limitations of data sources, uncontrolled confounding, and pitfalls that apply to the individual designs. The example of MMR and autism, where all three designs have been used, is presented to help consider these issues. Copyright © 2011 The International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.
A comparison of methods for DPLL loop filter design
NASA Technical Reports Server (NTRS)
Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.
1986-01-01
Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second approach designs a filter that minimizes in discrete time a weighted combination of the variance of the phase error due to noise and the sum square of the deterministic phase error component; the third method uses Kalman filter estimation theory to design a filter composed of a least squares fading memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare the different designs, and includes stability, steady state performance and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to loop bandwidth, as the performance approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.
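As an illustration of the kind of discrete-time loop the paper compares, here is a minimal sketch of a type-II digital PLL with a proportional-plus-integral loop filter tracking a constant frequency offset. The gains, offset, and step count are assumptions for illustration, not values from the article:

```python
def run_dpll(freq_offset, steps=3000, kp=0.2, ki=0.01):
    """Track a constant frequency offset with a type-II digital PLL
    (linearized phase detector, PI loop filter, NCO update)."""
    ref_phase = 0.0   # phase of the incoming signal
    nco_phase = 0.0   # phase of the local oscillator
    freq_est = 0.0    # integrator state (estimated frequency)
    err = 0.0
    for _ in range(steps):
        ref_phase += freq_offset
        err = ref_phase - nco_phase        # linearized phase detector
        freq_est += ki * err               # integral path of the loop filter
        nco_phase += freq_est + kp * err   # proportional path + NCO update
    return err, freq_est

err, freq_est = run_dpll(0.05)
```

With these small gains the loop is stable, the phase error decays geometrically, and the integrator state converges to the true frequency offset, consistent with the observation that behavior is benign when the update rate is high relative to the loop bandwidth.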
NASA Astrophysics Data System (ADS)
Yan, Dahai; Zhang, Li; Zhang, Shuang-Nan
2015-12-01
Precise spectra of 3C 279 in the 0.5-70 keV range, obtained during two epochs of Swift and NuSTAR observations, are analysed using a near-equipartition model. We apply a one-zone leptonic model with a three-parameter log-parabola electron energy distribution to fit the Swift and NuSTAR X-ray data, as well as simultaneous optical and Fermi-LAT gamma-ray data. The Markov chain Monte Carlo technique is used to search the high-dimensional parameter space and evaluate the uncertainties on model parameters. We show that the two spectra can be successfully fitted in near-equipartition conditions, defined by the ratio ζe of the energy density of relativistic electrons to that of the magnetic field being close to unity. In both spectra, the observed X-rays are dominated by synchrotron self-Compton photons, and the observed gamma-rays are dominated by Compton scattering of external infrared photons from a surrounding dusty torus. Model parameters are well constrained. From the low state to the high state, both the curvature (width) parameter of the log-parabola distribution and the synchrotron peak frequency increase significantly. The derived magnetic fields in the two states are nearly identical (˜1 G), but the Doppler factor in the high state is larger than that in the low state (˜28 versus ˜18). We infer that the gamma-ray emission occurs outside the broad-line region, at ≳0.1 pc from the black hole, but within the dusty torus. Implications for 3C 279 as a source of high-energy cosmic rays are discussed.
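The three-parameter log-parabola electron energy distribution used in such fits can be sketched as follows; the normalization, reference energy, and spectral parameters are illustrative assumptions, and the peak of the energy-weighted distribution is located by a simple grid search rather than the MCMC machinery of the paper:

```python
import math

def log_parabola(gamma, n0=1.0, gamma0=1e3, a=1.5, b=0.5):
    """Three-parameter log-parabola distribution:
    n(gamma) = n0 * (gamma/gamma0) ** -(a + b*log10(gamma/gamma0))."""
    x = math.log10(gamma / gamma0)
    return n0 * 10.0 ** (-(a + b * x) * x)

# gamma^2 * n(gamma) peaks where d/dx [2x - a*x - b*x^2] = 0,
# i.e. at x = (2 - a) / (2 b) decades above gamma0.
def peak_gamma(gamma0=1e3, a=1.5, b=0.5, grid=100000):
    best_g, best_v = None, -1.0
    for i in range(grid):
        g = 10.0 ** (1 + 6 * i / grid)   # scan 10^1 .. 10^7
        v = g * g * log_parabola(g, gamma0=gamma0, a=a, b=b)
        if v > best_v:
            best_g, best_v = g, v
    return best_g
```

For a = 1.5, b = 0.5, the grid search recovers the analytic peak at gamma0 * 10^0.5, illustrating how the curvature parameter b controls where the energy-weighted spectrum turns over.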
Novel parameter-based flexure bearing design method
NASA Astrophysics Data System (ADS)
Amoedo, Simon; Thebaud, Edouard; Gschwendtner, Michael; White, David
2016-06-01
A parameter study was carried out on the design variables of a flexure bearing to be used in a Stirling engine with a fixed axial displacement and a fixed outer diameter. A design method was developed in order to assist identification of the optimum bearing configuration. This was achieved through a parameter study of the bearing carried out with ANSYS®. The parameters varied were the number and the width of the arms, the thickness of the bearing, the eccentricity, the size of the starting and ending holes, and the turn angle of the spiral. Comparison was made between the different designs in terms of axial and radial stiffness, the natural frequency, and the maximum induced stresses. Moreover, the Finite Element Analysis (FEA) was compared to theoretical results for a given design. The results led to a graphical design method which assists the selection of flexure bearing geometrical parameters based on pre-determined geometric and material constraints.
XML-based product information processing method for product design
NASA Astrophysics Data System (ADS)
Zhang, Zhen Yu
2012-01-01
The design knowledge of modern mechatronic products centers on information processing within knowledge-intensive engineering; product design innovation is therefore essentially innovation in knowledge and information processing. Following an analysis of the role of mechatronic product design knowledge and the features of its information management, a unified XML-based model for product information processing is proposed. The information processing model of product design includes functional knowledge, structural knowledge, and their relationships. XML-based models are proposed for expressing product function elements, product structure elements, and the mapping between function and structure. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is helpful for knowledge-based design systems and product innovation.
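A minimal sketch of the XML-based function-structure mapping idea, using Python's standard xml.etree.ElementTree; the element and attribute names are hypothetical, not taken from the paper:

```python
import xml.etree.ElementTree as ET

# Hypothetical unified product-information document: function elements,
# structure elements, and function-to-structure mappings.
DOC = """
<product name="parallel_friction_roller">
  <functions>
    <function id="F1" desc="transmit torque"/>
    <function id="F2" desc="reduce friction"/>
  </functions>
  <structures>
    <structure id="S1" desc="roller shaft"/>
    <structure id="S2" desc="bearing assembly"/>
  </structures>
  <mappings>
    <map function="F1" structure="S1"/>
    <map function="F2" structure="S2"/>
  </mappings>
</product>
"""

def structures_for(root, function_id):
    """Return descriptions of structure elements realizing a given function."""
    targets = {m.get("structure") for m in root.iter("map")
               if m.get("function") == function_id}
    return [s.get("desc") for s in root.iter("structure")
            if s.get("id") in targets]

root = ET.fromstring(DOC)
```

Keeping the function-structure mapping as explicit XML elements, rather than burying it in application code, is what makes the model reusable across knowledge-based design tools.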
On design methods for bolted joints in composite aircraft structures
NASA Astrophysics Data System (ADS)
Ireman, Tomas; Nyman, Tonny; Hellbom, Kurt
The problems related to the determination of the load distribution in a multirow fastener joint using the finite element method are discussed. Both simple and more advanced design methods used at Saab Military Aircraft are presented. The stress distributions obtained with an analytically based method and an FE-based method are compared. Results from failure predictions with a simple analytically based method and the more advanced FE-based method of multi-fastener tension and shear loaded test specimens are compared with experiments. Finally, complicating factors such as three-dimensional effects caused by secondary bending and fastener bending are discussed and suggestions for future research are given.
A Bright Future for Evolutionary Methods in Drug Design.
Le, Tu C; Winkler, David A
2015-08-01
Most medicinal chemists understand that chemical space is extremely large, essentially infinite. Although high-throughput experimental methods allow exploration of drug-like space more rapidly, they are still insufficient to fully exploit the opportunities that such large chemical space offers. Evolutionary methods can synergistically blend automated synthesis and characterization methods with computational design to identify promising regions of chemical space more efficiently. We describe how evolutionary methods are implemented, and provide examples of published drug development research in which these methods have generated molecules with increased efficacy. We anticipate that evolutionary methods will play an important role in future drug discovery.
Design of diffractive optical surfaces within the nonimaging SMS design method
NASA Astrophysics Data System (ADS)
Mendes-Lopes, João.; Benítez, Pablo; Miñano, Juan C.
2015-09-01
The Simultaneous Multiple Surface (SMS) method was initially developed as a design method in nonimaging optics and was later extended to the design of imaging optics. We show an extension of the SMS method to diffractive surfaces. Using this method, diffractive kinoform surfaces are calculated simultaneously and through a direct method, i.e., one not based on multi-parametric optimization techniques. Using the phase-shift properties of diffractive surfaces as an extra degree of freedom, only N/2 surfaces are needed to perfectly couple N one-parameter wavefronts. Wavefronts of different wavelengths can also be coupled, hence chromatic aberration can be corrected in SMS-based systems. The method can combine reflective, refractive, and diffractive surfaces, calculating the phase and the refractive/reflective profiles simultaneously and directly. Representative diffractive systems designed by the SMS method are presented.
The Design with Intent Method: a design tool for influencing user behaviour.
Lockton, Dan; Harrison, David; Stanton, Neville A
2010-05-01
Using product and system design to influence user behaviour offers potential for improving performance and reducing user error, yet little guidance is available at the concept generation stage for design teams briefed with influencing user behaviour. This article presents the Design with Intent Method, an innovation tool for designers working in this area, illustrated via application to an everyday human-technology interaction problem: reducing the likelihood of a customer leaving his or her card in an automatic teller machine. The example application results in a range of feasible design concepts which are comparable to existing developments in ATM design, demonstrating that the method has potential for development and application as part of a user-centred design process.
INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN
The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...
Designing, Teaching, and Evaluating Two Complementary Mixed Methods Research Courses
ERIC Educational Resources Information Center
Christ, Thomas W.
2009-01-01
Teaching mixed methods research is difficult. This longitudinal explanatory study examined how two classes were designed, taught, and evaluated. Curriculum, Research, and Teaching (EDCS-606) and Mixed Methods Research (EDCS-780) used a research proposal generation process to highlight the importance of the purpose, research question and…
Advances in multiparameter optimization methods for de novo drug design.
Segall, Matthew
2014-07-01
A high-quality drug must achieve a balance of physicochemical and absorption, distribution, metabolism and elimination properties, safety and potency against its therapeutic target(s). Multiparameter optimization (MPO) methods guide the simultaneous optimization of multiple factors to quickly target compounds with the highest chance of downstream success. MPO can be combined with 'de novo design' methods to automatically generate and assess a large number of diverse structures and identify strategies to optimize a compound's overall balance of properties. The article provides a review of MPO methods, recent methodological developments, and opinions in the field. It also describes advances in de novo design that improve the relevance of automatically generated compound structures and integrate MPO. Finally, the article discusses a recent case study of the automatic design of ligands to polypharmacological profiles. Recent developments have reduced the generation of chemically infeasible structures and improved the quality of compounds generated by de novo design methods. There are concerns about the ability of simple drug-like properties and ligand efficiency indices to effectively guide the detailed optimization of compounds. De novo design methods cannot identify a perfect compound for synthesis, but they can identify high-quality ideas for detailed consideration by an expert scientist.
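One common way to realize an MPO score, though not necessarily the scheme used in the tools the article reviews, is a geometric mean of per-property desirability functions. The property names, target ranges, and example compounds below are illustrative assumptions:

```python
import math

def desirability(value, low, high):
    """1.0 inside the desired range, decaying linearly outside it."""
    if value < low:
        return max(0.0, 1.0 - (low - value) / low)
    if value > high:
        return max(0.0, 1.0 - (value - high) / high)
    return 1.0

def mpo_score(props, targets):
    """Geometric mean of per-property desirabilities (0 if any is 0)."""
    ds = [desirability(props[k], *targets[k]) for k in targets]
    if min(ds) == 0.0:
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# Hypothetical target ranges and compounds, for illustration only.
TARGETS = {"logP": (1.0, 3.0), "mol_weight": (200.0, 500.0), "pIC50": (6.0, 10.0)}
balanced = {"logP": 2.0, "mol_weight": 350.0, "pIC50": 7.5}
skewed   = {"logP": 5.5, "mol_weight": 650.0, "pIC50": 9.0}
```

The geometric mean rewards a balanced property profile over a compound that excels on one property while failing others, which is the core intuition behind MPO-guided design.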
Test methods and design allowables for fibrous composites. Volume 2
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Editor)
1989-01-01
Topics discussed include extreme/hostile environment testing, establishing design allowables, and property/behavior specific testing. Papers are presented on environmental effects on the high strain rate properties of graphite/epoxy composite, the low-temperature performance of short-fiber reinforced thermoplastics, the abrasive wear behavior of unidirectional and woven graphite fiber/PEEK, test methods for determining design allowables for fiber reinforced composites, and statistical methods for calculating material allowables for MIL-HDBK-17. Attention is also given to a test method to measure the response of composite materials under reversed cyclic loads, a through-the-thickness strength specimen for composites, the use of torsion tubes to measure in-plane shear properties of filament-wound composites, the influence of test fixture design on the Iosipescu shear test for fiber composite materials, and a method for monitoring in-plane shear modulus in fatigue testing of composites.
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
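The FIM-based comparison of sampling distributions can be sketched for the Verhulst-Pearl logistic model: compute finite-difference sensitivities at candidate sampling times, assemble the FIM, and compare a D-type criterion (the FIM determinant) across designs. The parameter values, noise level, and the two candidate sampling grids are assumptions for illustration:

```python
import math

def logistic(t, r, K, x0=1.0):
    """Closed-form Verhulst-Pearl logistic solution for theta = (r, K)."""
    return K / (1.0 + (K - x0) / x0 * math.exp(-r * t))

def fisher_information(times, r=0.5, K=10.0, sigma=0.1, h=1e-6):
    """2x2 FIM from central-difference sensitivities, iid Gaussian noise."""
    F = [[0.0, 0.0], [0.0, 0.0]]
    for t in times:
        dr = (logistic(t, r + h, K) - logistic(t, r - h, K)) / (2 * h)
        dK = (logistic(t, r, K + h) - logistic(t, r, K - h)) / (2 * h)
        s = [dr, dK]
        for i in range(2):
            for j in range(2):
                F[i][j] += s[i] * s[j] / sigma ** 2
    return F

def d_criterion(F):
    return F[0][0] * F[1][1] - F[0][1] * F[1][0]   # det of the 2x2 FIM

spread = [2.0 * i for i in range(1, 11)]        # samples across the transient
clumped = [20.0 + 0.1 * i for i in range(10)]   # samples after saturation
```

Sampling times spread over the growth transient carry information about both r and K, while points clustered after saturation mostly constrain K alone, so the spread design yields a much larger determinant, which is the intuition optimal design criteria formalize.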
Tradeoff methods in multiobjective insensitive design of airplane control systems
NASA Technical Reports Server (NTRS)
Schy, A. A.; Giesy, D. P.
1984-01-01
The latest results of an ongoing study of computer-aided design of airplane control systems are given. Constrained minimization algorithms are used, with the design objectives in the constraint vector. The concept of Pareto optimality is briefly reviewed. It is shown how an experienced designer can use it to find designs which are well balanced in all objectives. The problem of finding designs which are insensitive to uncertainty in system parameters is then discussed, introducing a probabilistic vector definition of sensitivity which is consistent with the deterministic Pareto optimal problem. Insensitivity is important in any practical design, but it is particularly important in the design of feedback control systems, since it is considered to be the most important distinctive property of feedback control. Methods of tradeoff between deterministic and stochastic-insensitive (SI) design are described, and tradeoff design results are presented for the example of a Shuttle lateral stability augmentation system. This example is used because careful studies have been made of the uncertainty in Shuttle aerodynamics. Finally, since accurate statistics of uncertain parameters are usually not available, the effects of crude statistical models on SI designs are examined.
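The Pareto-optimality concept the study builds on can be sketched as a nondominated filter over candidate designs scored on objectives to be minimized; the two objectives and the candidate values below are hypothetical:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only the nondominated (Pareto-optimal) designs."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# Hypothetical designs scored as (tracking error, sensitivity to uncertainty),
# both to be minimized.
candidates = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 3.5), (5.0, 5.0)]
front = pareto_front(candidates)
```

An experienced designer would then trade off along the front, e.g. accepting slightly worse tracking for markedly lower sensitivity, which is exactly the deterministic-versus-stochastic-insensitive tradeoff the paper formalizes.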
Comparison of Optimal Design Methods in Inverse Problems.
Banks, H T; Holm, Kathleen; Kappel, Franz
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29].
Ng, Annie W Y; Siu, Kin Wai Michael; Chan, Chetwyn C H
2013-01-01
This study investigated the practices and attitudes of novice designers toward user involvement in public symbol design at the conceptual design stage, i.e. the stereotype production method. Differences between male and female novice designers were examined. Forty-eight novice designers (24 male, 24 female) were asked to design public symbol referents based on suggestions made by a group of users in a previous study and provide feedback with regard to the design process. The novice designers were receptive to the adoption of user suggestions in the conception of the design, but tended to modify the pictorial representations generated by the users to varying extents. It is also significant that the male and female novice designers appeared to emphasize different aspects of user suggestions, and the female novice designers were more positive toward these suggestions than their male counterparts. The findings should aid the optimization of the stereotype production method for user-involved symbol design. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Computer method for design of acoustic liners for turbofan engines
NASA Technical Reports Server (NTRS)
Minner, G. L.; Rice, E. J.
1976-01-01
A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables. The method does not predict multiple pure tones. A target attenuation spectrum was calculated which was the difference between the estimated generation spectrum and a flat annoyance-weighted goal attenuated spectrum. The target spectrum was combined with a knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method at present is limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (which is a result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation was carried through in order to demonstrate the design procedure, and experimental results presented show good agreement with the calculated results of the method.
NASA Astrophysics Data System (ADS)
Hanan, Lu; Qiushi, Li; Shaobin, Li
2016-12-01
This paper presents an integrated optimization design method in which uniform design, response surface methodology, and a genetic algorithm are used in combination. In detail, uniform design is used to select the sampling points in the experimental domain, and the system performance at those points is evaluated by means of computational fluid dynamics to construct a database. Response surface methodology is then employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Finally, a genetic algorithm is applied to the surrogate model to find the optimal solution subject to the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct with three design variables, one qualitative and two quantitative. The modeling and optimization method performs well in improving the duct's aerodynamic performance and, by reducing design time and computational cost, can also be applied to wider fields of mechanical design as a useful tool for engineering designers.
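A toy, one-variable version of the pipeline (sample the design space, fit a quadratic response surface by least squares, run a genetic algorithm on the surrogate) might look like the following. The stand-in objective function and all GA settings are assumptions; the real method evaluates CFD cases at uniform-design points:

```python
import random

def cfd_stand_in(x):            # stands in for the expensive CFD evaluation
    return (x - 2.0) ** 2 + 1.0

def fit_quadratic(xs, ys):
    """Solve the 3x3 normal equations for y ~ c0 + c1*x + c2*x^2."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                      # Gaussian elim., partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                       # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef

def surrogate(coef, x):
    return coef[0] + coef[1] * x + coef[2] * x * x

def ga_minimize(coef, lo=0.0, hi=5.0, pop=30, gens=60, seed=1):
    """Tiny elitist GA: keep the best half, mutate to refill the population."""
    rng = random.Random(seed)
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda x: surrogate(coef, x))
        parents = population[: pop // 2]
        children = [min(hi, max(lo, rng.choice(parents) + rng.gauss(0, 0.1)))
                    for _ in range(pop - len(parents))]
        population = parents + children
    return min(population, key=lambda x: surrogate(coef, x))

xs = [0.5 * i for i in range(11)]             # uniform sampling of [0, 5]
coef = fit_quadratic(xs, [cfd_stand_in(x) for x in xs])
best = ga_minimize(coef)
```

Because the surrogate is cheap to evaluate, the GA can afford thousands of evaluations that would be prohibitive against the CFD solver directly, which is the point of the surrogate-based pipeline.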
Developing Conceptual Hypersonic Airbreathing Engines Using Design of Experiments Methods
NASA Technical Reports Server (NTRS)
Ferlemann, Shelly M.; Robinson, Jeffrey S.; Martin, John G.; Leonard, Charles P.; Taylor, Lawrence W.; Kamhawi, Hilmi
2000-01-01
Designing a hypersonic vehicle is a complicated process due to the multi-disciplinary synergy that is required. The greatest challenge involves propulsion-airframe integration. In the past, a two-dimensional flowpath was generated based on the engine performance required for a proposed mission. A three-dimensional CAD geometry was produced from the two-dimensional flowpath for aerodynamic analysis, structural design, and packaging. The aerodynamics, engine performance, and mass properties are inputs to the vehicle performance tool to determine if the mission goals were met. If the mission goals were not met, then a flowpath and vehicle redesign would begin. This design process might have to be performed several times to produce a "closed" vehicle. This paper will describe an attempt to design a hypersonic cruise vehicle propulsion flowpath using a Design of Experiments method to reduce the resources necessary to produce a conceptual design with fewer iterations of the design cycle. These methods also allow for more flexible mission analysis and incorporation of additional design constraints at any point. A design system was developed using an object-based software package that would quickly generate each flowpath in the study given the values of the geometric independent variables. These flowpath geometries were put into a hypersonic propulsion code and the engine performance was generated. The propulsion results were loaded into statistical software to produce regression equations that were combined with an aerodynamic database to optimize the flowpath at the vehicle performance level. For this example, the design process was executed twice. The first pass was a cursory look at the independent variables selected to determine which variables are the most important and to test all of the inputs to the optimization process. The second cycle is a more in-depth study with more cases and higher order equations representing the design space.
A New Design Method based on Cooperative Data Mining from Multi-Objective Design Space
NASA Astrophysics Data System (ADS)
Sugimura, Kazuyuki; Obayashi, Shigeru; Jeong, Shinkyu
We propose a new multi-objective parameter design method that combines the following data mining techniques: analysis of variance, self-organizing maps, decision tree analysis, rough set theory, and association rules. The method first aims to improve multiple objective functions simultaneously, using as many of the predominant main effects of the design variables as possible. It then resolves the remaining conflicts between the objective functions using the predominant interaction effects of the design variables. The key to realizing this method is obtaining various design rules that quantitatively relate levels of the design variables to levels of the objective functions. Based on comparative studies of data mining techniques, the systematic processes for obtaining these design rules have been clarified, and the points of combining data mining techniques have also been summarized. The method has been applied to a multi-objective robust optimization problem of an industrial fan, and the results show its superior capability for controlling parameters compared with traditional single-objective parameter design methods such as the Taguchi method.
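The distinction between main effects and interaction effects that the method exploits can be illustrated with effect estimates from a two-level full-factorial experiment; the response function below is a made-up stand-in with one strong main effect, one interaction, and one inert variable:

```python
from itertools import product

def response(a, b, c):
    # Illustrative response: strong main effect of a, an a*b interaction,
    # and an inert variable c.
    return 10.0 + 4.0 * a + 1.0 * b + 2.0 * a * b + 0.0 * c

# Two-level full factorial over three coded variables (-1, +1).
runs = {(a, b, c): response(a, b, c) for a, b, c in product((-1, 1), repeat=3)}

def main_effect(runs, idx):
    """Mean response at the high level minus mean at the low level."""
    hi = [y for x, y in runs.items() if x[idx] == 1]
    lo = [y for x, y in runs.items() if x[idx] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

def interaction(runs, i, j):
    """Two-factor interaction effect from the product contrast."""
    contrast = [x[i] * x[j] * y for x, y in runs.items()]
    return sum(contrast) / (len(contrast) / 2)
```

Quantified effects like these are exactly the raw material for design rules: large main effects are exploited first to improve all objectives, and interaction effects are reserved for resolving the remaining conflicts.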
A decentralized linear quadratic control design method for flexible structures
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1990-01-01
A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or the mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis (SCS) is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the substructural controller synthesis method is that it relieves the computational burden associated with dimensionality. Besides that, the SCS design scheme is also a highly adaptable controller synthesis method for structures with varying configuration, or varying mass
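For scalar reduced substructure models the per-substructure LQ step has a closed form, which makes the decentralized idea easy to sketch: solve a small Riccati equation per substructure, then stack the subcontroller gains. The substructure numbers below are illustrative, not from the paper:

```python
import math

def scalar_lqr(a, b, q, r):
    """Solve the scalar algebraic Riccati equation
    2*a*p - p**2 * b**2 / r + q = 0 and return the optimal gain k = b*p/r."""
    p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    return b * p / r

# Hypothetical reduced substructure models xdot_i = a_i x_i + b_i u_i.
substructures = [(0.5, 1.0), (-0.2, 2.0), (1.0, 0.5)]   # (a_i, b_i)

# Design each subcontroller independently (Riccati-solvable by construction),
# then "assemble" the global controller as the collection of local gains.
gains = [scalar_lqr(a, b, q=1.0, r=1.0) for a, b in substructures]
closed_loop = [a - b * k for (a, b), k in zip(substructures, gains)]
```

Each closed-loop pole lands at -sqrt(a^2 + b^2 q/r), so every substructure is stabilized locally, and the global controller is simply the block-diagonal assembly of the subcontroller gains, mirroring how the structure matrices themselves are assembled.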
A method for the design of transonic flexible wings
NASA Technical Reports Server (NTRS)
Smith, Leigh Ann; Campbell, Richard L.
1990-01-01
Methodology was developed for designing airfoils and wings at transonic speeds which includes a technique that can account for static aeroelastic deflections. This procedure is capable of designing either supercritical or more conventional airfoil sections. Methods for including viscous effects are also illustrated and are shown to give accurate results. The methodology developed is an interactive system containing three major parts. A design module was developed which modifies airfoil sections to achieve a desired pressure distribution. This design module works in conjunction with an aerodynamic analysis module, which for this study is a small perturbation transonic flow code. Additionally, an aeroelastic module is included which determines the wing deformation due to the calculated aerodynamic loads. Because of the modular nature of the method, it can be easily coupled with any aerodynamic analysis code.
Approximate Design Method for Single Stage Pulse Tube Refrigerators
NASA Astrophysics Data System (ADS)
Pfotenhauer, J. M.; Gan, Z. H.; Radebaugh, R.
2008-03-01
An approximate design method is presented for the design of a single stage Stirling type pulse tube refrigerator. The design method begins from a defined cooling power, operating temperature, average and dynamic pressure, and frequency. Using a combination of phasor analysis, approximate correlations derived from extensive use of REGEN3.2, a few `rules of thumb,' and available models for inertance tubes, a process is presented to define appropriate geometries for the regenerator, pulse tube and inertance tube components. In addition, specifications for the acoustic power and phase between the pressure and flow required from the compressor are defined. The process enables an appreciation of the primary physical parameters operating within the pulse tube refrigerator, but relies on approximate values for the combined loss mechanisms. The defined geometries can provide both a useful starting point, and a sanity check, for more sophisticated design methodologies.
A new method named as Segment-Compound method of baffle design
NASA Astrophysics Data System (ADS)
Qin, Xing; Yang, Xiaoxu; Gao, Xin; Liu, Xishuang
2017-02-01
As observation demands have increased, the requirements on lens imaging quality have risen as well. A Segment-Compound method of baffle design is proposed in this paper. Three traditional methods of baffle design are characterized as Inside to Outside, Outside to Inside, and Mirror Symmetry. For a transmission type of optical system, all four methods were used to design a stray light suppression structure. The structures were then modeled and simulated with Solidworks, CAXA, and TracePro, and point source transmittance (PST) curves were obtained to describe their performance. The results show that the Segment-Compound method suppresses stray light more effectively. Moreover, it is easy to realize and requires no special materials.
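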
Rotordynamics and Design Methods of an Oil-Free Turbocharger
NASA Technical Reports Server (NTRS)
Howard, Samuel A.
1999-01-01
The feasibility of supporting a turbocharger rotor on air foil bearings is investigated based upon predicted rotordynamic stability, load accommodations, and stress considerations. It is demonstrated that foil bearings offer a plausible replacement for oil-lubricated bearings in diesel truck turbochargers. Also, two different rotor configurations are analyzed and the design is chosen which best optimizes the desired performance characteristics. The method of designing machinery for foil bearing use and the assumptions made are discussed.
Methods for Reachability-based Hybrid Controller Design
2012-05-10
Due to the complexity of systems found in practical applications, the problem of controller design is often approached in a hierarchical fashion, with discrete abstractions and design methods used to satisfy high-level task specifications. This work was supported in part by NSF grants #0720882 (CSR-EHS: PRET), #0647591 (CSR-SGER), and #0720841 (CSR-CPS), the U.S. Army Research Office (ARO #W911NF-07-2-0019), and the U.S. Air Force Office
Mixed methods research design for pragmatic psychoanalytic studies.
Tillman, Jane G; Clemence, A Jill; Stevens, Jennifer L
2011-10-01
Calls for more rigorous psychoanalytic studies have increased over the past decade. The field has been divided by those who assert that psychoanalysis is properly a hermeneutic endeavor and those who see it as a science. A comparable debate is found in research methodology, where qualitative and quantitative methods have often been seen as occupying orthogonal positions. Recently, Mixed Methods Research (MMR) has emerged as a viable "third community" of research, pursuing a pragmatic approach to research endeavors through integrating qualitative and quantitative procedures in a single study design. Mixed Methods Research designs and the terminology associated with this emerging approach are explained, after which the methodology is explored as a potential integrative approach to a psychoanalytic human science. Both qualitative and quantitative research methods are reviewed, as well as how they may be used in Mixed Methods Research to study complex human phenomena.
ERSYS-SPP access method subsystem design specification
NASA Technical Reports Server (NTRS)
Weise, R. C. (Principal Investigator)
1980-01-01
The STARAN special purpose processor (SPP) is a machine allowing the same operation to be performed on up to 512 different data elements simultaneously. In the ERSYS system, it is to be attached to a 4341 plug compatible machine (PCM) to run certain existing algorithms and, at a later date, to perform other, yet-to-be-specified algorithms. The part of the interface between the 4341 PCM and the SPP that is located in the 4341 PCM is known as the SPP access method (SPPAM). Access to the SPPAM will be obtained by use of the NQUEUE and DQUEUE commands. The subsystem design specification incorporates all applicable design considerations from the ERSYS system design specification and the Level B requirements documents relating to the SPPAM. It is intended as a basis for the preliminary design review and will be expanded into the subsystem detailed design specification.
Design of large Francis turbine using optimal methods
NASA Astrophysics Data System (ADS)
Flores, E.; Bornard, L.; Tomas, L.; Liu, J.; Couston, M.
2012-11-01
Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has been especially involved in the development and equipment of the largest power plants in the world: Three Gorges (China - 32×767 MW - 61 to 113 m), Itaipu (Brazil - 20×750 MW - 98.7 to 127 m) and Xiangjiaba (China - 8×812 MW - 82.5 to 113.6 m - in erection). Many new projects are under study to equip new power plants with Francis turbines in order to meet an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using the state of the art in computational methods and the latest technologies in model testing, as well as maximum feedback from Jumbo plants already in operation. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used thanks to the growing computation capacity of HPC clusters: the intensive use of such optimization methods at the turbine design stage makes it possible to reach very high levels of performance, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.
Computational methods of robust controller design for aerodynamic flutter suppression
NASA Technical Reports Server (NTRS)
Anderson, L. R.
1981-01-01
The development of Riccati iteration, a tool for the design and analysis of linear control systems is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth order random examples. A literature review of robust controller design methods follows which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
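As a hedged illustration of what a "Riccati iteration" can look like, the classical Kleinman/Newton iteration solves the continuous algebraic Riccati equation through a sequence of Lyapunov equations; the plant below is a made-up example, and the paper's actual numerical scheme may differ:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])            # unstable example plant
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

K = np.array([[5.0, 2.0]])            # any initially stabilizing gain
for _ in range(20):
    Ac = A - B @ K
    # Lyapunov step: solve Ac^T P + P Ac = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ P)   # Newton update of the gain

# The iteration converges to the stabilizing Riccati solution
P_exact = solve_continuous_are(A, B, Q, R)
print(np.allclose(P, P_exact))  # True
```

The iteration converges quadratically provided the initial gain is stabilizing, which is why a stabilizing guess is supplied up front.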
Improved method for transonic airfoil design-by-optimization
NASA Technical Reports Server (NTRS)
Kennelly, R. A., Jr.
1983-01-01
An improved method for use of optimization techniques in transonic airfoil design is demonstrated. FLO6QNM incorporates a modified quasi-Newton optimization package, and is shown to be more reliable and efficient than the method developed previously at NASA-Ames, which used the COPES/CONMIN optimization program. The design codes are compared on a series of test cases with known solutions, and the effects of problem scaling, proximity of initial point to solution, and objective function precision are studied. In contrast to the older method, well-converged solutions are shown to be attainable in the context of engineering design using computational fluid dynamics tools, a new result. The improvements are due to better performance by the optimization routine and to the use of problem-adaptive finite difference step sizes for gradient evaluation.
Design of an explosive detection system using Monte Carlo method.
Hernández-Adame, Pablo Luis; Medina-Castro, Diego; Rodriguez-Ibarra, Johanna Lizbeth; Salas-Luevano, Miguel Angel; Vega-Carrillo, Hector Rene
2016-11-01
Regardless of the motivation, terrorism is among the most important risks to national security in many countries. Attacks with explosives are the most common method used by terrorists, and several procedures to detect explosives are therefore utilized; among these methods are the use of neutrons and photons. In this study an explosive detection system using a (241)AmBe neutron source was designed with the Monte Carlo method. In the design, light water, paraffin, polyethylene, and graphite were used as moderators. The explosive RDX was used, and the gamma rays induced by neutron capture in the explosive were estimated using NaI(Tl) and HPGe detectors. When light water is used as the moderator and HPGe as the detector, the system has the best performance, allowing the explosive to be distinguished from urea. For the final design, the ambient dose equivalents for neutrons and photons were estimated along the radial and axial axes.
A method for the probabilistic design assessment of composite structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1994-01-01
A formal procedure for the probabilistic design assessment of a composite structure is described. The uncertainties in all aspects of a composite structure (constituent material properties, fabrication variables, structural geometry, service environments, etc.), which result in the uncertain behavior in the composite structural responses, are included in the assessment. The probabilistic assessment consists of design criteria, modeling of composite structures and uncertainties, simulation methods, and the decision making process. A sample case is presented to illustrate the formal procedure and to demonstrate that composite structural designs can be probabilistically assessed with accuracy and efficiency.
Inverse design of airfoils using a flexible membrane method
NASA Astrophysics Data System (ADS)
Thinsurat, Kamon
The Modified Garabedian McFadden (MGM) method is used to inversely design airfoils. A Finite Difference Method (FDM) for non-uniform grids was developed to discretize the MGM equation for numerical solution; it has the advantage that it can be applied flexibly to unstructured airfoil grids. The commercial software FLUENT is used as the flow solver. Several conditions are set in FLUENT, namely subsonic inviscid flow, subsonic viscous flow, transonic inviscid flow, and transonic viscous flow, to test the inverse design code for each condition. A moving grid program is used to create meshes for new airfoils prior to importing them into FLUENT for flow analysis. For validation, an iterative process is used so that the Cp distribution of the initial airfoil, the NACA0011, achieves the Cp distribution of the target airfoil, the NACA2315, for the subsonic inviscid case at M=0.2. Three other cases were carried out to validate the code. After the code validations, the inverse design method was used to design a shock-free airfoil in the transonic condition and a separation-free airfoil at a high angle of attack in the subsonic condition.
An uncertain multidisciplinary design optimization method using interval convex models
NASA Astrophysics Data System (ADS)
Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong
2013-06-01
This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.
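A minimal sketch of the interval transformation described above, with a toy objective and dense parameter sampling standing in for the article's convex interval model (the function, bounds, and sampling approach are invented for illustration):

```python
import numpy as np

def f(x, p):
    return (x - p)**2 + p  # toy objective with uncertain parameter p

def interval_objectives(x, p_lo, p_hi, n=201):
    # Bound f over the parameter interval by dense sampling (a true
    # convex interval model would permit exact bounds instead)
    ps = np.linspace(p_lo, p_hi, n)
    vals = f(x, ps)
    lo, hi = vals.min(), vals.max()
    # The uncertain objective becomes two deterministic ones:
    # the interval midpoint and the interval radius
    return (lo + hi) / 2.0, (hi - lo) / 2.0

mid, rad = interval_objectives(1.5, 0.8, 1.2)
print(mid, rad)
```

A multi-objective optimizer would then minimize the midpoint (nominal performance) and the radius (sensitivity to the uncertainty) together, which is how one uncertain objective yields two deterministic ones.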
Breaking from binaries - using a sequential mixed methods design.
Larkin, Patricia Mary; Begley, Cecily Marion; Devane, Declan
2014-03-01
To outline the traditional worldviews of healthcare research and discuss the benefits and challenges of using mixed methods approaches in contributing to the development of nursing and midwifery knowledge. There has been much debate about the contribution of mixed methods research to nursing and midwifery knowledge in recent years. A sequential exploratory design is used as an exemplar of a mixed methods approach. The study discussed used a combination of focus-group interviews and a quantitative instrument to obtain a fuller understanding of women's experiences of childbirth. In the mixed methods study example, qualitative data were analysed using thematic analysis and quantitative data using regression analysis. Polarised debates about the veracity, philosophical integrity and motivation for conducting mixed methods research have largely abated. A mixed methods approach can contribute to a deeper, more contextual understanding of a variety of subjects and experiences; as a result, it furthers knowledge that can be used in clinical practice. The purpose of the research study should be the main instigator when choosing from an array of mixed methods research designs. Mixed methods research offers a variety of models that can augment investigative capabilities and provide richer data than can a discrete method alone. This paper offers an example of an exploratory, sequential approach to investigating women's childbirth experiences. A clear framework for the conduct and integration of the different phases of the mixed methods research process is provided. This approach can be used by practitioners and policy makers to improve practice.
Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Phase 1
NASA Technical Reports Server (NTRS)
Kodiyalam, Srinivas
1998-01-01
The NASA Langley Multidisciplinary Design Optimization (MDO) method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. This report documents all computational experiments conducted in Phase I of the study. This report is a companion to the paper titled Initial Results of an MDO Method Evaluation Study by N. M. Alexandrov and S. Kodiyalam (AIAA-98-4884).
Exploration of Advanced Probabilistic and Stochastic Design Methods
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.
2003-01-01
The primary objective of the three year research effort was to explore advanced, non-deterministic aerospace system design methods that may have relevance to designers and analysts. The research pursued emerging areas in design methodology and leverage current fundamental research in the area of design decision-making, probabilistic modeling, and optimization. The specific focus of the three year investigation was oriented toward methods to identify and analyze emerging aircraft technologies in a consistent and complete manner, and to explore means to make optimal decisions based on this knowledge in a probabilistic environment. The research efforts were classified into two main areas. First, Task A of the grant has had the objective of conducting research into the relative merits of possible approaches that account for both multiple criteria and uncertainty in design decision-making. In particular, in the final year of research, the focus was on the comparison and contrasting between three methods researched. Specifically, these three are the Joint Probabilistic Decision-Making (JPDM) technique, Physical Programming, and Dempster-Shafer (D-S) theory. The next element of the research, as contained in Task B, was focused upon exploration of the Technology Identification, Evaluation, and Selection (TIES) methodology developed at ASDL, especially with regards to identification of research needs in the baseline method through implementation exercises. The end result of Task B was the documentation of the evolution of the method with time and a technology transfer to the sponsor regarding the method, such that an initial capability for execution could be obtained by the sponsor. Specifically, the results of year 3 efforts were the creation of a detailed tutorial for implementing the TIES method. Within the tutorial package, templates and detailed examples were created for learning and understanding the details of each step. For both research tasks, sample files and
Review of SMS design methods and real-world applications
NASA Astrophysics Data System (ADS)
Dross, Oliver; Mohedano, Ruben; Benitez, Pablo; Minano, Juan Carlos; Chaves, Julio; Blen, Jose; Hernandez, Maikel; Munoz, Fernando
2004-09-01
The Simultaneous Multiple Surfaces design method (SMS), proprietary technology of Light Prescription Innovators (LPI), was developed in the early 1990's as a two dimensional method. The first embodiments had either linear or rotational symmetry and found applications in photovoltaic concentrators, illumination optics and optical communications. SMS designed devices perform close to the thermodynamic limit and are compact and simple; features that are especially beneficial in applications with today's high brightness LEDs. The method was extended to 3D "free form" geometries in 1999 that perfectly couple two incoming with two outgoing wavefronts. SMS 3D controls the light emitted by an extended light source much better than single free form surface designs, while reaching very high efficiencies. This has enabled the SMS method to be applied to automotive head lamps, one of the toughest lighting tasks in any application, where high efficiency and small size are required. This article will briefly review the characteristics of both the 2D and 3D methods and will present novel optical solutions that have been developed and manufactured to meet real world problems. These include various ultra compact LED collimators, solar concentrators and highly efficient LED low and high beam headlamp designs.
A PDE Sensitivity Equation Method for Optimal Aerodynamic Design
NASA Technical Reports Server (NTRS)
Borggaard, Jeff; Burns, John
1996-01-01
The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
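The sensitivity-equation idea can be illustrated on a toy ODE rather than a CFD problem: augment the state equation with its parameter sensitivity and form the gradient from the two solutions. Everything below is an invented example, not the paper's formulation:

```python
import numpy as np
from scipy.integrate import solve_ivp

theta, y0, T = 0.5, 1.0, 2.0

def rhs(t, z):
    # State equation y' = -theta*y and its sensitivity equation:
    # s = dy/dtheta obeys s' = -theta*s - y
    y, s = z
    return [-theta*y, -theta*s - y]

sol = solve_ivp(rhs, (0, T), [y0, 0.0], rtol=1e-10, atol=1e-12)
yT, sT = sol.y[:, -1]
grad = 2*yT*sT  # gradient of the objective J = y(T)^2 w.r.t. theta

# Analytic check: y(T) = y0*exp(-theta*T), so dJ/dtheta = -2*T*y(T)^2
print(np.isclose(grad, -2*T*(y0*np.exp(-theta*T))**2))  # True
```

Note that no mesh sensitivities appear anywhere in this computation, which mirrors the advantage the abstract claims for the approach.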
Taguchi method of experimental design in materials education
NASA Technical Reports Server (NTRS)
Weiser, Martin W.
1993-01-01
Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
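As a concrete illustration of the fractional factorial idea, the L4(2^3) orthogonal array studies three two-level factors in only four trials instead of the full 2^3 = 8; the factor names and responses below are hypothetical:

```python
import numpy as np

# L4 orthogonal array: 4 trials x 3 factors, each at two levels (0/1)
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
response = np.array([12.0, 15.0, 20.0, 25.0])  # hypothetical measurements

# Main effect of a factor = mean response at level 1 minus mean at level 0
effects = {}
for j, name in enumerate(["temperature", "pressure", "time"]):
    effects[name] = (response[L4[:, j] == 1].mean()
                     - response[L4[:, j] == 0].mean())
    print(f"{name}: {effects[name]:+.1f}")
# temperature: +9.0, pressure: +4.0, time: -1.0
```

The array's balance (each factor level appears equally often against every level of the other factors) is what lets four trials estimate three main effects, and is also why factor interactions are confounded, the drawback noted above.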
Molecular library design using multi-objective optimization methods.
Nicolaou, Christos A; Kannas, Christos C
2011-01-01
Advancements in combinatorial chemistry and high-throughput screening technology have enabled the synthesis and screening of large molecular libraries for the purposes of drug discovery. Contrary to initial expectations, the increase in screening library size, typically combined with an emphasis on compound structural diversity, did not result in a comparable increase in the number of promising hits found. In an effort to improve the likelihood of discovering hits with greater optimization potential, more recent approaches attempt to incorporate additional knowledge into the library design process to effectively guide the search. Multi-objective optimization methods capable of taking into account several chemical and biological criteria have been used to design collections of compounds satisfying multiple pharmaceutically relevant objectives simultaneously. In this chapter, we present our efforts to implement a multi-objective optimization method, MEGALib, custom-designed for the library design problem. The method exploits existing knowledge, e.g. from previous biological screening experiments, to identify and profile molecular fragments used subsequently to design compounds that balance the various objectives.
Function combined method for design innovation of children's bike
NASA Astrophysics Data System (ADS)
Wu, Xiaoli; Qiu, Tingting; Chen, Huijuan
2013-03-01
As children mature, bike products for children develop along with them, and their requirements are frequently updated. Certain problems occur in use, such as cycle overlapping, repeated functions, and short life cycles, which run against the principles of energy conservation, environmental protection, and the intensive design concept. In this paper, a rational multi-function design method based on functional superposition, transformation, and technical implementation is proposed. An organic combination of a frog-style scooter and a children's tricycle is developed using this multi-function method. From an ergonomic perspective, the paper elaborates on the body sizes of children aged 5 to 12 and extracts effective data for a multi-function children's bike that can be used for both gliding and riding. By inverting the body, parts can be interchanged between the handles and the pedals of the bike. Finally, the paper provides a detailed analysis of the components and structural design, body material, and processing technology of the bike. This study of industrial product innovation design provides an effective design method to solve the bicycle's problems, extend its functions, improve its market situation, and enhance energy saving while implementing intensive product development effectively.
A Simple Method for High-Lift Propeller Conceptual Design
NASA Technical Reports Server (NTRS)
Patterson, Michael; Borer, Nick; German, Brian
2016-01-01
In this paper, we present a simple method for designing propellers that are placed upstream of the leading edge of a wing in order to augment lift. Because the primary purpose of these "high-lift propellers" is to increase lift rather than produce thrust, these props are best viewed as a form of high-lift device; consequently, they should be designed differently than traditional propellers. We present a theory that describes how these props can be designed to provide a relatively uniform axial velocity increase, which is hypothesized to be advantageous for lift augmentation based on a literature survey. Computational modeling indicates that such propellers can generate the same average induced axial velocity while consuming less power and producing less thrust than conventional propeller designs. For an example problem based on specifications for NASA's Scalable Convergent Electric Propulsion Technology and Operations Research (SCEPTOR) flight demonstrator, a propeller designed with the new method requires approximately 15% less power and produces approximately 11% less thrust than one designed for minimum induced loss. Higher-order modeling and/or wind tunnel testing are needed to verify the predicted performance.
System Synthesis in Preliminary Aircraft Design using Statistical Methods
NASA Technical Reports Server (NTRS)
DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.
1996-01-01
This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to search the design space for optimum configurations more efficiently. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, that are more sophisticated than those generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it yields a point design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high speed civil transport. Fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses to the conceptual aircraft synthesis and to provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
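The response-surface step can be sketched as fitting a quadratic regression polynomial to a handful of DOE samples; the design variables and the "expensive analysis" below are invented stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, (2, 30))     # 30 DOE sample points
y = 3 + 2*x1 - x2 + 0.5*x1*x2 + x1**2    # stand-in "expensive analysis"

# Quadratic response-surface basis: 1, x1, x2, x1*x2, x1^2, x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# With noiseless data the regression recovers [3, 2, -1, 0.5, 1, 0]
print(np.allclose(coef, [3, 2, -1, 0.5, 1, 0]))  # True
```

Once fitted, the cheap polynomial stands in for the full analysis during system-level optimization, which is what makes the efficient design-space search described above possible.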
An interdisciplinary heuristic evaluation method for universal building design.
Afacan, Yasemin; Erbug, Cigdem
2009-07-01
This study highlights how heuristic evaluation as a usability evaluation method can feed into current building design practice to conform to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluation in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed regarding the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative evaluation process combined with multi-sessions rather than relying on one evaluator and on one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee regarding the heuristic abilities of each profession. A multi-session and interdisciplinary heuristic evaluation method can save both the project budget and the required time, while ensuring a reduced error rate for the universal usage of the built environments.
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
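The D- and E-optimal criteria compared in the abstract above both act on the Fisher information matrix (FIM). As a minimal sketch, assuming a toy exponential-decay model y = a*exp(-b*t) with i.i.d. Gaussian error (not one of the paper's three examples), the two criteria can be evaluated for candidate sampling grids:

```python
# Sketch: D- and E-optimal design criteria on the Fisher information
# matrix for a toy model y = a*exp(-b*t). Model and grids are assumptions.
import numpy as np

def fisher_info(times, a=1.0, b=0.5, sigma=0.1):
    """FIM for parameters (a, b) under i.i.d. Gaussian error of std sigma."""
    # Sensitivities dy/da and dy/db evaluated at the sampling times
    S = np.column_stack([np.exp(-b * times), -a * times * np.exp(-b * times)])
    return S.T @ S / sigma**2

uniform = np.linspace(0.0, 10.0, 10)   # evenly spaced sampling times
early = np.linspace(0.0, 3.0, 10)      # samples bunched where decay is fast

for name, t in [("uniform", uniform), ("early", early)]:
    F = fisher_info(t)
    d_crit = np.linalg.det(F)              # D-optimal: maximize det(FIM)
    e_crit = np.linalg.eigvalsh(F).min()   # E-optimal: maximize min eigenvalue
    print(f"{name}: det={d_crit:.1f}, min_eig={e_crit:.3f}")
```

A full optimal design search would maximize one of these criteria over distributions of sampling times, which is the setting the paper's Prohorov metric framework makes rigorous.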
New Methods and Transducer Designs for Ultrasonic Diagnostics and Therapy
NASA Astrophysics Data System (ADS)
Rybyanets, A. N.; Naumenko, A. A.; Sapozhnikov, O. A.; Khokhlova, V. A.
Recent advances in the field of physical acoustics, imaging technologies, piezoelectric materials, and ultrasonic transducer design have led to the emergence of novel methods and apparatus for ultrasonic diagnostics, therapy and body aesthetics. The paper presents results on the development and experimental study of different high intensity focused ultrasound (HIFU) transducers. Technological peculiarities of HIFU transducer design as well as theoretical and numerical models of such transducers and the corresponding HIFU fields are discussed. Several HIFU transducers of different designs have been fabricated using different advanced piezoelectric materials. Acoustic field measurements for those transducers have been performed using a calibrated fiber optic hydrophone and an ultrasonic measurement system (UMS). The results of ex vivo experiments with different tissues as well as in vivo experiments with blood vessels are presented that prove the efficacy, safety and selectivity of the developed HIFU transducers and methods.
New displacement-based methods for optimal truss topology design
NASA Technical Reports Server (NTRS)
Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.
1991-01-01
Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.
Multi-objective optimization methods in drug design.
Nicolaou, Christos A; Brown, Nathan
2013-09-01
Drug discovery is a challenging multi-objective problem where numerous pharmaceutically important objectives need to be adequately satisfied for a solution to be found. The problem is characterized by vast, complex solution spaces further complicated by the presence of conflicting objectives. Multi-objective optimization methods, designed specifically to address such problems, were introduced to the drug discovery field over a decade ago and have steadily gained acceptance ever since. This paper reviews the latest multi-objective methods and applications reported in the literature, specifically in quantitative structure–activity modeling, docking, de novo design and library design. Further, the paper reports on related developments in drug discovery research and advances in the multi-objective optimization field.
Continuation methods in multiobjective optimization for combined structure control design
NASA Technical Reports Server (NTRS)
Milman, M.; Salama, M.; Scheid, R.; Bruno, R.; Gibson, J. S.
1990-01-01
A homotopy approach involving multiobjective functions is developed to outline the methods that have evolved for the combined control-structure optimization of physical systems encountered in the technology of large space structures. A method to effect a timely consideration of the control performance prior to the finalization of the structural design involves integrating the control and structural design processes into a unified design methodology that combines the two optimization problems into a single formulation. This study uses the combined optimization problem as a family of weighted structural and control costs. Connections with vector optimizations are described; an analysis of the zero-set of required conditions is made, and a numerical example is given.
Designs and Methods in School Improvement Research: A Systematic Review
ERIC Educational Resources Information Center
Feldhoff, Tobias; Radisch, Falk; Bischof, Linda Marie
2016-01-01
Purpose: The purpose of this paper is to focus on challenges faced by longitudinal quantitative analyses of school improvement processes and offers a systematic literature review of current papers that use longitudinal analyses. In this context, the authors assessed designs and methods that are used to analyze the relation between school…
Impact design methods for ceramic components in gas turbine engines
NASA Technical Reports Server (NTRS)
Song, J.; Cuccio, J.; Kington, H.
1991-01-01
Methods currently under development to design ceramic turbine components with improved impact resistance are presented. Two different modes of impact damage are identified and characterized, i.e., structural damage and local damage. The entire computation is incorporated into the EPIC computer code. Model capability is demonstrated by simulating instrumented plate impact and particle impact tests.
Using Propensity Score Methods to Approximate Factorial Experimental Designs
ERIC Educational Resources Information Center
Dong, Nianbo
2011-01-01
The purpose of this study is through Monte Carlo simulation to compare several propensity score methods in approximating factorial experimental design and identify best approaches in reducing bias and mean square error of parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…
Obtaining Valid Response Rates: Considerations beyond the Tailored Design Method.
ERIC Educational Resources Information Center
Huang, Judy Y.; Hubbard, Susan M.; Mulvey, Kevin P.
2003-01-01
Reports on the use of the tailored design method (TDM) to achieve high survey response in two separate studies of the dissemination of Treatment Improvement Protocols (TIPs). Findings from these two studies identify six factors that may have influenced nonresponse, and show that use of TDM does not, in itself, guarantee a high response rate. (SLD)
Database design using NIAM (Nijssen Information Analysis Method) modeling
Stevens, N.H.
1989-01-01
The Nijssen Information Analysis Method (NIAM) is an information modeling technique based on semantics and founded in set theory. A NIAM information model is a graphical representation of the information requirements for some universe of discourse. Information models facilitate data integration and communication within an organization about data semantics. An information model is sometimes referred to as the semantic model or the conceptual schema. It helps in the logical and physical design and implementation of databases. NIAM information modeling is used at Sandia National Laboratories to design and implement relational databases containing engineering information which meet the users' information requirements. The paper focuses on the design of one database which satisfied the data needs of four disjoint but closely related applications. The applications as they existed before did not talk to each other even though they stored much of the same data redundantly. NIAM was used to determine the information requirements and design the integrated database. 6 refs., 7 figs.
NASA Astrophysics Data System (ADS)
Baladi, G. Y.
1988-12-01
The research quantified relationships between structural and material mix design parameters and documented a laboratory test procedure for examining mix design from a structural viewpoint. Laboratory asphalt mix design guidelines are presented. The guidelines are based upon the analysis of the results of laboratory static and cyclic load triaxial, indirect tensile, and flexural beam tests. The guidelines allow the highway engineer and the laboratory technician to tailor the asphalt mix design procedure to optimize the structural properties of the mix. Two mix design methods are covered: the Marshall mix design with minor modifications and the indirect tensile test. Analytical and statistical equations are also included for calculating or estimating the structural properties of the mix.
Computational methods for aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Peeters, M. F.
1983-01-01
Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.
Supersonic/hypersonic aerodynamic methods for aircraft design and analysis
NASA Technical Reports Server (NTRS)
Torres, Abel O.
1992-01-01
A methodology employed in engineering codes to predict aerodynamic characteristics over arbitrary supersonic/hypersonic configurations is considered. Engineering codes use a combination of simplified methods, based on geometrical impact angle and freestream conditions, to compute pressure distribution over the vehicle's surface in an efficient and timely manner. These approximate methods are valid for both hypersonic (Mach greater than 4) and lower speeds (Mach down to 2). It is concluded that the proposed methodology enables the user to obtain reasonable estimates of vehicle performance and that engineering methods are valuable in the design process for these types of vehicles.
Guidance for using mixed methods design in nursing practice research.
Chiang-Hanisko, Lenny; Newman, David; Dyess, Susan; Piyakong, Duangporn; Liehr, Patricia
2016-08-01
The mixed methods approach purposefully combines both quantitative and qualitative techniques, enabling a multi-faceted understanding of nursing phenomena. The purpose of this article is to introduce three mixed methods designs (parallel; sequential; conversion) and highlight interpretive processes that occur with the synthesis of qualitative and quantitative findings. Real world examples of research studies conducted by the authors demonstrate the processes leading to the merger of data. The examples include: research questions; data collection procedures and analysis with a focus on synthesizing findings. Based on their experience with mixed methods studies, the authors introduce two synthesis patterns (complementary; contrasting), considering application to practice and implications for research.
ERIC Educational Resources Information Center
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
Comparison of methods for inverse design of radiant enclosures.
Franca, Francis; Larsen, Marvin Elwood; Howell, John R.; Daun, Kyle; Leduc, Guillaume
2005-03-01
A particular inverse design problem is proposed as a benchmark for comparison of five solution techniques used in design of enclosures with radiating sources. The enclosure is three-dimensional and includes some surfaces that are diffuse and others that are specular diffuse. Two aspect ratios are treated. The problem is completely described, and solutions are presented as obtained by the Tikhonov method, truncated singular value decomposition, conjugate gradient regularization, quasi-Newton minimization, and simulated annealing. All of the solutions use a common set of exchange factors computed by Monte Carlo, and smoothed by a constrained maximum likelihood estimation technique that imposes conservation, reciprocity, and non-negativity. Solutions obtained by the various methods are presented and compared, and the relative advantages and disadvantages of these methods are summarized.
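Two of the techniques benchmarked in the abstract above, the Tikhonov method and truncated singular value decomposition, can be sketched on a generic ill-conditioned linear system. The synthetic Hilbert matrix below merely stands in for the enclosure's exchange-factor operator; none of the benchmark's data or geometry is used.

```python
# Sketch: Tikhonov regularization vs truncated SVD for an ill-conditioned
# inverse problem A q = b (A here is a synthetic Hilbert matrix, assumed).
import numpy as np

n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
x_true = np.ones(n)
b = A @ x_true

# Tikhonov: minimize ||A x - b||^2 + lam * ||x||^2
lam = 1e-10
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Truncated SVD: discard singular values below a threshold
U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-8))                  # number of retained modes
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

print(np.linalg.norm(x_tik - x_true), np.linalg.norm(x_tsvd - x_true))
```

Both regularizers trade a small residual for stability; the benchmark problem adds radiative physics and non-negativity constraints on top of this core linear-algebra step.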
Application of optical diffraction method in designing phase plates
NASA Astrophysics Data System (ADS)
Lei, Ze-Min; Sun, Xiao-Yan; Lv, Feng-Nian; Zhang, Zhen; Lu, Xing-Qiang
2016-11-01
Continuous phase plate (CPP), which performs beam shaping in laser systems, is one important kind of diffractive optic. Based on the Fourier transform of the Gerchberg-Saxton (G-S) algorithm for designing CPP, we proposed an optical diffraction method according to the real system conditions. A thin lens can complete the Fourier transform of the input signal, and the inverse propagation of light can be implemented in a program. Using both of these functions realizes the iteration process that calculates the near-field and far-field distributions of light repeatedly, similar to the G-S algorithm. The results show that the optical diffraction method can design a CPP for a complicated laser system, giving the CPP both beam-shaping ability and phase compensation for the phase aberration of the system. The method can improve the adaptation of the phase plate in systems with phase aberrations.
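The core G-S iteration the abstract builds on alternates between the near-field and far-field planes (linked by a Fourier transform), imposing the known amplitude in each plane while keeping the computed phase. The 1-D uniform input beam and Gaussian target below are toy assumptions, not a real laser system.

```python
# Sketch: basic Gerchberg-Saxton phase retrieval for a 1-D phase plate.
# Beam profiles and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 256
near_amp = np.ones(n)                            # uniform incident beam
target = np.exp(-np.linspace(-4.0, 4.0, n) ** 2) # desired far-field amplitude
target /= np.linalg.norm(target)

def farfield_error(phase):
    """Mismatch between achieved and desired far-field amplitude shapes."""
    achieved = np.abs(np.fft.fft(near_amp * np.exp(1j * phase)))
    return np.linalg.norm(achieved / np.linalg.norm(achieved) - target)

phase = rng.uniform(0.0, 2.0 * np.pi, n)  # random initial phase plate
err0 = farfield_error(phase)
for _ in range(200):
    far = np.fft.fft(near_amp * np.exp(1j * phase))
    far = target * np.exp(1j * np.angle(far))  # impose far-field amplitude
    near = np.fft.ifft(far)
    phase = np.angle(near)                     # keep phase; unit amplitude re-imposed above
print(err0, "->", farfield_error(phase))
```

The paper's variant replaces the bare FFT with a simulated thin-lens transform and back-propagation, so the designed plate also compensates the real system's phase aberrations.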
Designing waveforms for temporal encoding using a frequency sampling method.
Gran, Fredrik; Jensen, Jørgen Arendt
2007-10-01
In this paper a method for designing waveforms for temporal encoding in medical ultrasound imaging is described. The method is based on least squares optimization and is used to design nonlinear frequency modulated signals for synthetic transmit aperture imaging. By using the proposed design method, the amplitude spectrum of the transmitted waveform can be optimized, such that most of the energy is transmitted where the transducer has large amplification. To test the design method, a waveform was designed for a BK8804 linear array transducer. The resulting nonlinear frequency modulated waveform was compared to a linear frequency modulated signal with amplitude tapering, previously used in clinical studies for synthetic transmit aperture imaging. The latter had a relatively flat spectrum, which implied that the waveform tried to excite all frequencies including ones with low amplification. The proposed waveform, on the other hand, was designed so that only frequencies where the transducer had a large amplification were excited. Hereby, unnecessary heating of the transducer could be avoided and the signal-to-noise ratio could be increased. The experimental ultrasound scanner RASMUS was used to evaluate the method experimentally. Due to the careful waveform design optimized for the transducer at hand, a theoretic gain in signal-to-noise ratio of 4.9 dB compared to the reference excitation was found, even though the energy of the nonlinear frequency modulated signal was 71% of the energy of the reference signal. This was supported by a signal-to-noise ratio measurement and comparison in penetration depth, where an increase of 1 cm was found in favor of the proposed waveform. Axial and lateral resolutions at full-width half-maximum were compared in a water phantom at depths of 42, 62, 82, and 102 mm. The axial resolutions of the nonlinear frequency modulated signal were 0.62, 0.69, 0.60, and 0.60 mm, respectively. The corresponding axial resolutions for the reference
Tuning Parameters in Heuristics by Using Design of Experiments Methods
NASA Technical Reports Server (NTRS)
Arin, Arif; Rabadi, Ghaith; Unal, Resit
2010-01-01
With the growing complexity of today's large scale problems, it has become more difficult to find optimal solutions by using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, most heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into "finding the best parameter setting" for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach for parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption, and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design and signal-to-noise (SIN) ratios. In each DOE method, a mathematical model is created using regression analysis, and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, preliminary optimal solutions for multiple instances were found efficiently.
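The 2^k full-factorial branch of the comparison above can be sketched as follows. A real study would run the GA at each coded parameter setting; here a hypothetical smooth "mean tardiness" function stands in for those runs, and the three parameter names are illustrative.

```python
# Sketch: 2^3 full-factorial tuning of three GA parameters with a
# regression model (main effects + two-way interactions). Response is a
# made-up stand-in for "GA result at this setting".
import itertools
import numpy as np

# Coded levels (-1, +1) for population size, mutation rate, crossover rate
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

def response(x):  # assumed surrogate for running the GA once per setting
    pop, mut, cx = x
    return 10.0 - 1.5 * pop + 0.8 * mut - 0.4 * cx + 0.6 * pop * mut

y = np.array([response(x) for x in design])

# Regression matrix: intercept, main effects, two-way interactions
X = np.column_stack([
    np.ones(len(design)),
    design,                       # main effects
    design[:, 0] * design[:, 1],  # pop x mut
    design[:, 0] * design[:, 2],  # pop x cx
    design[:, 1] * design[:, 2],  # mut x cx
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
best = design[np.argmin(X @ beta)]  # setting minimizing predicted tardiness
print(np.round(beta, 3), best)
```

Fitting the regression and reading off the best predicted setting is exactly the "mathematical model created and solved" step the abstract describes; the other DOE variants change only which settings are sampled.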
Optimal pulse design in quantum control: A unified computational method
Li, Jr-Shin; Ruths, Justin; Yu, Tsyr-Yan; Arthanari, Haribabu; Wagner, Gerhard
2011-01-01
Many key aspects of control of quantum systems involve manipulating a large quantum ensemble exhibiting variation in the value of parameters characterizing the system dynamics. Developing electromagnetic pulses to produce a desired evolution in the presence of such variation is a fundamental and challenging problem in this research area. We present such robust pulse designs as an optimal control problem of a continuum of bilinear systems with a common control function. We map this control problem of infinite dimension to a problem of polynomial approximation employing tools from geometric control theory. We then adopt this new notion and develop a unified computational method for optimal pulse design using ideas from pseudospectral approximations, by which a continuous-time optimal control problem of pulse design can be discretized to a constrained optimization problem with spectral accuracy. Furthermore, this is a highly flexible and efficient numerical method that requires low order of discretization and yields inherently smooth solutions. We demonstrate this method by designing effective broadband π/2 and π pulses with reduced rf energy and pulse duration, which show significant sensitivity enhancement at the edge of the spectrum over conventional pulses in 1D and 2D NMR spectroscopy experiments. PMID:21245345
Novel TMS coils designed using an inverse boundary element method
NASA Astrophysics Data System (ADS)
Cobos Sánchez, Clemente; María Guerrero Rodriguez, Jose; Quirós Olozábal, Ángel; Blanco-Navarro, David
2017-01-01
In this work, a new method to design TMS coils is presented. It is based on the inclusion of the concept of the stream function of a quasi-static electric current into a boundary element method. The proposed TMS coil design approach is a powerful technique to produce stimulators of arbitrary shape, and is remarkably versatile as it permits the prototyping of many different performance requirements and constraints. To illustrate the power of this approach, it has been used for the design of TMS coils wound on rectangular flat, spherical and hemispherical surfaces, subjected to different constraints, such as minimum stored magnetic energy or power dissipation. The performances of such coils are additionally described, and the torque experienced by each stimulator in the presence of a main static magnetic field has been theoretically derived in order to study the prospect of using them to perform TMS and fMRI concurrently. The obtained results show that the described method is an efficient tool for the design of TMS stimulators, which can be applied to a wide range of coil geometries and performance requirements.
Non-contact electromagnetic exciter design with linear control method
NASA Astrophysics Data System (ADS)
Wang, Lin; Xiong, Xianzhi; Xu, Hua
2017-01-01
A non-contact type force actuator is necessary for studying the dynamic performance of a high-speed spindle system owing to its high-speed operating conditions. A non-contact electromagnetic exciter is designed for identifying the dynamic coefficients of journal bearings in high-speed grinding spindles. A linear force control method is developed based on a PID controller. The influence of the amplitude and frequency of the current, misalignment and rotational speed on the magnetic field and excitation force is investigated based on two-dimensional finite element analysis. The electromagnetic excitation force is measured with auxiliary coils and calibrated by load cells. The design is validated by the experimental results. Theoretical and experimental investigations show that the proposed design can accurately generate linear excitation force with sufficiently large amplitude and a high signal-to-noise ratio. Moreover, fluctuations in force amplitude are greatly reduced with the designed linear control method even when the air gap changes due to rotor vibration at high-speed conditions. In addition, it is possible to apply various types of excitation (constant, synchronous, and non-synchronous excitation forces) based on the proposed linear control method. This exciter can be used as a linear-force exciting and controlling system for dynamic performance studies of different high-speed rotor-bearing systems.
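The PID-based linear force control named above can be sketched in discrete time. All numbers here are made up: the real exciter maps coil current to force through a nonlinear, gap-dependent relation, whereas the toy plant below is a simple first-order lag with unit gain.

```python
# Sketch: discrete-time PID force regulation against a toy first-order
# plant. Gains, time step, and plant time constant are assumptions.
def pid_step(error, state, kp=2.0, ki=5.0, kd=0.001, dt=1e-3):
    """One PID update; state = (integral, previous_error)."""
    integral, prev = state
    integral += error * dt
    derivative = (error - prev) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Toy plant: force follows the control with a 50 ms first-order lag
force, state, target = 0.0, (0.0, 0.0), 10.0
for _ in range(5000):
    u, state = pid_step(target - force, state)
    force += (u - force) * 1e-3 * 20.0  # dF = dt * (u - F) / tau, tau = 50 ms

print(round(force, 2))
```

The integral term is what removes the steady-state force error as the air gap (and hence the effective plant gain) drifts, which is the behavior the abstract reports for the real exciter.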
Design method of coaxial reflex hollow beam generator
NASA Astrophysics Data System (ADS)
Wang, Jiake; Xu, Jia; Fu, Yuegang; He, Wenjun; Zhu, Qifan
2016-10-01
In view of the light energy lost to central obscuration in a coaxial reflex optical system, a design method for a hollow beam generator is introduced. First, according to the geometrical parameters and obscuration ratio of the front-end coaxial reflex optical system, the required physical dimensions of the hollow beam are calculated, and the beam expanding rate of the hollow beam generator is obtained from the parameters of the light source. A suitable enlargement ratio for the initial expanding system is chosen using the relation between the generator's beam expanding rate and that of the initial system; the traditional design method for reflex optical systems is used to design the initial optical system, and the position of the rotation axis of the hollow beam generator is then obtained through the rotation-axis translation formula. The initial system bus bar is intercepted using the translated rotation axis, and the bus bar is rotated around this axis through 360°, yielding the two working faces of the hollow beam generator. A hollow beam generator designed by this method produces a hollow beam that matches the front-end coaxial reflex optical system, improving the energy utilization ratio of the beam and effectively reducing the back scattering of the transmission system.
The future of prodrugs - design by quantum mechanics methods.
Karaman, Rafik; Fattash, Beesan; Qtait, Alaa
2013-05-01
The revolution in computational chemistry has greatly impacted the drug design and delivery fields in general, and recently the utilization of the prodrug approach in particular. The use of ab initio, semiempirical and molecular mechanics methods to understand organic reaction mechanisms of certain processes, especially intramolecular reactions, has opened the door to the design and rapid production of safe and efficacious delivery of a wide range of active small-molecule and biotherapeutic agents such as prodrugs. This article provides the reader with a concise overview of this modern approach to prodrug design. The use of computational approaches, such as density functional theory (DFT), semiempirical and ab initio molecular orbital methods, in modern prodrug design is discussed. The novel prodrug approach reported in this review implies prodrug design based on an enzyme model (mimicking enzyme catalysis) that has been utilized to understand how enzymes work. The tool used in the design is a computational approach consisting of calculations using molecular orbital and molecular mechanics methods (DFT, ab initio and MM2) and correlations between experimental and calculated values of intramolecular processes, used to understand the mechanisms by which enzymes might achieve their high catalytic rates. The future of prodrug technology is exciting yet extremely challenging. Advances must be made in understanding the chemistry of the many organic reactions that can be effectively utilized to enable the development of even more types of prodrugs. Despite the increase in the number of marketed prodrugs, we have only started to appreciate the potential of the prodrug approach in modern drug development, and the coming years will witness many novel prodrug innovations.
The characterization of kerogen-analytical limitations and method design
Larter, S.R.
1987-04-01
Methods suitable for high resolution total molecular characterization of kerogens and other polymeric SOM are necessary for a quantitative understanding of hydrocarbon maturation and migration phenomena in addition to being a requirement for a systematic understanding of kerogen based fuel utilization. Gas chromatographic methods, in conjunction with analytical pyrolysis methods, have proven successful in the rapid superficial characterization of kerogen pyrolysates. Most applications involve qualitative or semi-quantitative assessment of the relative concentration of aliphatic, aromatic, or oxygen-containing species in a kerogen pyrolysate. More recently, the use of alkylated polystyrene internal standards has allowed the direct determination of parameters related to the abundance of, for example, normal alkyl groups or single ring aromatic species in kerogens. The future of methods of this type for improved kerogen typing is critically discussed. The conceptual design and feasibility of methods suitable for the more complete characterization of complex geopolymers on the molecular level is discussed with practical examples.
Design of transonic compressor cascades using hodograph method
NASA Technical Reports Server (NTRS)
Chen, Zuoyi; Guo, Jingrong
1991-01-01
The use of the hodograph method in the design of a transonic compressor cascade is discussed. The flow mode designed for the transonic compressor cascade must satisfy the following: the flow in the nozzle part should be uniform and smooth; the location of the sonic line should be reasonable; and the aerodynamic character of the flow canal in the subsonic region should be met. The flow rate through the cascade may be determined by the velocity distribution in the subsonic region (i.e., by the numerical solution of the Chaplygin equation). The supersonic sections A'C' and AD are determined by the analytical solution of the mixed-type hodograph equation.
Rays inserting method (RIM) to design dielectric optical devices
NASA Astrophysics Data System (ADS)
Taskhiri, Mohammad Mahdi; Khalaj Amirhosseini, Mohammad
2017-01-01
In this article, a novel approach, called the Rays Inserting Method (RIM), is introduced to design dielectric optical devices. In this approach, some rays are inserted between the two ends of the desired device, and the refractive index at points along the routes of the rays is then obtained. The validity of the introduced approach is verified by designing three types of optical devices, i.e. a power splitter, a bend, and a flat lens. The results are confirmed with numerical simulations by means of an FDTD scheme at a frequency of 100 GHz.
Current methods of epitope identification for cancer vaccine design.
Cherryholmes, Gregory A; Stanton, Sasha E; Disis, Mary L
2015-12-16
The importance of the immune system in tumor development and progression has been emerging in many cancers. Previous cancer vaccines have not shown long-term clinical benefit, possibly because they were not designed to avoid eliciting regulatory T-cell responses that inhibit the anti-tumor immune response. This review will examine different methods of identifying epitopes derived from tumor-associated antigens suitable for immunization, and the steps used to design and validate peptide epitopes to improve the efficacy of anti-tumor peptide-based vaccines. Focusing on in silico prediction algorithms, we survey the advantages and disadvantages of current cancer vaccine prediction tools. Copyright © 2015 Elsevier Ltd. All rights reserved.
Material Design, Selection, and Manufacturing Methods for System Sustainment
David Sowder, Jim Lula, Curtis Marshall
2010-02-18
This paper describes a material selection and validation process proven successful for manufacturing high-reliability, long-life products. The National Secure Manufacturing Center business unit of the Kansas City Plant (herein called KCP) designs and manufactures complex electrical and mechanical components used in extreme environments. The material manufacturing heritage is founded in the design-to-manufacturing practices that support the U.S. Department of Energy's National Nuclear Security Administration (DOE/NNSA). Material engineers at KCP work with the system designers to recommend materials, develop test methods, analyze test data, define cradle-to-grave needs, and present the final selection and fielding. The KCP material engineers typically maintain cost control by utilizing commercial products when possible, but have the resources to develop and produce unique formulations as necessary. This approach is currently being used to mature technologies for manufacturing materials with improved characteristics using nano-composite filler materials that will enhance system design and production. For some products the engineers plan and carry out science-based life-cycle material surveillance processes. Recent examples of the approach include refurbishment manufacturing of the high-voltage power supplies for cockpit displays in operational aircraft; dry-film lubricant application to improve bearing life for guided-munition gyroscope gimbals; ceramic substrate design for electrical circuit manufacturing; and tailored polymeric materials for various systems. These examples show evidence of KCP's concurrent design-to-manufacturing techniques used to achieve system solutions that satisfy or exceed demanding requirements.
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj; Nystrom, G. A.; Bardina, J.; Lombard, C. K.
1987-01-01
This paper describes the application of the conservative supra characteristic method (CSCM) to predict the flow around two-dimensional slot-injection-cooled cavities in hypersonic flow. Seven different numerical solutions are presented that model three different experimental designs. The calculations capture outer-flow conditions, including the effects of nozzle/lip geometry, angle of attack, nozzle inlet conditions, boundary- and shear-layer growth, and turbulence on the surrounding flow. The calculations were performed for analysis prior to wind tunnel testing for sensitivity studies early in the design process. Qualitative and quantitative understanding of the flows for each of the cavity designs and design recommendations are provided. The present paper demonstrates the ability of numerical schemes, such as the CSCM method, to play a significant role in the design process.
Unified computational method for design of fluid loop systems
NASA Astrophysics Data System (ADS)
Furukawa, Masao
1991-12-01
Various kinds of empirical formulas for Nusselt numbers, Fanning friction factors, and pressure loss coefficients were collected and reviewed with the object of constructing a common basis for design calculations of pumped fluid loop systems. The practical expressions obtained after numerical modification are listed in tables with identification numbers corresponding to the configurations of the flow passages. Design procedures for a cold plate and for a space radiator are clearly shown in a series of mathematical relations coupled with a number of detailed expressions, which are put in the tables in order of numerical computation. Weight estimate models and several pump characteristics are given in the tables as a result of data regression. A unified computational method based upon the above procedure is presented for preliminary design analyses of a fluid loop system consisting of cold plates, plane radiators, mechanical pumps, valves, and so on.
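As a flavor of the kind of empirical correlations the paper collects, here is a minimal sketch using the classical Dittus-Boelter Nusselt correlation and a Blasius-type Fanning friction factor. These are standard textbook forms, not the paper's numerically modified expressions, and the function names are illustrative.

```python
def nusselt_dittus_boelter(re: float, pr: float) -> float:
    """Turbulent pipe-flow Nusselt number, Dittus-Boelter form (heating)."""
    return 0.023 * re ** 0.8 * pr ** 0.4

def fanning_friction(re: float) -> float:
    """Blasius-type Fanning friction factor for smooth turbulent pipe flow."""
    return 0.079 * re ** -0.25

def pressure_drop(re: float, length: float, diameter: float,
                  rho: float, velocity: float) -> float:
    """Frictional pressure loss, dp = 4 f (L/D) (rho v^2 / 2), in Pa."""
    return 4.0 * fanning_friction(re) * (length / diameter) * 0.5 * rho * velocity ** 2

# illustrative loop segment: Re = 1e4, Pr = 0.7, 1 m of 10 mm tube
nu = nusselt_dittus_boelter(1.0e4, 0.7)
dp = pressure_drop(1.0e4, 1.0, 0.01, 1000.0, 1.0)
```

In a loop-sizing pass, correlations like these would be evaluated per flow-passage configuration, matching the paper's table-lookup organization.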
USER-derived cloning methods and their primer design.
Salomonsen, Bo; Mortensen, Uffe H; Halkier, Barbara A
2014-01-01
Uracil excision-based cloning through USER™ (Uracil-Specific Excision Reagent) is an efficient ligase-free cloning technique that comprises USER cloning, USER fusion, and USER cassette-free (UCF) USER fusion. These USER-derived cloning techniques enable seamless assembly of multiple DNA fragments in one construct. Though governed by a few simple rules, primer design for USER-based fusion of PCR fragments can prove time-consuming for inexperienced users. The Primer Help for USER (PHUSER) software is an easy-to-use primer design tool for USER-based methods. In this chapter, we present a PHUSER software protocol for designing primers for USER-derived cloning techniques.
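The basic USER primer rule can be illustrated with a deliberately simplified sketch (this is not the PHUSER algorithm): the primer carries a 5' overlap tail that starts with A and ends with T, with that terminal T substituted by deoxyuracil (U) so the USER enzyme can later excise the 5' fragment and expose a single-stranded overhang. Sequences and helper names below are hypothetical.

```python
def user_tail(overlap: str) -> str:
    """Turn an A...T overlap into a USER tail by swapping the final T for U."""
    if not (overlap.startswith("A") and overlap.endswith("T")):
        raise ValueError("simplified rule expects an overlap of the form A...T")
    return overlap[:-1] + "U"

def user_primer(overlap: str, anneal: str) -> str:
    """Concatenate the uracil-containing tail with the template-annealing region."""
    return user_tail(overlap) + anneal

# hypothetical junction: 7-nt overlap plus a 9-nt annealing region
primer = user_primer("AGGCTAT", "GCCGTAAGC")
```

A real design tool additionally checks overlap length, melting temperature, and that no internal T disrupts the excision site, which is exactly the bookkeeping PHUSER automates.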
A simple design method of negative refractive index metamaterials
NASA Astrophysics Data System (ADS)
Kim, Dongho; Lee, Wangju; Choi, Jaeick
2009-11-01
We propose a very simple design method for negative refractive index (NRI) materials that can overcome some drawbacks of conventional resonant-type NRI materials. The proposed NRI materials consist of single or double metallic patterns printed on a dielectric substrate. Our metamaterials (MTMs) show two properties that differ from other types of MTMs in obtaining effective negative values of permittivity (ε) and permeability (μ) simultaneously: the geometrical outlines of the metallic patterns are not confined to any specific shape, and the metallic patterns are printed on only one side of the dielectric substrate. Therefore, they are very easy to design and fabricate using common printed circuit board (PCB) technology according to the intended application. Excellent agreement between the experimental and predicted data confirms the validity of our design approach.
Applying Human-Centered Design Methods to Scientific Communication Products
NASA Astrophysics Data System (ADS)
Burkett, E. R.; Jayanty, N. K.; DeGroot, R. M.
2016-12-01
Knowing your users is a critical part of developing anything to be used or experienced by a human being. User interviews, journey maps, and personas are all techniques commonly employed in human-centered design practices because they have proven effective for informing the design of products and services that meet the needs of users. Many non-designers are unaware of the usefulness of personas and journey maps. Scientists who are interested in developing more effective products and communication can adopt and employ user-centered design approaches to better reach intended audiences. Journey mapping is a qualitative data-collection method that captures the story of a user's experience over time as related to the situation or product that requires development or improvement. Journey maps help define user expectations, where they are coming from, what they want to achieve, what questions they have, their challenges, and the gaps and opportunities that can be addressed by designing for them. A persona is a tool used to describe the goals and behavioral patterns of a subset of potential users or customers. The persona is a qualitative data model that takes the form of a character profile, built upon data about the behaviors and needs of multiple users. Gathering data directly from users avoids the risk of basing models on assumptions, which are often limited by misconceptions or gaps in understanding. Journey maps and user interviews together provide the data necessary to build the composite character that is the persona. Because a persona models the behaviors and needs of the target audience, it can then be used to make informed product design decisions. We share the methods and advantages of developing and using personas and journey maps to create more effective science communication products.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-01
... AGENCY Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent... of lead (Pb) in the ambient air. FOR FURTHER INFORMATION CONTACT: Robert Vanderpool, Human Exposure... CFR Part 53, the EPA evaluates various methods for monitoring the concentrations of those ambient...
Helicopter flight-control design using an H(2) method
NASA Technical Reports Server (NTRS)
Takahashi, Marc D.
1991-01-01
Rate-command and attitude-command flight-control designs for a UH-60 helicopter in hover are presented and were synthesized using an H(2) method. Using weight functions, this method allows the direct shaping of the singular values of the sensitivity, complementary sensitivity, and control input transfer-function matrices to give acceptable feedback properties. The designs were implemented on the Vertical Motion Simulator, and four low-speed hover tasks were used to evaluate the control system characteristics. The pilot comments from the accel-decel, bob-up, hovering turn, and side-step tasks indicated good decoupling and quick response characteristics. However, an underlying roll PIO tendency was found to exist away from the hover condition, which was caused by a flap regressing mode with insufficient damping.
Optical design and active optics methods in astronomy
NASA Astrophysics Data System (ADS)
Lemaitre, Gerard R.
2013-03-01
Optical designs for astronomy involve implementation of active optics and adaptive optics from the X-ray to the infrared. Developments and results of active optics methods for telescopes, spectrographs and coronagraph planet finders are presented. The high accuracy and remarkable smoothness of surfaces generated by active optics methods also allow elaborating new optical design types with highly aspheric and/or non-axisymmetric surfaces. Depending on the goal and performance requested for a deformable optical surface, analytical investigations are carried out with one of the various facets of elasticity theory: small-deformation thin-plate theory, large-deformation thin-plate theory, shallow spherical shell theory, and weakly conical shell theory. The resulting thickness distribution and associated bending force boundaries can be refined further with finite element analysis.
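The small-deformation thin-plate case mentioned above rests on the classical Kirchhoff-Love plate equation; as a reminder of the quantities involved (with $w$ the normal deflection, $q$ the load per unit area, $t$ the plate thickness, $E$ Young's modulus, and $\nu$ Poisson's ratio):

```latex
D\,\nabla^{4} w = q,
\qquad
D = \frac{E\,t^{3}}{12\,(1-\nu^{2})}
```

In active optics the roles are inverted: the desired aspheric deflection $w$ is prescribed, and the thickness distribution $t$ and the applied force/pressure boundaries become the design unknowns.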
National Tuberculosis Genotyping and Surveillance Network: Design and Methods
Braden, Christopher R.; Schable, Barbara A.; Onorato, Ida M.
2002-01-01
The National Tuberculosis Genotyping and Surveillance Network was established in 1996 to perform a 5-year, prospective study of the usefulness of genotyping Mycobacterium tuberculosis isolates to tuberculosis control programs. Seven sentinel sites identified all new cases of tuberculosis, collected information on patients and contacts, and obtained patient isolates. Seven genotyping laboratories performed DNA fingerprinting analysis by the international standard IS6110 method. BioImage Whole Band Analyzer software was used to analyze patterns, and distinct patterns were assigned unique designations. Isolates with six or fewer bands on IS6110 patterns were also spoligotyped. Patient data and genotyping designations were entered in a relational database and merged with selected variables from the national surveillance database. In two related databases, we compiled the results of routine contact investigations and the results of investigations of the relationships of patients who had isolates with matching genotypes. We describe the methods used in the study. PMID:12453342
Simplified Analysis Methods for Primary Load Designs at Elevated Temperatures
Carter, Peter; Jetter, Robert I; Sham, Sam
2011-01-01
The use of simplified (reference stress) analysis methods is discussed and illustrated for primary load high temperature design. Elastic methods are the basis of the ASME Section III, Subsection NH primary load design procedure. There are practical drawbacks with this approach, particularly for complex geometries and temperature gradients. The paper describes an approach that addresses these difficulties through the use of temperature-dependent elastic-perfectly plastic analysis. Correction factors are defined to address difficulties traditionally associated with discontinuity stresses, inelastic strain concentrations and multiaxiality. A procedure is identified to provide insight into how this approach could be implemented, but clearly there is additional work to be done to define and clarify the procedural steps to bring it to the point where it could be adopted into code language.
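The reference stress idea can be illustrated for the simplest textbook case, a rectangular section in pure bending: the reference stress is the yield stress scaled by the ratio of applied load to limit load, and for this geometry the yield stress cancels out. This is an illustration of the concept, not the paper's correction-factor procedure.

```python
def reference_stress(moment: float, width: float, depth: float,
                     yield_stress: float) -> float:
    """Reference stress for a rectangular section in pure bending:
    sigma_ref = sigma_y * M / M_L, with limit moment M_L = sigma_y*b*h^2/4,
    so sigma_ref = 4*M/(b*h^2), independent of the yield stress assumed."""
    limit_moment = yield_stress * width * depth ** 2 / 4.0
    return yield_stress * moment / limit_moment

# illustrative beam: M = 1 kN*m, b = 50 mm, h = 100 mm
sigma_ref = reference_stress(1000.0, 0.05, 0.1, 200.0e6)  # Pa
```

The load-to-limit-load ratio generalizes to complex geometries via the elastic-perfectly plastic analysis the paper describes, which is what makes the method attractive where elastic stress classification is awkward.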
Preliminary demonstration of a robust controller design method
NASA Technical Reports Server (NTRS)
Anderson, L. R.
1980-01-01
Alternative computational procedures for obtaining a feedback control law which yields a control signal based on measurable quantities are evaluated. The three methods evaluated are: (1) the standard linear quadratic regulator (LQR) design model; (2) minimization of the norm of the feedback matrix K via nonlinear programming, subject to the constraint that the closed-loop eigenvalues lie in a specified domain of the complex plane; and (3) maximization of the angles between the closed-loop eigenvectors combined with minimization of the norm of K, also via constrained nonlinear programming. The third, or robust, design method was chosen to yield a closed-loop system whose eigenvalues are insensitive to small changes in the A and B matrices. The relationship between orthogonality of closed-loop eigenvectors and the sensitivity of closed-loop eigenvalues is described. Computer programs are described.
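Method (1), the standard LQR design, can be sketched via the Hamiltonian-matrix solution of the continuous algebraic Riccati equation. This is a generic textbook construction, not the paper's programs; the double-integrator example has the known solution K = [1, sqrt(3)].

```python
import numpy as np

def lqr(A, B, Q, R):
    """Solve the continuous algebraic Riccati equation via the stable
    invariant subspace of the Hamiltonian matrix, then K = R^-1 B^T P."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, v = np.linalg.eig(H)
    stable = v[:, w.real < 0]          # eigenvectors spanning the stable subspace
    X, Y = stable[:n, :], stable[n:, :]
    P = np.real(Y @ np.linalg.inv(X))  # Riccati solution
    K = Rinv @ B.T @ P                 # state-feedback gain
    return K, P

# double integrator: xdot = [x2, u]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K, P = lqr(A, B, np.eye(2), np.eye(1))
```

Methods (2) and (3) replace this closed-form gain with a nonlinear program over K, trading optimality of the quadratic cost for eigenvalue placement and eigenvector conditioning.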
Attenuator design method for dedicated whole-core CT.
Li, Mengfei; Zhao, Yunsong; Zhang, Peng
2016-10-03
In whole-core CT imaging, scanned data corresponding to the central portion of a cylindrical core often suffer from photon starvation, because increasing photon flux will cause overflow on some detector units under the restriction of detector dynamic range. Either photon starvation or data overflow will lead to increased noise or severe artifacts in the reconstructed CT image. In addition, cupping shaped beam hardening artifacts also appear in the whole-core CT image. In this paper, we present a method to design an attenuator for cone beam whole-core CT, which not only reduces the dynamic range requirement for high SNR data scanning, but also corrects beam hardening artifacts. Both simulation and real data are employed to verify our design method.
A Requirements-Driven Optimization Method for Acoustic Treatment Design
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
2016-01-01
Acoustic treatment designers have long been able to target specific noise sources inside turbofan engines. Facesheet porosity and cavity depth are key design variables of perforate-over-honeycomb liners that determine levels of noise suppression as well as the frequencies at which suppression occurs. Layers of these structures can be combined to create a robust attenuation spectrum that covers a wide range of frequencies. Looking to the future, rapidly-emerging additive manufacturing technologies are enabling new liners with multiple degrees of freedom, and new adaptive liners with variable impedance are showing promise. More than ever, there is greater flexibility and freedom in liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. The subject of this paper is an analytical method that derives the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground.
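The cavity-depth/frequency link mentioned above can be sketched with the hard-walled backing-cavity approximation, which ignores the facesheet's mass reactance and resistance. This is a deliberate simplification of a perforate-over-honeycomb liner, not the paper's multidisciplinary model; the 2 kHz target is an illustrative value.

```python
import math

def cavity_depth_for_peak(freq_hz: float, c: float = 343.0) -> float:
    """Quarter-wave cavity depth (m) placing the liner anti-resonance at freq_hz."""
    return c / (4.0 * freq_hz)

def cavity_reactance(freq_hz: float, depth_m: float, c: float = 343.0) -> float:
    """Normalized reactance of a hard-walled backing cavity, -cot(kL)."""
    kL = 2.0 * math.pi * freq_hz * depth_m / c
    return -1.0 / math.tan(kL)

depth = cavity_depth_for_peak(2000.0)   # ~43 mm for a 2 kHz attenuation peak
```

Adding the facesheet impedance (porosity-dependent resistance plus mass reactance) shifts and broadens this peak, which is exactly the degree of freedom the optimization in the paper exploits.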
A method for the aerodynamic design of dry powder inhalers.
Ertunç, O; Köksoy, C; Wachtel, H; Delgado, A
2011-09-15
An inhaler design methodology was developed and then used to design a new dry powder inhaler (DPI) which aimed to fulfill two main performance requirements. The first requirement was that the patient should be able to completely empty the dry powder from the blister in which it is stored by inspiratory effort alone. The second requirement was that the flow resistance of the inhaler should be geared to optimum patient comfort. The emptying of a blister is a two-phase flow problem, whilst the adjustment of the flow resistance is an aerodynamic design problem. The core of the method comprised visualization of fluid and particle flow in upscaled prototypes operated in water. The prototypes and particles were upscaled so that dynamic similarity conditions were approximated as closely as possible. The initial step in the design method was to characterize different blister prototypes by measurements of their flow resistance and particle emptying performance. The blisters were then compared with regard to their aerodynamic performance and their ease of production. Following selection of candidate blisters, the other components such as needle, bypass and mouthpiece were dimensioned on the basis of node-loop operations and validation experiments. The final shape of the inhaler was achieved by experimental iteration. Copyright © 2011 Elsevier B.V. All rights reserved.
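The flow-resistance requirement can be illustrated with the square-root resistance definition commonly used for dry powder inhalers, which behave roughly as turbulent orifices (pressure drop proportional to flow rate squared). The numbers below are hypothetical, not measurements from the paper.

```python
import math

def inhaler_resistance(delta_p_kpa: float, flow_l_min: float) -> float:
    """Device resistance R = sqrt(dP)/Q, in kPa^0.5 per (L/min)."""
    return math.sqrt(delta_p_kpa) / flow_l_min

# hypothetical operating point: 4 kPa inspiratory effort drawing 60 L/min
r = inhaler_resistance(4.0, 60.0)
```

Because R is (approximately) a device constant, measuring it on an upscaled water prototype and rescaling by dynamic similarity is what lets the flow resistance be "geared to patient comfort" before any powder testing.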
Application of an optimization method to high performance propeller designs
NASA Technical Reports Server (NTRS)
Li, K. C.; Stefko, G. L.
1984-01-01
The application of an optimization method to determine the propeller blade twist distribution which maximizes propeller efficiency is presented. The optimization employs a previously developed method which has been improved to include the effects of blade drag, camber and thickness. Before the optimization portion of the computer code is used, comparisons of calculated propeller efficiencies and power coefficients are made with experimental data for one NACA propeller at Mach numbers in the range of 0.24 to 0.50 and another NACA propeller at a Mach number of 0.71 to validate the propeller aerodynamic analysis portion of the computer code. Then comparisons of calculated propeller efficiencies for the optimized and the original propellers show the benefits of the optimization method in improving propeller performance. This method can be applied to the aerodynamic design of propellers having straight, swept, or nonplanar propeller blades.
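The efficiency and power-coefficient comparison mentioned above uses the standard propeller nondimensional relations; as a reminder, with illustrative values rather than the NACA data:

```python
def propeller_efficiency(advance_ratio: float, ct: float, cp: float) -> float:
    """Propulsive efficiency eta = J * CT / CP from the standard coefficients
    CT = T/(rho n^2 D^4) and CP = P/(rho n^3 D^5)."""
    return advance_ratio * ct / cp

# illustrative operating point, not measured data
eta = propeller_efficiency(0.85, 0.10, 0.10)
```

The optimization in the paper adjusts the twist distribution to maximize this ratio at a fixed power coefficient, so validating CT and CP predictions against experiment is the natural first step.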
Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2
NASA Technical Reports Server (NTRS)
Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)
2000-01-01
A new MDO method, BLISS, and two variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and are evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a system optimization into several subtask optimizations, which may be executed concurrently, and a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited to exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.
Computational methods for drug design and discovery: focus on China.
Zheng, Mingyue; Liu, Xian; Xu, Yuan; Li, Honglin; Luo, Cheng; Jiang, Hualiang
2013-10-01
In the past decades, China's computational drug design and discovery research has experienced rapid development through various novel methodologies. Application of these methods spans a wide range, from drug target identification to hit discovery and lead optimization. In this review, we first provide an overview of China's status in this field and briefly analyze the possible reasons for this rapid advancement. The methodology development is then outlined. For each selected method, a short background precedes an assessment of the method with respect to the needs of drug discovery, and, in particular, work from China is highlighted. Furthermore, several successful applications of these methods are illustrated. Finally, we conclude with a discussion of current major challenges and future directions of the field.
A method of designing clinical trials for combination drugs.
Pigeon, J G; Copenhaver, M D; Whipple, J P
1992-06-15
Many pharmaceutical companies are now exploring combination drug therapies as an alternative to monotherapy. Consequently, it is of interest to investigate the simultaneous dose response relationship of two active drugs to select the lowest effective combination. In this paper, we propose a method for designing clinical trials for drug combinations that seems to offer several advantages over the 4 x 3 or even larger factorial studies that have been used to date. In addition, our proposed method provides a convenient formula for calculating the required sample size.
Synthesis of aircraft structures using integrated design and analysis methods
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Goetz, R. C.
1978-01-01
Systematic research to develop and validate methods for the structural sizing of an airframe designed with the use of composite materials and active controls is reported. The research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate the structural sizing and the associated active control system that is optimal with respect to a given merit function constrained by strength and aeroelasticity requirements.
Uncertainty-Based Design Methods for Flow-Structure Interactions
2007-06-01
Final report, 2/01/05 - 01/31/07. Title and subtitle: Uncertainty-Based Design Methods for Flow-Structure Interactions. Contract number: N00014-04-1-0007. ...The objective of this project is to develop advanced tools for efficient simulations of flow-structure interactions that account for random excitation and uncertain input... with emphasis on realistic three-dimensional nonlinear representation of the structures of interest. This capability will set the foundation for the
A design method for constellation of lifting reentry vehicles
NASA Astrophysics Data System (ADS)
Xiang, Yu; Kun, Liu
2017-03-01
As the reachable domain of a single lifting reentry vehicle is not large enough to cover the whole globe in a short time, which is disadvantageous for responsive operation, it is of great significance to study how to construct a constellation of several lifting reentry vehicles that can responsively reach any point on the globe. This paper addresses a design method for such a constellation. First, an approach for calculating the reachable domain of a single lifting reentry vehicle is given, using a combination of the Gauss pseudospectral method and the SQP method. Based on that, the entire reachable domain, taking the limit on responsive time into consideration, is simplified reasonably to reduce the complexity of the problem. Second, a streets-of-coverage (SOC) method is used to design the constellation, and the parameters of the constellation are optimized through simple analysis and comparison. Last, a point-coverage simulation method is utilized to verify the correctness of the optimization result. The verified result shows that six lifting reentry vehicles whose maximum lift-to-drag ratio is 1.7 can reach nearly any point on the earth's surface between -50° and 50° latitude in less than 90 minutes.
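The point-coverage verification step described above can be sketched as a grid check: sample latitude/longitude points in the band of interest and test whether each falls inside at least one vehicle's reach. The fleet geometry (six sub-vehicle points spaced 60° on the equator) and the 60° angular reach radius are illustrative assumptions, not values from the paper.

```python
import math

def gc_deg(lat1, lon1, lat2, lon2):
    """Great-circle separation between two points, in degrees of arc."""
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    c = (math.sin(p1) * math.sin(p2)
         + math.cos(p1) * math.cos(p2) * math.cos(l1 - l2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def fully_covered(subpoints, reach_deg, lat_band=50, step=5):
    """True if every grid point in the latitude band is within reach_deg
    of at least one vehicle sub-point."""
    for lat in range(-lat_band, lat_band + 1, step):
        for lon in range(-180, 180, step):
            if all(gc_deg(lat, lon, p_lat, p_lon) > reach_deg
                   for p_lat, p_lon in subpoints):
                return False
    return True

# hypothetical fleet: six vehicles, sub-points 60 deg apart on the equator
fleet = [(0, -180 + 60 * k) for k in range(6)]
```

A circular reach footprint is itself a simplification: a real lifting reentry footprint is asymmetric and time-limited, which is why the paper simplifies the reachable domain before the streets-of-coverage layout.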
Design Methods for Load-bearing Elements from Crosslaminated Timber
NASA Astrophysics Data System (ADS)
Vilguts, A.; Serdjuks, D.; Goremikins, V.
2015-11-01
Cross-laminated timber is an environmentally friendly material, which possesses a decreased level of anisotropy in comparison with solid and glued timber. Cross-laminated timber could be used for load-bearing walls and slabs of multi-storey timber buildings, as well as for decking structures of pedestrian and road bridges. Design methods for cross-laminated timber elements subjected to bending, and to compression combined with bending, were considered. The presented methods were experimentally validated and verified by FEM. Two cross-laminated timber slabs were tested under static load. Pine wood was chosen as the board material. The design scheme of the considered plates was a freely supported beam with a span of 1.9 m, loaded by a uniformly distributed load. The width of the plates was 1 m. The considered cross-laminated timber plates were also analysed by FEM. The comparison of the stresses acting in the edge fibres of the plates and the maximum vertical displacements shows that both considered methods can be used for engineering calculations. The difference between the results obtained experimentally and analytically ranges from 2 to 31%. The difference between the results obtained by the effective strength and stiffness method and by the transformed-sections method was not significant.
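The transformed-sections idea can be sketched for a symmetric layup under the common simplification that cross layers (grain perpendicular to the span) are neglected, since E90 is much smaller than E0. The three-layer dimensions and modulus below are illustrative, not the tested panels.

```python
def clt_ei(b, thicknesses, spans_parallel, e0):
    """Bending stiffness EI (N*m^2) of a CLT strip of width b, summing the
    own-inertia and parallel-axis (Steiner) terms of the longitudinal layers.
    Assumes a symmetric layup so the neutral axis sits at mid-depth."""
    total = sum(thicknesses)
    ei, z_bottom = 0.0, 0.0
    for t, parallel in zip(thicknesses, spans_parallel):
        z_mid = z_bottom + t / 2.0 - total / 2.0   # lever arm to panel mid-depth
        if parallel:                               # cross layers neglected
            ei += e0 * (b * t ** 3 / 12.0 + b * t * z_mid ** 2)
        z_bottom += t
    return ei

# illustrative 3-layer panel: 20 mm layers, 1 m width, E0 = 11 GPa
ei = clt_ei(1.0, [0.02, 0.02, 0.02], [True, False, True], 11.0e9)
```

For unsymmetric layups the neutral-axis position must be found first, and shear-flexible variants (e.g. the gamma method) reduce the Steiner terms to account for rolling-shear compliance of the cross layers.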
Achieving integration in mixed methods designs-principles and practices.
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-12-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods.
Gradient-based optimum aerodynamic design using adjoint methods
NASA Astrophysics Data System (ADS)
Xie, Lei
2002-09-01
Continuous adjoint methods and optimal control theory are applied to a pressure-matching inverse design problem of quasi 1-D nozzle flows. Pontryagin's Minimum Principle is used to derive the adjoint system and the reduced gradient of the cost functional. The properties of adjoint variables at the sonic throat and the shock location are studied, revealing a logarithmic singularity at the sonic throat and continuity at the shock location. A numerical method, based on the Steger-Warming flux-vector-splitting scheme, is proposed to solve the adjoint equations. This scheme can finely resolve the singularity at the sonic throat. A non-uniform grid, with points clustered near the throat region, can resolve it even better. The analytical solutions to the adjoint equations are also constructed via a Green's function approach for the purpose of comparing the numerical results. The pressure-matching inverse design is then conducted for a nozzle parameterized by a single geometric parameter. In the second part, the adjoint methods are applied to the problem of minimizing the drag coefficient, at fixed lift coefficient, for 2-D transonic airfoil flows. Reduced gradients of several functionals are derived through application of a Lagrange Multiplier Theorem. The adjoint system is carefully studied, including the adjoint characteristic boundary conditions at the far-field boundary. A super-reduced design formulation is also explored by treating the angle of attack as an additional state; super-reduced gradients can be constructed either by solving adjoint equations with non-local boundary conditions or by a direct Lagrange multiplier method. In this way, the constrained optimization reduces to an unconstrained design problem. Numerical methods based on Jameson's finite volume scheme are employed to solve the adjoint equations. The same grid system generated from an efficient hyperbolic grid generator is adopted in both the Euler flow solver and the adjoint solver. Several
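The reduced-gradient construction described above follows the standard adjoint pattern; schematically, with flow state $w$, design variable $\alpha$, flow residual $R(w,\alpha)=0$, and cost functional $I(w,\alpha)$, the adjoint variable $\psi$ is chosen to eliminate the expensive state sensitivity $dw/d\alpha$:

```latex
\left(\frac{\partial R}{\partial w}\right)^{\!T}\psi
  = -\left(\frac{\partial I}{\partial w}\right)^{\!T},
\qquad
\frac{dI}{d\alpha}
  = \frac{\partial I}{\partial \alpha}
  + \psi^{T}\,\frac{\partial R}{\partial \alpha}
```

One adjoint solve thus yields the gradient with respect to any number of design variables, which is why the method scales to the airfoil shape problems in the second part.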
Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark
2002-01-01
Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.
Development of quality-by-design analytical methods.
Vogt, Frederick G; Kord, Alireza S
2011-03-01
Quality-by-design (QbD) is a systematic approach to drug development, which begins with predefined objectives, and uses science and risk management approaches to gain product and process understanding and ultimately process control. The concept of QbD can be extended to analytical methods. QbD mandates the definition of a goal for the method, and emphasizes thorough evaluation and scouting of alternative methods in a systematic way to obtain optimal method performance. Candidate methods are then carefully assessed in a structured manner for risks, and are challenged to determine if robustness and ruggedness criteria are satisfied. As a result of these studies, the method performance can be understood and improved if necessary, and a control strategy can be defined to manage risk and ensure the method performs as desired when validated and deployed. In this review, the current state of analytical QbD in the industry is detailed with examples of the application of analytical QbD principles to a range of analytical methods, including high-performance liquid chromatography, Karl Fischer titration for moisture content, vibrational spectroscopy for chemical identification, quantitative color measurement, and trace analysis for genotoxic impurities.
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
An evolutionary method, differential evolution (DE), is first used to solve a relatively difficult problem in extended surface heat transfer wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil; the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
Bayesian methods for the design and analysis of noninferiority trials.
Gamalo-Siebers, Margaret; Gao, Aijun; Lakshminarayanan, Mani; Liu, Guanghan; Natanegara, Fanni; Railkar, Radha; Schmidli, Heinz; Song, Guochen
2016-01-01
The gold standard for evaluating treatment efficacy of a medical product is a placebo-controlled trial. However, when the use of placebo is considered to be unethical or impractical, a viable alternative for evaluating treatment efficacy is through a noninferiority (NI) study where a test treatment is compared to an active control treatment. The minimal objective of such a study is to determine whether the test treatment is superior to placebo. An assumption is made that if the active control treatment remains efficacious, as was observed when it was compared against placebo, then a test treatment that has comparable efficacy with the active control, within a certain range, must also be superior to placebo. Because of this assumption, the design, implementation, and analysis of NI trials present challenges for sponsors and regulators. In designing and analyzing NI trials, substantial historical data are often required on the active control treatment and placebo. Bayesian approaches provide a natural framework for synthesizing the historical data in the form of prior distributions that can effectively be used in the design and analysis of an NI clinical trial. Despite a flurry of recent research activities in the area of Bayesian approaches in medical product development, there are still substantial gaps in recognition and acceptance of Bayesian approaches in NI trial design and analysis. The Bayesian Scientific Working Group of the Drug Information Association provides a coordinated effort to target the education and implementation issues on Bayesian approaches for NI trials. In this article, we provide a review of both frequentist and Bayesian approaches in NI trials, and elaborate on the implementation of two common Bayesian methods: the hierarchical prior method and the meta-analytic-predictive approach. Simulations are conducted to investigate the properties of the Bayesian methods, and some real clinical trial examples are presented for illustration.
Analytical methods for gravity-assist tour design
NASA Astrophysics Data System (ADS)
Strange, Nathan J.
This dissertation develops analytical methods for the design of gravity-assist spacecraft trajectories. Such trajectories are commonly employed by planetary science missions to reach Mercury or the Outer Planets. They may also be used at the Outer Planets for the design of science tours with multiple flybys of those planets' moons. Recent work has also shown applicability to new mission concepts such as NASA's Asteroid Redirect Mission. This work is based on the theory of patched conics. This document applies rigor to the concepts of pumping (i.e. using gravity assists to change orbital energy) and cranking (i.e. using gravity assists to change inclination) to develop several analytic relations with pump and crank angles. In addition, transformations are developed between pump angle, crank angle, and v-infinity magnitude to classical orbit elements. These transformations are then used to describe the limits on orbits achievable via gravity assists of a planet or moon. This is then extended to develop analytic relations for all possible ballistic gravity-assist transfers and one type of propulsive transfer, v-infinity leveraging transfers. The results in this dissertation complement existing numerical methods for the design of these trajectories by providing methods that can guide numerical searches to find promising trajectories and even, in some cases, replace numerical searches altogether. In addition, results from new techniques presented in this dissertation such as Tisserand Graphs, the V-Infinity Globe, and Non-Tangent V-Infinity Leveraging provide additional insight into the structure of the gravity-assist trajectory design problem.
Improved Method of Design for Folding Inflatable Shells
NASA Technical Reports Server (NTRS)
Johnson, Christopher J.
2009-01-01
An improved method of designing complexly shaped inflatable shells to be assembled from gores was conceived for original application to the inflatable outer shell of a developmental habitable spacecraft module having a cylindrical mid-length section with toroidal end caps. The method is also applicable to inflatable shells of various shapes for terrestrial use. The method addresses problems associated with the assembly, folding, transport, and deployment of inflatable shells that may comprise multiple layers and have complex shapes that can include such doubly curved surfaces as toroids and spheres. One particularly difficult problem is that of mathematically defining fold lines on a gore pattern in a double-curvature region. Moreover, because the fold lines in a double-curvature region tend to be curved, there is a practical problem of how to implement the folds. Another problem is that of modifying the basic gore shapes and sizes for the various layers so that when they are folded as part of the integral structure, they do not mechanically interfere with each other at the fold lines. Heretofore, it has been a common practice to design an inflatable shell to be assembled in the deployed configuration, without regard for the need to fold it into compact form. Typically, the result has been that folding has been a difficult, time-consuming process.
Measuring the design of empathetic buildings: a review of universal design evaluation methods.
O Shea, Eoghan Conor; Pavia, Sara; Dyer, Mark; Craddock, Gerald; Murphy, Neil
2016-01-01
Universal design (UD) provides an explanation of good design based on the user perspective, which is outlined through its principles, goals, and related frameworks. The aim of this paper is to provide an overview of the frameworks and methods for UD building evaluations and to describe how close they have come to describing what a universally designed building is. Evaluation approaches are reviewed from the existing literature across a number of spatial disciplines, including UD, human geography and urban studies. Four categories of UD evaluation methods are outlined, including (1) checklist evaluations, (2) value-driven evaluations, (3) holistic evaluations, and (4) invisible evaluations. A number of suggestions are made to aid research aimed at developing UD evaluation in buildings. (1) Design standards and guidelines should be contested or validated where possible; (2) evaluation criteria should be contextual; (3) it may be more practical to have separate methodologies for contextualising UD to allow for the creation of an evaluation tool that is practical in use. Additionally, there is a difficulty in establishing a clear basis for evaluating how empathetic buildings are without expanding the methodological horizons of UD evaluation. Implications for Rehabilitation For universal design (UD) evaluation to address human need requires methods that are culturally, temporally, and typologically specific. Practical instruments for measuring UD need to be divorced from but contingent upon methods that can address local specificities. The process of evaluation can provide knowledge that can contest or validate literature-based sources such as design guidelines or standards. UD evaluation requires constant renewal by searching for new, flexible strategies that can respond to socio-cultural change.
An experimental design method leading to chemical Turing patterns.
Horváth, Judit; Szalai, István; De Kepper, Patrick
2009-05-08
Chemical reaction-diffusion patterns often serve as prototypes for pattern formation in living systems, but only two isothermal single-phase reaction systems have produced sustained stationary reaction-diffusion patterns so far. We designed an experimental method to search for additional systems on the basis of three steps: (i) generate spatial bistability by operating autoactivated reactions in open spatial reactors; (ii) use an independent negative-feedback species to produce spatiotemporal oscillations; and (iii) induce a space-scale separation of the activatory and inhibitory processes with a low-mobility complexing agent. We successfully applied this method to a hydrogen-ion autoactivated reaction, the thiourea-iodate-sulfite (TuIS) reaction, and notably produced stationary hexagonal arrays of spots and parallel stripes of pH patterns attributed to a Turing bifurcation. This method could be extended to biochemical reactions.
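The space-scale separation in step (iii) maps onto the classical linear-stability conditions for a Turing bifurcation: the homogeneous steady state must be stable without diffusion but lose stability once the inhibitor diffuses sufficiently faster than the activator. A minimal sketch with illustrative Jacobian values (not the TuIS kinetics):

```python
# Minimal check of the Turing (diffusion-driven) instability conditions
# for a two-species reaction-diffusion system
#   u_t = f(u,v) + D_u ∇²u,  v_t = g(u,v) + D_v ∇²v.
# The Jacobian entries below are illustrative, not the TuIS chemistry.

def turing_unstable(fu, fv, gu, gv, d):
    """d = D_v / D_u. True if a steady state that is stable without
    diffusion becomes unstable when diffusion is switched on."""
    trace = fu + gv
    det = fu * gv - fv * gu
    stable_without_diffusion = trace < 0 and det > 0
    h = d * fu + gv                      # must be positive
    band_exists = h > 0 and h * h > 4 * d * det
    return stable_without_diffusion and band_exists

# An activator-inhibitor Jacobian: patterns require the inhibitor to
# diffuse much faster than the activator (large d).
slow = turing_unstable(1.0, -1.0, 2.0, -1.5, d=2.0)    # inhibitor too slow
fast = turing_unstable(1.0, -1.0, 2.0, -1.5, d=10.0)   # patterns can form
```

For these values the instability threshold sits between the two ratios, which is precisely the separation that the low-mobility complexing agent is meant to create.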
Design of time interval generator based on hybrid counting method
NASA Astrophysics Data System (ADS)
Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge
2016-10-01
Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some "off-the-shelf" TIGs can be employed, the need for a custom test or control system makes a TIG implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on the architecture of the Tapped Delay Line (TDL), whose delay cells are down to a few tens of picoseconds. In this case, FPGA-based TIGs with a fine delay step are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is highlighted. The special structure of the multiplexer is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.
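The hybrid counting idea, coarse clock counts for range plus delay-line taps for resolution, can be sketched numerically. The step sizes below are assumptions for illustration, not the paper's measured parameters:

```python
# Sketch of hybrid counting for a time-interval generator: a coarse
# counter at the system clock period provides the wide range, while a
# tapped delay line supplies the fine (picosecond-scale) remainder.
# Step values are illustrative assumptions.

COARSE_STEP_PS = 5_000   # assumed 200 MHz system clock -> 5 ns per count
FINE_STEP_PS = 11        # assumed tap delay of the delay line

def generate_interval(target_ps):
    """Decompose a target interval into coarse counts and fine taps."""
    coarse_counts, remainder = divmod(target_ps, COARSE_STEP_PS)
    fine_taps = round(remainder / FINE_STEP_PS)
    achieved = coarse_counts * COARSE_STEP_PS + fine_taps * FINE_STEP_PS
    return coarse_counts, fine_taps, achieved

coarse, taps, achieved = generate_interval(1_234_567)
error = achieved - 1_234_567
# The residual error is bounded by roughly half the fine step.
```

The coarse counter alone fixes the range (seconds, given enough counter bits) while the tap selection alone fixes the resolution, which is the complementarity the hybrid scheme exploits.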
Sequence design in lattice models by graph theoretical methods
NASA Astrophysics Data System (ADS)
Sanjeev, B. S.; Patra, S. M.; Vishveshwara, S.
2001-01-01
A general strategy has been developed based on graph theoretical methods, for finding amino acid sequences that take up a desired conformation as the native state. This problem of inverse design has been addressed by assigning topological indices for the monomer sites (vertices) of the polymer on a 3×3×3 cubic lattice. This is a simple design strategy, which takes into account only the topology of the target protein and identifies the best sequence for a given composition. The procedure allows the design of a good sequence for a target native state by assigning weights for the vertices on a lattice site in a given conformation. It is seen across a variety of conformations that the predicted sequences perform well both in sequence and in conformation space, in identifying the target conformation as native state for a fixed composition of amino acids. Although the method is tested in the framework of the HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] it can be used in any context if proper potential functions are available, since the procedure derives unique weights for all the sites (vertices, nodes) of the polymer chain of a chosen conformation (graph).
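The strategy can be illustrated with a toy stand-in for the topological indices: score each site of a fixed conformation by its number of non-bonded lattice contacts, then assign the hydrophobic (H) residues of a fixed HP composition to the most buried sites. The contact-count scoring and the small 8-mer conformation are illustrative simplifications, not the paper's actual graph indices:

```python
# Toy version of topology-based sequence design on a cubic lattice:
# score each monomer site by its non-bonded lattice contacts, then give
# the H residues of a fixed HP composition to the most buried sites.

def design_sequence(conformation, n_hydrophobic):
    occupied = {pos: i for i, pos in enumerate(conformation)}
    scores = []
    for i, (x, y, z) in enumerate(conformation):
        contacts = 0
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            j = occupied.get((x + dx, y + dy, z + dz))
            if j is not None and abs(i - j) > 1:  # neighbour, not a chain bond
                contacts += 1
        scores.append(contacts)
    # Most-contacted (most buried) sites get H; ties broken by chain index.
    order = sorted(range(len(conformation)), key=lambda i: (-scores[i], i))
    sequence = ['P'] * len(conformation)
    for i in order[:n_hydrophobic]:
        sequence[i] = 'H'
    return ''.join(sequence), scores

# A compact 8-mer folded onto the corners of a unit cube.
walk = [(0,0,0), (1,0,0), (1,1,0), (0,1,0),
        (0,1,1), (1,1,1), (1,0,1), (0,0,1)]
seq, scores = design_sequence(walk, n_hydrophobic=2)
```

For this compact fold the two chain ends pick up the most non-bonded contacts, so they receive the hydrophobic residues, mirroring how the paper's vertex weights steer H residues toward topologically central sites.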
Designing A Mixed Methods Study In Primary Care
Creswell, John W.; Fetters, Michael D.; Ivankova, Nataliya V.
2004-01-01
BACKGROUND Mixed methods or multimethod research holds potential for rigorous, methodologically sound investigations in primary care. The objective of this study was to use criteria from the literature to evaluate 5 mixed methods studies in primary care and to advance 3 models useful for designing such investigations. METHODS We first identified criteria from the social and behavioral sciences to analyze mixed methods studies in primary care research. We then used the criteria to evaluate 5 mixed methods investigations published in primary care research journals. RESULTS Of the 5 studies analyzed, 3 included a rationale for mixing based on the need to develop a quantitative instrument from qualitative data or to converge information to best understand the research topic. Quantitative data collection involved structured interviews, observational checklists, and chart audits that were analyzed using descriptive and inferential statistical procedures. Qualitative data consisted of semistructured interviews and field observations that were analyzed using coding to develop themes and categories. The studies showed diverse forms of priority: equal priority, qualitative priority, and quantitative priority. Data collection involved quantitative and qualitative data gathered both concurrently and sequentially. The integration of the quantitative and qualitative data in these studies occurred between data analysis from one phase and data collection from a subsequent phase, while analyzing the data, and when reporting the results. DISCUSSION We recommend instrument-building, triangulation, and data transformation models for mixed methods designs as useful frameworks to add rigor to investigations in primary care. We also discuss the limitations of our study and the need for future research. PMID:15053277
Modified method to improve the design of Petlyuk distillation columns
Zapiain-Salinas, Javier G; Barajas-Fernández, Juan; González-García, Raúl
2014-01-01
Background: A response surface analysis was performed to study the effect of the composition and feeding thermal conditions of ternary mixtures on the number of theoretical stages and the energy consumption of Petlyuk columns. A modification of the pre-design algorithm was necessary for this purpose. Results: The modified algorithm provided feasible results in 100% of the studied cases, compared with only 8.89% for the current algorithm. The proposed algorithm allowed us to attain the desired separations, despite the type of mixture and the operating conditions in the feed stream, something that was not possible with the traditional pre-design method. The results showed that the type of mixture had great influence on the number of stages and on energy consumption. A higher number of stages and a lower consumption of energy were attained with mixtures rich in the light component, while higher energy consumption occurred when the mixture was rich in the heavy component. Conclusions: The proposed strategy expands the search for an optimal design of Petlyuk columns within a feasible region, which allows us to find a feasible design that meets output specifications and low thermal loads. PMID:25061476
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
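The reported link between scatter reduction and reliability can be sketched with a plain Monte Carlo estimate of a failure probability for a generic limit state g = capacity − demand. The distributions are illustrative stand-ins, not the smart-wing model:

```python
import random

# Monte Carlo sketch of the probabilistic-design idea: estimate a
# failure probability P(capacity < demand), then show that shrinking
# the scatter of the dominant random variable lowers it.
# Distribution parameters are illustrative assumptions.

def failure_probability(cap_mean, cap_std, dem_mean, dem_std,
                        n=200_000, seed=1):
    rng = random.Random(seed)
    failures = sum(
        rng.gauss(cap_mean, cap_std) < rng.gauss(dem_mean, dem_std)
        for _ in range(n)
    )
    return failures / n

pf_base = failure_probability(10.0, 1.5, 6.0, 1.0)
pf_tight = failure_probability(10.0, 0.8, 6.0, 1.0)  # reduced capacity scatter
# pf_tight < pf_base: controlling the dominant scatter improves reliability,
# as the sensitivity-factor argument in the abstract predicts.
```

Raising the capacity mean (the variable with a negative sensitivity factor on failure) would reduce the failure probability in the same way.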
Rapid and simple method of qPCR primer design.
Thornton, Brenda; Basu, Chhandak
2015-01-01
Quantitative real-time polymerase chain reaction (qPCR) is a powerful tool for the analysis and quantification of gene expression. It is advantageous compared to the traditional gel-based PCR method, as gene expression can be visualized in real time using a computer. In qPCR, a reporter dye system is used that binds the DNA region of interest and detects DNA amplification. Some of the popular reporter systems used in qPCR are Molecular Beacon(®), SYBR Green(®), and Taqman(®). However, the success of qPCR depends on using optimal primers. Some of the considerations for primer design are GC content, primer self-dimer formation, and secondary structure formation. Freely available software can be used for ideal qPCR primer design. Here we show how to use some freely available web-based software programs (such as Primerquest(®), Unafold(®), and Beacon designer(®)) to design qPCR primers.
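The screening criteria listed above can be sketched as simple checks; the Wallace-rule Tm estimate and the crude 3'-end self-dimer test below are illustrative simplifications of what dedicated primer-design tools compute:

```python
# Minimal sketch of the primer screening checks mentioned above:
# GC content, a rough melting-temperature estimate (Wallace rule),
# and a crude 3'-end self-dimer test. Real designs should rely on
# dedicated tools such as PrimerQuest or UNAFold.

def gc_content(primer):
    p = primer.upper()
    return (p.count('G') + p.count('C')) / len(p)

def wallace_tm(primer):
    """Tm ≈ 2(A+T) + 4(G+C); a rough rule valid only for short oligos."""
    p = primer.upper()
    return 2 * (p.count('A') + p.count('T')) + 4 * (p.count('G') + p.count('C'))

COMPLEMENT = str.maketrans('ACGT', 'TGCA')

def three_prime_self_dimer(primer, n=4):
    """True if the primer's 3' end is reverse-complementary to a region
    of itself (a common source of primer-dimer artifacts)."""
    p = primer.upper()
    tail_rc = p[-n:].translate(COMPLEMENT)[::-1]
    return tail_rc in p

primer = 'AGCGGATAACAATTTCACAC'   # an arbitrary example sequence
gc = gc_content(primer)
tm = wallace_tm(primer)
dimer = three_prime_self_dimer(primer)
```

A typical screen would accept primers with GC content near 40–60%, a Tm near the assay's annealing conditions, and no 3'-end self-complementarity; the thresholds themselves are assay-dependent.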
A geometric method for optimal design of color filter arrays.
Hao, Pengwei; Li, Yan; Lin, Zhouchen; Dubois, Eric
2011-03-01
A color filter array (CFA) used in a digital camera is a mosaic of spectrally selective filters, which allows only one color component to be sensed at each pixel. The missing two components of each pixel have to be estimated by methods known as demosaicking. The demosaicking algorithm and the CFA design are crucial for the quality of the output images. In this paper, we present a CFA design methodology in the frequency domain. The frequency structure, which is shown to be just the symbolic DFT of the CFA pattern (one period of the CFA), is introduced to represent images sampled with any rectangular CFAs in the frequency domain. Based on the frequency structure, the CFA design involves the solution of a constrained optimization problem that aims at minimizing the demosaicking error. To decrease the number of parameters and speed up the parameter searching, the optimization problem is reformulated as the selection of geometric points on the boundary of a convex polygon or the surface of a convex polyhedron. Using our methodology, several new CFA patterns are found, which outperform the currently commercialized and published ones. Experiments demonstrate the effectiveness of our CFA design methodology and the superiority of our new CFA patterns.
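The frequency-structure idea can be illustrated numerically for the familiar Bayer pattern: taking the DFT of each channel's 2×2 sampling mask shows a luma-like combination of the colors at baseband and chroma-like combinations at the high-frequency corners. A minimal sketch, not the paper's optimization procedure:

```python
import numpy as np

# Bayer period masks: which pixels of the 2x2 period sample each channel.
masks = {
    'R': np.array([[1, 0], [0, 0]], dtype=float),
    'G': np.array([[0, 1], [1, 0]], dtype=float),
    'B': np.array([[0, 0], [0, 1]], dtype=float),
}

# Frequency structure: the DFT of each channel's period, normalized by
# the period size, gives the weight of each color at each spatial
# frequency of the sampled image.
freq = {c: np.fft.fft2(m) / m.size for c, m in masks.items()}

baseband = np.array([freq[c][0, 0].real for c in 'RGB'])  # luma-like
corner   = np.array([freq[c][1, 1].real for c in 'RGB'])  # chroma-like
```

The baseband weights (0.25, 0.5, 0.25) form the luma component, while the opposite-signed green weight at the (π, π) corner is a chroma component; CFA design in this framework amounts to choosing a pattern whose chroma components are placed to minimize demosaicking error.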
Optimization design of thumbspica splint using finite element method.
Huang, Tz-How; Feng, Chi-Kung; Gung, Yih-Wen; Tsai, Mei-Wun; Chen, Chen-Sheng; Liu, Chien-Lin
2006-12-01
De Quervain's tenosynovitis is often observed on repetitive flexion of the thumb. In the clinical setting, the conservative treatment is usually an applied thumbspica splint to immobilize the thumb. However, the traditional thumbspica splint is bulky and heavy. Thus, this study used the finite element (FE) method to remove redundant material in order to reduce the splint's weight and increase ventilation. An FE model of a thumbspica splint was constructed using ANSYS9.0 software. A maximum lateral thumb pinch force of 98 N was used as the input loading condition for the FE model. This study implemented topology optimization and design optimization to seek the optimal thickness and shape of the splint. This new design was manufactured and compared with the traditional thumbspica splint. Ten thumbspica splints were tested in a materials testing system, and statistically analyzed using an independent t test. The optimal thickness of the thumbspica splint was 3.2 mm. The new design is not significantly different from the traditional splint in the immobilization effect. However, the volume of this new design has been reduced by about 35%. This study produced a new thumbspica splint shape with less volume, but had a similar immobilization effect compared to the traditional shape. In a clinical setting, this result can be used by the occupational therapist as a reference for manufacturing lighter thumbspica splints for patients with de Quervain's tenosynovitis.
Optimal experimental design with the sigma point method.
Schenkendorf, R; Kremling, A; Mangold, M
2009-01-01
Using mathematical models for a quantitative description of dynamical systems requires the identification of uncertain parameters by minimising the difference between simulation and measurement. Owing to measurement noise, the estimated parameters also possess an uncertainty expressed by their variances. To obtain highly predictive models, very precise parameters are needed. Optimal experimental design (OED), as a numerical optimisation method, is used to reduce the parameter uncertainty by minimising the parameter variances iteratively. A frequently applied method to define a cost function for OED is based on the inverse of the Fisher information matrix. The application of this traditional method has at least two shortcomings for models that are nonlinear in their parameters: (i) it gives only a lower bound of the parameter variances and (ii) the bias of the estimator is neglected. Here, the authors show that by applying the sigma point (SP) method a better approximation of characteristic values of the parameter statistics can be obtained, which has a direct benefit for OED. An additional advantage of the SP method is that it can also be used to investigate the influence of the parameter uncertainties on the simulation results. The SP method is demonstrated for the example of a widely used biological model.
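A minimal scalar sketch of the SP idea: propagate sigma points of θ ~ N(μ, σ²) through a nonlinear model and recover the mean bias that first-order linearisation misses. The quadratic model and the parameter values are illustrative, not the biological model from the paper:

```python
import math

# Sigma point (unscented) transform for a scalar parameter
# theta ~ N(mu, var) pushed through a nonlinear model y = f(theta).
# Unlike linearisation, the sigma points capture the bias that the
# nonlinearity adds to the mean of y.

def sigma_point_mean_var(f, mu, var, kappa=2.0):
    n = 1  # scalar case
    spread = math.sqrt((n + kappa) * var)
    points = [mu, mu + spread, mu - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    ys = [f(p) for p in points]
    mean = sum(w * y for w, y in zip(weights, ys))
    variance = sum(w * (y - mean) ** 2 for w, y in zip(weights, ys))
    return mean, variance

f = lambda t: t ** 2               # illustrative nonlinear model
mean, variance = sigma_point_mean_var(f, mu=1.0, var=0.04)
# Linearisation predicts E[y] ≈ 1.0; the exact value is mu² + var = 1.04,
# which the sigma points reproduce.
```

For this quadratic model the sigma point mean matches the exact mean, illustrating why the SP approximation of the parameter statistics can outperform the FIM-based lower bound in nonlinear settings.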
RFQ Designs and Beam-Loss Distributions for IFMIF
Jameson, Robert A
2007-01-01
The IFMIF 125 mA cw 40 MeV accelerators will set an intensity record. Minimization of particle loss along the accelerator is a top-level requirement and requires sophisticated design intimately relating the accelerated beam and the accelerator structure. Such design technique, based on the space-charge physics of linear accelerators (linacs), is used in this report in the development of conceptual designs for the Radio-Frequency-Quadrupole (RFQ) section of the IFMIF accelerators. Design comparisons are given for the IFMIF CDR Equipartitioned RFQ, a CDR Alternative RFQ, and new IFMIF Post-CDR Equipartitioned RFQ designs. Design strategies are illustrated for combining several desirable characteristics, prioritized as minimum beam loss at energies above ~ 1 MeV, low rf power, low peak field, short length, high percentage of accelerated particles. The CDR design has ~0.073% losses above 1 MeV, requires ~1.1 MW rf structure power, has KP factor 1.7, is 12.3 m long, and accelerates ~89.6% of the input beam. A new Post-CDR design has ~0.077% losses above 1 MeV, requires ~1.1 MW rf structure power, has KP factor 1.7 and ~8 m length, and accelerates ~97% of the input beam. A complete background for the designs is given, and comparisons are made. Beam-loss distributions are used as input for nuclear physics simulations of radioactivity effects in the IFMIF accelerator hall, to give information for shielding, radiation safety and maintenance design. Beam-loss distributions resulting from a ~1M particle input distribution representative of the IFMIF ECR ion source are presented. The simulations reported were performed with a consistent family of codes. Relevant comparison with other codes has not been possible as their source code is not available. Certain differences have been noted but are not consistent over a broad range of designs and parameter range. The exact transmission found by any of these codes should be treated as indicative, as each has various sensitivities in
Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation
NASA Technical Reports Server (NTRS)
DePriest, Douglas; Morgan, Carolyn
2003-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.
A new method for designing shock-free transonic configurations
NASA Technical Reports Server (NTRS)
Sobieczky, H.; Fung, K. Y.; Seebass, A. R.; Yu, N. J.
1978-01-01
A method for the design of shock free supercritical airfoils, wings, and three dimensional configurations is described. Results illustrating the procedure in two and three dimensions are given. They include modifications to part of the upper surface of an NACA 64A410 airfoil that will maintain shock free flow over a range of Mach numbers for a fixed lift coefficient, and the modifications required on part of the upper surface of a swept wing with an NACA 64A410 root section to achieve shock free flow. While the results are given for inviscid flow, the same procedures can be employed iteratively with a boundary layer calculation in order to achieve shock free viscous designs. With a shock free pressure field the boundary layer calculation will be reliable and not complicated by the difficulties of shock wave boundary layer interaction.
A Method for Designing CDO Conformed to Investment Parameters
NASA Astrophysics Data System (ADS)
Nakae, Tatsuya; Moritsu, Toshiyuki; Komoda, Norihisa
We propose a method for designing CDOs (Collateralized Debt Obligations) that meet investor needs regarding the attributes of the CDO. It is demonstrated that adjusting the attributes (credit capability and issue amount) of a CDO to investors' preferences causes a capital loss risk that the agent takes. We formulate a CDO optimization problem by defining an objective function using the above risk and by setting constraints that arise from investor needs and a risk premium that is paid to the agent. Our prototype experiment, in which fictitious underlying obligations and investor needs are given, verifies that CDOs can be designed without opportunity loss and dead stock loss, and that the capital loss is not more than a thousandth of the amount of annual payment under guarantee for small and medium-sized enterprises by a general credit guarantee institution.
A Generic Method for Design of Oligomer-Specific Antibodies
Brännström, Kristoffer; Lindhagen-Persson, Malin; Gharibyan, Anna L.; Iakovleva, Irina; Vestling, Monika; Sellin, Mikael E.; Brännström, Thomas; Morozova-Roche, Ludmilla; Forsgren, Lars; Olofsson, Anders
2014-01-01
Antibodies that preferentially and specifically target pathological oligomeric protein and peptide assemblies, as opposed to their monomeric and amyloid counterparts, provide therapeutic and diagnostic opportunities for protein misfolding diseases. Unfortunately, the molecular properties associated with oligomer-specific antibodies are not well understood, and this limits targeted design and development. We present here a generic method that enables the design and optimisation of oligomer-specific antibodies. The method takes a two-step approach where discrimination between oligomers and fibrils is first accomplished through identification of cryptic epitopes exclusively buried within the structure of the fibrillar form. The second step discriminates between monomers and oligomers based on differences in avidity. We show here that a simple divalent mode of interaction, as within e.g. the IgG isotype, can increase the binding strength of the antibody up to 1500 times compared to its monovalent counterpart. We expose how the ability to bind oligomers is affected by the monovalent affinity and the turnover rate of the binding and, importantly, also how oligomer specificity is only valid within a specific concentration range. We provide an example of the method by creating and characterising a spectrum of different monoclonal antibodies against both the Aβ peptide and α-synuclein that are associated with Alzheimer's and Parkinson's diseases, respectively. The approach is however generic, does not require identification of oligomer-specific architectures, and is, in essence, applicable to all polypeptides that form oligomeric and fibrillar assemblies. PMID:24618582
Design of braided composite tubes by numerical analysis method
Hamada, Hiroyuki; Fujita, Akihiro; Maekawa, Zenichiro; Nakai, Asami; Yokoyama, Atsushi
1995-11-01
Conventional composite laminates have very poor through-thickness strength and as a result are limited in their application to structural parts with complex shapes. In this paper, a design method for braided composite tubes is proposed. The concept of an analysis model that spans from a micro model to a macro model is presented. This method was applied to predict the bending rigidity and initial fracture stress under bending load of a braided tube. The proposed analytical procedure can be included as a unit in a CAE system for braided composites.
Methods to Design and Synthesize Antibody-Drug Conjugates (ADCs)
Yao, Houzong; Jiang, Feng; Lu, Aiping; Zhang, Ge
2016-01-01
Antibody-drug conjugates (ADCs) have become a promising targeted therapy strategy that combines the specificity, favorable pharmacokinetics and biodistributions of antibodies with the destructive potential of highly potent drugs. One of the biggest challenges in the development of ADCs is the application of suitable linkers for conjugating drugs to antibodies. Recently, the design and synthesis of linkers are making great progress. In this review, we present the methods that are currently used to synthesize antibody-drug conjugates by using thiols, amines, alcohols, aldehydes and azides. PMID:26848651
Comparison of Optimal Design Methods in Inverse Problems
2011-05-11
corresponding FIM can be estimated by F̂(τ) = F̂(τ, θ̂_OLS) = (Σ̂_N(θ̂_OLS))^(−1). (13) The asymptotic standard errors are given by SE_k(θ_0) = √((Σ_{N_0})_{kk}), k = 1, …, p. (14) These standard errors are estimated in practice (when θ_0 and σ_0 are not known) by SE_k(θ̂_OLS) = √((Σ̂_N(θ̂_OLS))_{kk}), k = 1, …, p, and by SE_k(θ̂_boot) = √(Cov(θ̂_boot)_{kk}). We will compare the optimal design methods using the standard errors resulting from the optimal time points each
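The relations in (13)-(14) can be sketched numerically: invert an estimated Fisher information matrix to get the covariance estimate, then take square roots of its diagonal for the asymptotic standard errors. The 2x2 matrix below is a made-up illustration, not data from the paper.

```python
import numpy as np

# Hypothetical estimated FIM (illustrative values only).
fim = np.array([[4.0, 1.0],
                [1.0, 2.0]])

# Eq. (13): the estimated covariance is the inverse of the FIM.
cov = np.linalg.inv(fim)

# Eq. (14): SE_k is the square root of the k-th diagonal covariance entry.
se = np.sqrt(np.diag(cov))
```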
A Method of Trajectory Design for Manned Asteroids Exploration
NASA Astrophysics Data System (ADS)
Gan, Q. B.; Zhang, Y.; Zhu, Z. F.; Han, W. H.; Dong, X.
2014-11-01
A trajectory optimization method for nuclear-propulsion manned asteroid exploration is presented. For launch windows between 2035 and 2065, the departure-from-Earth and return-to-Earth phases are first searched on the basis of Lambert transfer orbits. The optimal flight trajectory within the feasible regions is then selected by pruning the flight sequences. Setting the nuclear propulsion flight plan as propel-coast-propel, and taking the minimal departure mass as the objective, the nuclear propulsion flight trajectory of each of the three phases is optimized separately using a hybrid method. Using the optimized local parameters of the three phases as initial values, the global parameters are then jointly optimized. Finally, the minimal-departure-mass trajectory design result is given.
A novel observer design method for neural mass models
NASA Astrophysics Data System (ADS)
Liu, Xian; Miao, Dong-Kai; Gao, Qing; Xu, Shi-Yun
2015-09-01
Neural mass models can simulate the generation of electroencephalography (EEG) signals with different rhythms, and therefore the observation of the states of these models plays a significant role in brain research. The structure of neural mass models is special in that they can be expressed as Lurie systems. The developed techniques in Lurie system theory are applicable to these models. We here provide a new observer design method for neural mass models by transforming these models and the corresponding error systems into nonlinear systems with Lurie form. The purpose is to establish appropriate conditions which ensure the convergence of the estimation error. The effectiveness of the proposed method is illustrated by numerical simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 61473245, 61004050, and 51207144).
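The core idea of the abstract, designing an observer whose estimation error provably converges, can be illustrated with a toy linear Luenberger observer; the system matrices and gain below are illustrative stand-ins, not a neural mass model in Lurie form.

```python
import numpy as np

# Toy plant x' = A x, measurement y = C x, and observer
# xhat' = A xhat + L (y - C xhat); L is chosen so A - L C is stable,
# which makes the error e = x - xhat decay to zero.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0], [3.0]])  # A - L C has eigenvalues with negative real part

dt, steps = 0.001, 20000
x = np.array([1.0, 0.0])      # true state (unknown to the observer)
xhat = np.array([0.0, 0.0])   # observer starts from a wrong guess
for _ in range(steps):
    y = C @ x                              # measurement
    x = x + dt * (A @ x)                   # plant step (forward Euler)
    xhat = xhat + dt * (A @ xhat + L @ (y - C @ xhat))  # observer step
err = np.linalg.norm(x - xhat)
```

After 20 simulated seconds the estimation error is negligible, mirroring the convergence conditions the paper establishes for its observer.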
An FPGA-based heterogeneous image fusion system design method
NASA Astrophysics Data System (ADS)
Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong
2011-08-01
Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection, and minimum selection are analyzed and compared. VHDL and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
Performance enhancement of a pump impeller using optimal design method
NASA Astrophysics Data System (ADS)
Jeon, Seok-Yun; Kim, Chul-Kyu; Lee, Sang-Moon; Yoon, Joon-Yong; Jang, Choon-Man
2017-04-01
This paper presents the performance evaluation of a regenerative pump to increase its efficiency using an optimal design method. Two design parameters, which define the shape of the pump impeller, are introduced and analyzed. Pump performance is evaluated by numerical simulation and design of experiments (DOE). To analyze the three-dimensional flow field in the pump, the general analysis code CFX is used in the present work. A shear-stress turbulence model is employed to estimate the eddy viscosity. An experimental apparatus with an open-loop facility is set up for measuring the pump performance. Pump performance, efficiency and pressure, obtained from numerical simulation is validated by comparison with the results of experiments. Through the shape optimization of the pump impeller at the operating flow condition, the pump efficiency is successfully increased by 3 percent compared to the reference pump. It is noted that the pressure increase of the optimum pump is mainly caused by the higher momentum force generated inside the blade passage due to the optimal blade shape. Comparisons of the internal flow in the reference and optimum pumps are also investigated and discussed in detail.
Sensitivity method for integrated structure/active control law design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1987-01-01
The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given along with the need for integrated structures/controls design algorithms. Three alternative approaches to development of an integrated design method are briefly discussed with regards to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear, quadratic cost, Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman Filter solution. Numerical results for a state space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case first wing bending natural frequency.
Novel computational methods to design protein-protein interactions
NASA Astrophysics Data System (ADS)
Zhou, Alice Qinhua; O'Hern, Corey; Regan, Lynne
2014-03-01
Despite the abundance of structural data, we still cannot accurately predict the structural and energetic changes resulting from mutations at protein interfaces. The inadequacy of current computational approaches to the analysis and design of protein-protein interactions has hampered the development of novel therapeutic and diagnostic agents. In this work, we apply a simple physical model that includes only a minimal set of geometrical constraints, excluded volume, and attractive van der Waals interactions to 1) rank the binding affinity of mutants of tetratricopeptide repeat proteins with their cognate peptides, 2) rank the energetics of binding of small designed proteins to the hydrophobic stem region of the influenza hemagglutinin protein, and 3) predict the stability of T4 lysozyme and staphylococcal nuclease mutants. This work will not only lead to a fundamental understanding of protein-protein interactions, but also to the development of efficient computational methods to rationally design protein interfaces with tunable specificity and affinity, and numerous applications in biomedicine. NSF DMR-1006537, PHY-1019147, Raymond and Beverly Sackler Institute for Biological, Physical and Engineering Sciences, and Howard Hughes Medical Institute.
Design of Maternity Pillow by Using Kansei and Taguchi Methods
NASA Astrophysics Data System (ADS)
Ilma Rahmillah, Fety; Nanda kartika, Rachmah
2017-06-01
One of customers' considerations when purchasing a product is whether it satisfies their feelings and emotions; for pregnant women, such a product can also enhance sleep quality. However, most existing products such as maternity pillows are still designed from the company's perspective. This study aims to capture the desires of pregnant women toward a maternity pillow by using kansei words and to analyze the optimal design with the Taguchi method. The eight collected kansei words were durable, aesthetic, comfortable, portable, simple, multifunctional, attractive motif, and easy to maintain. An L16 orthogonal array is used because there are three variables with two levels and four variables with four levels. It can be concluded that the maternity pillow that best satisfies customers combines D1-E2-F2-G2-C1-B2-A2: a U-shaped model, flowery motif, medium color, bag model B, a cotton pillow cover, silicon filling, and a double zipper. On cost grounds, the combination D1-E2-F2-G2-C1-B1-A1 is also possible, switching to a single zipper and dacron filling. In addition, the total percentage of contribution from the ANOVA reaches 95%.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... for Air Pollution Measurement Systems, Volume I,'' EPA/600/R-94/038a and ``Quality Assurance Handbook for Air Pollution Measurement Systems, Volume II, Ambient Air Quality Monitoring Program'' EPA-454/B... AGENCY Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New...
A New Aerodynamic Data Dispersion Method for Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Pinier, Jeremy T.
2011-01-01
A novel method for implementing aerodynamic data dispersion analysis is herein introduced. A general mathematical approach combined with physical modeling tailored to the aerodynamic quantity of interest enables the generation of more realistically relevant dispersed data and, in turn, more reasonable flight simulation results. The method simultaneously allows for the aerodynamic quantities and their derivatives to be dispersed given a set of non-arbitrary constraints, which stresses the controls model in more ways than with the traditional bias up or down of the nominal data within the uncertainty bounds. The adoption and implementation of this new method within the NASA Ares I Crew Launch Vehicle Project has resulted in significant increases in predicted roll control authority, and lowered the induced risks for flight test operations. One direct impact on launch vehicles is a reduced size for auxiliary control systems, and the possibility of an increased payload. This technique has the potential of being applied to problems in multiple areas where nominal data together with uncertainties are used to produce simulations using Monte Carlo type random sampling methods. It is recommended that a tailored physics-based dispersion model be delivered with any aerodynamic product that includes nominal data and uncertainties, in order to make flight simulations more realistic and allow for leaner spacecraft designs.
Development of Analysis Methods for Designing with Composites
NASA Technical Reports Server (NTRS)
Madenci, E.
1999-01-01
The project involved the development of new analysis methods to achieve efficient design of composite structures. We developed a complex variational formulation to analyze the in-plane and bending coupling response of an unsymmetrically laminated plate with an elliptical cutout subjected to arbitrary edge loading as shown in Figure 1. This formulation utilizes four independent complex potentials that satisfy the coupled in-plane and bending equilibrium equations, thus eliminating the area integrals from the strain energy expression. The solution to a finite geometry laminate under arbitrary loading is obtained by minimizing the total potential energy function and solving for the unknown coefficients of the complex potentials. The validity of this approach is demonstrated by comparison with finite element analysis predictions for a laminate with an inclined elliptical cutout under bi-axial loading. The geometry and loading of this laminate with a lay-up of [-45/45] are shown in Figure 2. The deformed configuration shown in Figure 3 reflects the presence of bending-stretching coupling. The validity of the present method is established by comparing the out-of-plane deflections along the boundary of the elliptical cutout from the present approach with those of the finite element method. The comparison shown in Figure 4 indicates remarkable agreement. The details of this method are described in a manuscript by Madenci et al. (1998).
Ceramic bracket design: an analysis using the finite element method.
Ghosh, J; Nanda, R S; Duncanson, M G; Currier, G F
1995-12-01
This investigation was designed to generate finite element models for selected ceramic brackets and graphically display the stress distribution in the brackets when subjected to arch wire torsion and tipping forces. Six commercially available ceramic brackets, one monocrystalline and five polycrystalline alumina, of twin bracket design for the permanent maxillary left central incisor were studied. Three-dimensional computer models of the brackets were constructed and loading forces, similar to those applied by a full-size (0.0215 x 0.028 inch) stainless steel arch wire in torsion and tipping necessary to fracture ceramic brackets, were applied to the models. Stress levels were recorded at relevant points common among the various brackets. High stress levels were observed at areas of abrupt change in geometry and shape. The design of the wire slot and wings for the Contour bracket (Class One Orthodontic Products, Lubbock, Texas) and of the outer edges of the wire slot for the Allure bracket (GAC, Central Islip, N.Y.) were found to be good in terms of even stress distribution. The brackets with an isthmus connecting the wings seemed to resist stresses better than the one bracket that did not have this feature. The design of the isthmus for the Transcend (Unitek/3M, Monrovia, Calif.) and Lumina (Ormco, Glendora, Calif.) brackets were found to be acceptable as well. The Starfire bracket ("A" Company, San Diego, Calif.) showed high stresses and irregular stress distribution, because it had sharp angles, no rounded corners, and no isthmus. The finite element method proved to be a useful tool in the stress analysis of ceramic orthodontic brackets subjected to various forces.(ABSTRACT TRUNCATED AT 250 WORDS)
An analytical filter design method for guided wave phased arrays
NASA Astrophysics Data System (ADS)
Kwon, Hyu-Sang; Kim, Jin-Yeon
2016-12-01
This paper presents an analytical method for designing a spatial filter that processes the data from an array of two-dimensional guided wave transducers. An inverse problem is defined where the spatial filter coefficients are determined in such a way that a prescribed beam shape, i.e., a desired array output is best approximated in the least-squares sense. Taking advantage of the 2π-periodicity of the generated wave field, Fourier-series representation is used to derive closed-form expressions for the constituting matrix elements. Special cases in which the desired array output is an ideal delta function and a gate function are considered in a more explicit way. Numerical simulations are performed to examine the performance of the filters designed by the proposed method. It is shown that the proposed filters can significantly improve the beam quality in general. Most notable is that the proposed method does not compromise between the main lobe width and the sidelobe levels; i.e. a narrow main lobe and low sidelobes are simultaneously achieved. It is also shown that the proposed filter can compensate the effects of nonuniform directivity and sensitivity of array elements by explicitly taking these into account in the formulation. From an example of detecting two separate targets, how much the angular resolution can be improved as compared to the conventional delay-and-sum filter is quantitatively illustrated. Lamb wave based imaging of localized defects in an elastic plate using a circular array is also presented as an example of practical applications.
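The least-squares filter design idea in this abstract can be sketched in a few lines: choose coefficients so that a Fourier-basis array output best matches a desired beam over [0, 2π). The gate-function target, number of terms, and grid below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

N = 8  # number of Fourier orders on each side (assumed)
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)

# Desired array output: a narrow "gate" beam centered at theta = 0.
desired = np.where(np.cos(theta) > np.cos(0.2), 1.0, 0.0)

# Basis matrix exploiting the 2*pi-periodicity: columns e^{i n theta}.
orders = np.arange(-N, N + 1)
A = np.exp(1j * np.outer(theta, orders))

# Filter coefficients that approximate the desired beam in the
# least-squares sense.
w, *_ = np.linalg.lstsq(A, desired.astype(complex), rcond=None)

beam = A @ w
err = np.linalg.norm(beam - desired) / np.linalg.norm(desired)
```

With only 17 terms the narrow gate is matched to moderate relative error; more terms sharpen the main lobe without the usual main-lobe/sidelobe trade-off of fixed-window designs.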
Formal methods in the design of Ada 1995
NASA Technical Reports Server (NTRS)
Guaspari, David
1995-01-01
Formal, mathematical methods are most useful when applied early in the design and implementation of a software system--that, at least, is the familiar refrain. I will report on a modest effort to apply formal methods at the earliest possible stage, namely, in the design of the Ada 95 programming language itself. This talk is an 'experience report' that provides brief case studies illustrating the kinds of problems we worked on, how we approached them, and the extent (if any) to which the results proved useful. It also derives some lessons and suggestions for those undertaking future projects of this kind. Ada 95 is the first revision of the standard for the Ada programming language. The revision began in 1988, when the Ada Joint Programming Office first asked the Ada Board to recommend a plan for revising the Ada standard. The first step in the revision was to solicit criticisms of Ada 83. A set of requirements for the new language standard, based on those criticisms, was published in 1990. A small design team, the Mapping Revision Team (MRT), became exclusively responsible for revising the language standard to satisfy those requirements. The MRT, from Intermetrics, is led by S. Tucker Taft. The work of the MRT was regularly subject to independent review and criticism by a committee of distinguished Reviewers and by several advisory teams--for example, the two User/Implementor teams, each consisting of an industrial user (attempting to make significant use of the new language on a realistic application) and a compiler vendor (undertaking, experimentally, to modify its current implementation in order to provide the necessary new features). One novel decision established the Language Precision Team (LPT), which investigated language proposals from a mathematical point of view. The LPT applied formal mathematical analysis to help improve the design of Ada 95 (e.g., by clarifying the language proposals) and to help promote its acceptance (e.g., by identifying a
Learning physics: A comparative analysis between instructional design methods
NASA Astrophysics Data System (ADS)
Mathew, Easow
The purpose of this research was to determine if there were differences in academic performance between students who participated in traditional versus collaborative problem-based learning (PBL) instructional design approaches to physics curricula. This study utilized a quantitative quasi-experimental design methodology to determine the significance of differences in pre- and posttest introductory physics exam performance between students who participated in traditional (i.e., control group) versus collaborative problem solving (PBL) instructional design (i.e., experimental group) approaches to physics curricula over a college semester in 2008. There were 42 student participants (N = 42) enrolled in an introductory physics course at the research site in the Spring 2008 semester who agreed to participate in this study after reading and signing informed consent documents. A total of 22 participants were assigned to the experimental group (n = 22) who participated in a PBL based teaching methodology along with traditional lecture methods. The other 20 students were assigned to the control group (n = 20) who participated in the traditional lecture teaching methodology. Both the courses were taught by experienced professors who have qualifications at the doctoral level. The results indicated statistically significant differences (p < .01) in academic performance between students who participated in traditional (i.e., lower physics posttest scores and lower differences between pre- and posttest scores) versus collaborative (i.e., higher physics posttest scores, and higher differences between pre- and posttest scores) instructional design approaches to physics curricula. Despite some slight differences in control group and experimental group demographic characteristics (gender, ethnicity, and age) there were statistically significant (p = .04) differences between female average academic improvement which was much higher than male average academic improvement (˜63%) in
PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMEASURES BASED ON THE ΔB METHOD
NASA Astrophysics Data System (ADS)
Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao
Landslide or slope failure is a three-dimensional movement phenomenon, so a three-dimensional treatment makes stability easier to understand. The ΔB method (a simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and is equivalent to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. The analysis can be conducted using conventional spreadsheets or two-dimensional slope stability software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force found along survey lines, which is based on the distribution of survey-line safety factors derived from the above analysis. This paper also presents the transverse distribution method of restraining force used for planning ground stabilization, on the basis of an example analysis.
Basic research on design analysis methods for rotorcraft vibrations
NASA Astrophysics Data System (ADS)
Hanagud, S.
1991-12-01
The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
Designing arrays for modern high-resolution methods
Dowla, F.U.
1987-10-01
A bearing estimation study of seismic wavefields propagating from a strongly heterogeneous medium shows that, with the high-resolution MUSIC algorithm, the bias of the direction estimate can be reduced by adopting a smaller-aperture sub-array. Further, on this sub-array, the bias of the MUSIC algorithm is less than those of the MLM and Bartlett methods. On the full array, the performances of the three methods are comparable. The improvement in MUSIC bearing estimation with a reduced aperture might be attributed to increased signal coherency across the sub-array. For the lower-resolution methods, the improved signal coherency in the smaller array is possibly offset by a severe loss of resolution and the presence of weak secondary sources. Building upon the characteristics of real seismic wavefields, a design language has been developed to generate, modify, and test other arrays. Eigenstructures of wavefields and arrays have been studied empirically by simulation of a variety of realistic signals. 6 refs., 5 figs.
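The MUSIC bearing estimator discussed above can be sketched for a simple uniform linear array; the element count, spacing, source angle, and noise level below are illustrative assumptions, not parameters of the seismic study.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, snapshots = 8, 0.5, 200          # elements, spacing (wavelengths), samples
true_angle = np.deg2rad(20.0)

def steer(a):
    """Steering vector of a plane wave arriving at angle a (broadside=0)."""
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(a))

# Simulated snapshots: one narrowband source plus white noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((M, snapshots))
               + 1j * rng.standard_normal((M, snapshots)))
X = np.outer(steer(true_angle), s) + noise

# Sample covariance; noise subspace = eigenvectors of all but the
# largest eigenvalue (one source assumed).
R = X @ X.conj().T / snapshots
_, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
En = vecs[:, :-1]

# MUSIC pseudospectrum peaks where the steering vector is orthogonal
# to the noise subspace.
grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(a)) ** 2
                     for a in grid])
est_deg = np.rad2deg(grid[np.argmax(spectrum)])
```

Shrinking the aperture (fewer elements) trades resolution for robustness to signal incoherence, which is the effect the study reports for heterogeneous media.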
Information processing systems, reasoning modules, and reasoning system design methods
Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.
2016-08-23
Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.
Method for computationally efficient design of dielectric laser accelerator structures.
Hughes, Tyler; Veronis, Georgios; Wootton, Kent P; Joel England, R; Fan, Shanhui
2017-06-26
Dielectric microstructures have generated much interest in recent years as a means of accelerating charged particles when powered by solid state lasers. The acceleration gradient (or particle energy gain per unit length) is an important figure of merit. To design structures with high acceleration gradients, we explore the adjoint variable method, a highly efficient technique used to compute the sensitivity of an objective with respect to a large number of parameters. With this formalism, the sensitivity of the acceleration gradient of a dielectric structure with respect to its entire spatial permittivity distribution is calculated by the use of only two full-field electromagnetic simulations, the original and 'adjoint'. The adjoint simulation corresponds physically to the reciprocal situation of a point charge moving through the accelerator gap and radiating. Using this formalism, we perform numerical optimizations aimed at maximizing acceleration gradients, which generate fabricable structures of greatly improved performance in comparison to previously examined geometries.
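The adjoint trick described above, the full sensitivity with respect to every parameter from just two solves, can be illustrated on a generic linear system rather than the paper's electromagnetic solver. For A(p) x = b and objective g = c^T x, one adjoint solve A^T λ = c gives dg/dp_k = −λ^T (∂A/∂p_k) x for all k at once; the small random matrices below are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A0 = rng.standard_normal((n, n)) + n * np.eye(n)      # well-conditioned base
dA = [rng.standard_normal((n, n)) for _ in range(3)]  # dA/dp_k, fixed
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def g_of_p(p):
    """Objective g(p) = c^T x where A(p) x = b."""
    A = A0 + sum(pk * D for pk, D in zip(p, dA))
    return c @ np.linalg.solve(A, b)

# Two solves give the whole gradient at p = 0:
x = np.linalg.solve(A0, b)        # forward solve
lam = np.linalg.solve(A0.T, c)    # adjoint solve
grad = np.array([-(lam @ (D @ x)) for D in dA])

# Cross-check one component against a finite difference.
eps = 1e-6
fd = (g_of_p(np.array([eps, 0.0, 0.0])) - g_of_p(np.zeros(3))) / eps
```

This is the same economy the paper exploits: the cost of the gradient is independent of the number of permittivity parameters.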
Development of impact design methods for ceramic gas turbine components
NASA Technical Reports Server (NTRS)
Song, J.; Cuccio, J.; Kington, H.
1990-01-01
Impact damage prediction methods are being developed to aid in the design of ceramic gas turbine engine components with improved impact resistance. Two impact damage modes were characterized: local, near the impact site, and structural, usually fast fracture away from the impact site. Local damage to Si3N4 impacted by Si3N4 spherical projectiles consists of ring and/or radial cracks around the impact point. In a mechanistic model being developed, impact damage is characterized as microcrack nucleation and propagation. The extent of damage is measured as the volume fraction of microcracks. Model capability is demonstrated by simulating plate impact tests. Structural failure is caused by tensile stress during impact exceeding material strength. The EPIC3 code was successfully used to predict blade structural failures in different size particle impacts on radial and axial blades.
POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2013-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well-known technique of generating a large number of samples in a Monte Carlo study and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended, and tabulated values of required sample sizes are shown for some models. PMID:23935262
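The core Monte Carlo power loop the abstract describes can be sketched in a few lines. The sketch below is not the paper's Mplus syntax: it simulates a single-mediator model X → M → Y and estimates power for the joint-significance test of the indirect effect, with the effect sizes, sample size, and normal-approximation critical value all illustrative assumptions.

```python
import math
import random

def slope_t(x, y):
    """OLS slope t statistic for the simple regression y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a0 = my - b * mx
    sse = sum((yi - a0 - b * xi) ** 2 for xi, yi in zip(x, y))
    return b / math.sqrt(sse / (n - 2) / sxx)

def mediation_power(a=0.3, b=0.3, n=100, reps=2000, tcrit=1.96, seed=1):
    """Monte Carlo power estimate for the indirect effect a*b in a
    single-mediator model, using the joint-significance test and a
    normal-approximation critical value (both illustrative choices)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        m = [a * xi + rng.gauss(0, 1) for xi in x]
        y = [b * mi + rng.gauss(0, 1) for mi in m]
        # power = fraction of replicates where both paths are significant
        if abs(slope_t(x, m)) > tcrit and abs(slope_t(m, y)) > tcrit:
            hits += 1
    return hits / reps
```

Swapping in a different data-generating model (multiple mediators, three-path mediation) changes only the simulation step; the power-as-rejection-fraction logic is unchanged.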
Unique Method for Generating Design Earthquake Time Histories
R. E. Spears
2008-07-01
A method has been developed that takes a seed earthquake time history and modifies it to match given design response spectra. It is a multi-step process with an initial scaling step followed by multiple refinement steps. It is unique in that both the acceleration and displacement response spectra are considered when performing the fit (which primarily improves the accuracy of the low-frequency acceleration response spectrum). Additionally, no matrix inversion is needed. Its features include encouraging the code-specified acceleration, velocity, and displacement ratios and attempting to fit the pseudo-velocity response spectrum. “Smoothing” is also applied to transition the modified time history back to the seed time history at its start and end, in the regions below a cumulative energy of 5% and above a cumulative energy of 95%. Finally, the modified acceleration, velocity, and displacement time histories are adjusted to start and end with zero amplitude (using Fourier transform techniques for integration).
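The 5%/95% cumulative-energy "smoothing" step lends itself to a short illustration. The sketch below is an assumption-laden reconstruction, not the author's code: it computes normalized cumulative energy and builds a blending weight that is 1 between the 5% and 95% energy points and falls to 0 at the record ends; the cosine taper is an illustrative choice.

```python
import math

def cumulative_energy(acc):
    """Normalized cumulative energy of an acceleration record (0 to 1)."""
    total = sum(a * a for a in acc)
    e, out = 0.0, []
    for a in acc:
        e += a * a
        out.append(e / total)
    return out

def blend_weights(acc, lo=0.05, hi=0.95):
    """Blending weight per sample: 1 between the lo and hi cumulative-
    energy points, cosine-tapered (an illustrative choice) to 0 at the
    record's start and end, where the seed history should dominate."""
    ce = cumulative_energy(acc)
    i_lo = next(i for i, c in enumerate(ce) if c >= lo)
    i_hi = next(i for i, c in enumerate(ce) if c >= hi)
    w = []
    for i in range(len(acc)):
        if i < i_lo:
            w.append(0.5 - 0.5 * math.cos(math.pi * i / max(i_lo, 1)))
        elif i > i_hi:
            tail = len(acc) - 1 - i_hi
            w.append(0.5 + 0.5 * math.cos(math.pi * (i - i_hi) / max(tail, 1)))
        else:
            w.append(1.0)
    return w
```

The modified record would then be formed as `seed[i] + w[i] * (matched[i] - seed[i])`, so the spectral fit transitions back to the seed outside the 5–95% energy window.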
A Design Method for FES Bone Health Therapy in SCI
Andrews, Brian; Shippen, James; Armengol, Monica; Gibbons, Robin; Holderbaum, William; Harwin, William
2016-01-01
FES-assisted activities such as standing, walking, cycling and rowing induce forces within the leg bones and have been proposed to reduce osteoporosis in spinal cord injury (SCI). However, details of the applied mechanical stimulus for osteogenesis are often not reported. Typically, comparisons of bone density results are made after costly and time-consuming clinical trials. These studies have produced inconsistent results and are subject to sample size variations. Here we propose a design process that may be used to predict the clinical outcome based on biomechanical simulation and mechano-biology. This method may allow candidate therapies to be optimized and quantitatively compared. To illustrate the approach we have used data obtained from a rower with complete paraplegia using the RowStim (III) system. PMID:28078075
Design of composite laminates by a Monte Carlo method
NASA Astrophysics Data System (ADS)
Fang, Chin; Springer, George S.
1993-01-01
A Monte Carlo procedure was developed for optimizing symmetric fiber reinforced composite laminates such that the weight is minimum and the Tsai-Wu strength failure criterion is satisfied in each ply. The laminate may consist of several materials including an idealized core, and may be subjected to several sets of combined in-plane and bending loads. The procedure yields the number of plies, the fiber orientation, and the material of each ply and the material and thickness of the core. A user friendly computer code was written for performing the numerical calculations. Laminates optimized by the code were compared to laminates resulting from existing optimization methods. These comparisons showed that the present Monte Carlo procedure is a useful and efficient tool for the design of composite laminates.
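The Monte Carlo search the abstract describes reduces to random sampling of candidate layups and keeping the lightest feasible one. In the sketch below, which is illustrative and not the paper's code, the per-ply Tsai-Wu evaluation is abstracted behind a caller-supplied predicate, ply count of one symmetric half stands in for laminate weight, and the demo feasibility rule is a toy assumption.

```python
import random

ANGLES = (0, 45, -45, 90)

def monte_carlo_laminate(is_feasible, max_plies=16, iters=5000, seed=0):
    """Random search for the lightest symmetric layup passing a strength
    predicate (standing in for the per-ply Tsai-Wu check); ply count of
    one symmetric half stands in for laminate weight."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        n = rng.randint(1, max_plies)       # plies in one symmetric half
        layup = [rng.choice(ANGLES) for _ in range(n)]
        if (best is None or n < len(best)) and is_feasible(layup):
            best = layup
    return best        # full laminate = best + list(reversed(best))

# toy feasibility rule for illustration only: at least 8 plies, with both
# 0-degree and 90-degree plies present
demo = monte_carlo_laminate(lambda L: len(L) >= 8 and 0 in L and 90 in L)
```

A real implementation would replace the predicate with a ply-by-ply Tsai-Wu check under the applied in-plane and bending loads, and would also sample ply material and core thickness.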
Design method of water jet pump towards high cavitation performances
NASA Astrophysics Data System (ADS)
Cao, L. L.; Che, B. X.; Hu, L. J.; Wu, D. Z.
2016-05-01
As a crucial component of the power system, the propulsion system is of great significance to the advance speed, noise, stability and other critical performance characteristics of underwater vehicles. As underwater vehicles develop towards much higher advance speeds, they place more critical demands on the performance of the propulsion system. Essentially, increased advance speed requires a significantly higher rotation speed of the propulsion system, which deteriorates cavitation performance and consequently limits the thrust and efficiency of the whole system. Compared with the traditional propeller, the waterjet pump offers more favourable cavitation, propulsion efficiency and other associated performance. The present research focuses on the cavitation performance of the waterjet pump blade profile, with the aim of enlarging its advantages in high-speed vehicle propulsion. Based on the specifications of a certain underwater vehicle, the design method of a waterjet blade with high cavitation performance was investigated by numerical simulation.
On the feasibility of a transient dynamic design analysis method
NASA Astrophysics Data System (ADS)
Ohara, George J.; Cunniff, Patrick F.
1992-04-01
This Annual Report summarizes the progress that was made during the first year of the two-year grant from the Office of Naval Research. The dynamic behavior of structures subjected to mechanical shock loading provides a continuing problem for design engineers concerned with shipboard foundations supporting critical equipment. There are two particular problems associated with shock response that are currently under investigation. The first topic explores the possibilities of developing a transient design analysis method that does not degrade the current level of the Navy's shock-proofness requirements for heavy shipboard equipment. The second topic examines the prospects of developing scaling rules for the shock response of simple internal equipment of submarines subjected to various attack situations. This effort has been divided into two tasks: chemical explosive scaling for a given hull; and scaling of equipment response across different hull sizes. The computer is used as a surrogate shock machine for these studies. Hence, the results of the research can provide trends, ideas, suggestions, and scaling rules to the Navy. In using these results, the shock-hardening program should use measured data rather than calculated data.
The Aging, Demographics, and Memory Study: study design and methods.
Langa, Kenneth M; Plassman, Brenda L; Wallace, Robert B; Herzog, A Regula; Heeringa, Steven G; Ofstedal, Mary Beth; Burke, James R; Fisher, Gwenith G; Fultz, Nancy H; Hurd, Michael D; Potter, Guy G; Rodgers, Willard L; Steffens, David C; Weir, David R; Willis, Robert J
2005-01-01
We describe the design and methods of the Aging, Demographics, and Memory Study (ADAMS), a new national study that will provide data on the antecedents, prevalence, outcomes, and costs of dementia and "cognitive impairment, not demented" (CIND) using a unique study design based on the nationally representative Health and Retirement Study (HRS). We also illustrate potential uses of the ADAMS data and provide information to interested researchers on obtaining ADAMS and HRS data. The ADAMS is the first population-based study of dementia in the United States to include subjects from all regions of the country, while at the same time using a single standardized diagnostic protocol in a community-based sample. A sample of 856 individuals age 70 or older who were participants in the ongoing HRS received an extensive in-home clinical and neuropsychological assessment to determine a diagnosis of normal, CIND, or dementia. Within the CIND and dementia categories, subcategories (e.g. Alzheimer's disease, vascular dementia) were assigned to denote the etiology of cognitive impairment. Linking the ADAMS dementia clinical assessment data to the wealth of available longitudinal HRS data on health, health care utilization, informal care, and economic resources and behavior, will provide a unique opportunity to study the onset of CIND and dementia in a nationally representative population-based sample, as well as the risk factors, prevalence, outcomes, and costs of CIND and dementia. Copyright (c) 2005 S. Karger AG, Basel.
Inflammation and Exercise (INFLAME): study rationale, design, and methods
Thompson, Angela; Mikus, Catherine; Rodarte, Ruben Q.; Distefano, Brandy; Priest, Elisa L.; Sinclair, Erin; Earnest, Conrad P.; Blair, Steven N.; Church, Timothy S.
2008-01-01
Purpose The INFLAME study is designed to determine the effect of exercise training on elevated high-sensitivity C-Reactive Protein (CRP) concentrations in initially sedentary women and men. Methods INFLAME will recruit 170 healthy, sedentary women and men with elevated CRP (≥2.0 mg/L) to be randomized to either an exercise group or non-exercise control group. Exercising individuals will participate in four months of supervised aerobic exercise with a total energy expenditure of 16 kcal·kg⁻¹·week⁻¹ (KKW). Exercise intensity will be 60–80% of maximal oxygen consumption (VO2 max). Outcome The primary outcome will be change in plasma CRP concentration. Secondary outcomes include visceral adiposity, the cytokines IL-6 and TNF-α, and heart rate variability (HRV) in order to examine potential biological mechanisms whereby exercise might affect CRP concentrations. Summary INFLAME will help us understand the effects of moderate to vigorous exercise on CRP concentrations in sedentary individuals. To our knowledge this will be the largest training study specifically designed to examine the effect of exercise on CRP concentrations. This study has the potential to influence therapeutic applications since CRP measurement is becoming an important clinical measurement in Coronary Heart Disease risk assessment. This study will also contribute to the limited body of literature examining the effect of exercise on the variables of visceral adiposity, cytokines, and heart rate variability. PMID:18024231
TIR collimator designs based on point source and extended source methods
NASA Astrophysics Data System (ADS)
Talpur, T.; Herkommer, A.
2015-09-01
TIR collimators are essential illumination components demanding high efficiency, accuracy, and uniformity. Various illumination design methods have been developed for different design domains, including the tailoring method, design via optimization, the mapping and feedback method, and the simultaneous multiple surface (SMS) method. This paper summarizes and compares the performance of these methods along with their advantages and limitations.
The Method of Complex Characteristics for Design of Transonic Compressors.
NASA Astrophysics Data System (ADS)
Bledsoe, Margaret Randolph
We calculate shockless transonic flows past two-dimensional cascades of airfoils characterized by a prescribed speed distribution. The approach is to find solutions of the partial differential equation (c² − u²)φ_xx − 2uv φ_xy + (c² − v²)φ_yy = 0 by the method of complex characteristics. Here φ is the velocity potential, so ∇φ = (u, v), and c is the local speed of sound. Our method consists in noting that the coefficients of the equation are analytic, so that we can use analytic continuation, conformal mapping, and a spectral method in the hodograph plane to determine the flow. After complex extension we obtain canonical equations for φ and for the stream function ψ, as well as an explicit map from the hodograph plane to complex characteristic coordinates. In the subsonic case, a new coordinate system is defined in which the flow region corresponds to the interior of an ellipse. We construct special solutions of the flow equations in these coordinates by solving characteristic initial value problems in the ellipse with initial data defined by the complete system of Chebyshev polynomials. The condition ψ = 0 on the boundary of the ellipse is used to determine the series representation of φ and ψ. The map from the ellipse to the complex flow coordinates is found from data specifying the speed q as a function of the arc length s. The transonic problem for shockless flow becomes well posed after appropriate modifications of this procedure. The nonlinearity of the problem is handled by an iterative method that determines the boundary value problem in the ellipse and the map function in sequence. We have implemented this method as a computer code to design two-dimensional cascades of shockless compressor airfoils with gap-to-chord ratios as low as 0.5 and supersonic zones on both the upper and lower surfaces. The method may be extended to solve more general boundary value problems for second order partial
A practical method for analyzing factorial designs with heteroscedastic data.
Vallejo, Guiillermo; Ato, Manuel; Fernández, M Paula; Livacic-Rojas, Pablo E
2008-06-01
The Type I error rates and powers of three recent tests for analyzing nonorthogonal factorial designs under departures from the assumptions of homogeneity and normality were evaluated using Monte Carlo simulation. Specifically, this work compared the performance of the modified Brown-Forsythe procedure, the generalization of Box's method proposed by Brunner, Dette, and Munk, and the mixed-model procedure adjusted by the Kenward-Roger solution available in the SAS statistical package. With regard to robustness, the three approaches adequately controlled Type I error when the data were generated from symmetric distributions; however, this study's results indicate that, when the data were extracted from asymmetric distributions, the modified Brown-Forsythe approach controlled the Type I error slightly better than the other procedures. With regard to sensitivity, the higher power rates were obtained when the analyses were done with the MIXED procedure of the SAS program. Furthermore, results also identified that, when the data were generated from symmetric distributions, little power was sacrificed by using the generalization of Box's method in place of the modified Brown-Forsythe procedure.
Rationale, design and methods of the CASHMERE study.
Simon, Tabassome; Boutouyrie, Pierre; Gompel, Anne; Christin-Maitre, Sophie; Laurent, Stéphane; Thuillez, Christian; Zannad, Faiez; Bernaud, Corine; Jaillon, Patrice
2004-02-01
Carotid intima-media thickness (IMT) measurement is a noninvasive method used for quantification of the early stage of atherosclerosis. Data suggest that the combination of a statin and hormone replacement therapy (HRT) might be useful in reducing the early progression of atherosclerosis in postmenopausal women. The main aim of the study is to compare the effects of 12-month therapy with atorvastatin (80 mg/day), HRT (oral 17beta-estradiol 1 or 2 mg/day, plus cyclic dydrogesterone 10 mg), alone and in combination, vs. placebo on the progression of carotid IMT by using a high-definition echotracking device. The secondary objectives are to assess the effects of the treatments vs. placebo on arterial stiffness, lipid profile and C-reactive protein. The CASHMERE trial is a European randomized study with a 2 x 2 factorial design, double-blind for atorvastatin, with the prospective randomized open blinded endpoint (PROBE) method applied to HRT. The investigators can adjust the dose of estradiol at any time during follow-up if necessary. A total of 800 postmenopausal women with mild hypercholesterolemia and no previous history of cardiovascular disease will be included and followed up by their physicians [general practitioners (GPs) or gynecologists] for 1 year. The CASHMERE trial is the first randomized clinical trial to examine the effects of a statin alone or combined with HRT on the structure and function of the carotid artery as early markers of atherosclerosis in postmenopausal women with mild hypercholesterolemia. The results are expected in 2007.
Design optimization methods for genomic DNA tiling arrays
Bertone, Paul; Trifonov, Valery; Rozowsky, Joel S.; Schubert, Falk; Emanuelsson, Olof; Karro, John; Kao, Ming-Yang; Snyder, Michael; Gerstein, Mark
2006-01-01
A recent development in microarray research entails the unbiased coverage, or tiling, of genomic DNA for the large-scale identification of transcribed sequences and regulatory elements. A central issue in designing tiling arrays is that of arriving at a single-copy tile path, as significant sequence cross-hybridization can result from the presence of non-unique probes on the array. Due to the fragmentation of genomic DNA caused by the widespread distribution of repetitive elements, the problem of obtaining adequate sequence coverage increases with the sizes of subsequence tiles that are to be included in the design. This becomes increasingly problematic when considering complex eukaryotic genomes that contain many thousands of interspersed repeats. The general problem of sequence tiling can be framed as finding an optimal partitioning of non-repetitive subsequences over a prescribed range of tile sizes, on a DNA sequence comprising repetitive and non-repetitive regions. Exact solutions to the tiling problem become computationally infeasible when applied to large genomes, but successive optimizations are developed that allow their practical implementation. These include an efficient method for determining the degree of similarity of many oligonucleotide sequences over large genomes, and two algorithms for finding an optimal tile path composed of longer sequence tiles. The first algorithm, a dynamic programming approach, finds an optimal tiling in linear time and space; the second applies a heuristic search to reduce the space complexity to a constant requirement. A Web resource has also been developed, accessible at http://tiling.gersteinlab.org, to generate optimal tile paths from user-provided DNA sequences. PMID:16365382
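The dynamic-programming idea behind the first algorithm can be illustrated on a simplified version of the problem: maximize the number of non-repetitive bases covered by non-overlapping tiles whose lengths lie in a prescribed range. The sketch below is a simplification of the paper's approach (it ignores probe-quality scoring), with all names our own.

```python
def optimal_tiling(mask, lmin, lmax):
    """Maximum number of non-repetitive bases coverable by non-overlapping
    tiles with lengths in [lmin, lmax] that avoid repetitive bases.
    mask[i] is True where base i is non-repetitive."""
    n = len(mask)
    clean = [0] * (n + 1)          # clean[i]: non-repetitive run ending at i-1
    for i in range(1, n + 1):
        clean[i] = clean[i - 1] + 1 if mask[i - 1] else 0
    best = [0] * (n + 1)
    for i in range(1, n + 1):
        best[i] = best[i - 1]      # option: no tile ends at position i
        for L in range(lmin, min(lmax, clean[i]) + 1):
            best[i] = max(best[i], best[i - L] + L)
    return best[n]
```

For a 10-base clean run, a 3-base repeat, and a 7-base clean run with tile lengths 5–8, the program covers all 17 clean bases (two 5-base tiles plus one 7-base tile). Recording the argmax choices alongside `best` would recover the tile path itself.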
Shimada, Masato; Suzuki, Wataru; Yamada, Shuho; Inoue, Masato
2016-01-01
To achieve a Universal Design, designers must consider diverse users' physical and functional requirements for their products. However, satisfying these requirements and obtaining the information necessary to design a universal product is very difficult. Therefore, we propose a new design method based on the concept of set-based design to solve these issues. This paper discusses the suitability of the proposed design method by applying it to a bicycle frame design problem.
Ollikainen, Noah; Smith, Colin A.; Fraser, James S.; Kortemme, Tanja
2013-01-01
Sampling alternative conformations is key to understanding how proteins work and engineering them for new functions. However, accurately characterizing and modeling protein conformational ensembles remains experimentally and computationally challenging. These challenges must be met before protein conformational heterogeneity can be exploited in protein engineering and design. Here, as a stepping stone, we describe methods to detect alternative conformations in proteins and strategies to model these near-native conformational changes based on backrub-type Monte Carlo moves in Rosetta. We illustrate how Rosetta simulations that apply backrub moves improve modeling of point mutant side chain conformations, native side chain conformational heterogeneity, functional conformational changes, tolerated sequence space, protein interaction specificity, and amino acid co-variation across protein-protein interfaces. We include relevant Rosetta command lines and RosettaScripts to encourage the application of these types of simulations to other systems. Our work highlights that critical scoring and sampling improvements will be necessary to approximate conformational landscapes. Challenges for the future development of these methods include modeling conformational changes that propagate away from designed mutation sites and modulating backbone flexibility to predictively design functionally important conformational heterogeneity. PMID:23422426
Visual Narrative Research Methods as Performance in Industrial Design Education
ERIC Educational Resources Information Center
Campbell, Laurel H.; McDonagh, Deana
2009-01-01
This article discusses teaching empathic research methodology as performance. The authors describe their collaboration in an activity to help undergraduate industrial design students learn empathy for others when designing products for use by diverse or underrepresented people. The authors propose that an industrial design curriculum would benefit…
Pseudo-Sibship Methods in the Case-Parents Design
Yu, Zhaoxia; Deng, Li
2013-01-01
Recent evidence suggests that complex traits are likely determined by multiple loci, each of which contributes a weak to moderate individual effect. Although extensive literature exists on multi-locus analysis of unrelated subjects, there are relatively fewer strategies for jointly analyzing multiple loci using family data. Here we address this issue by evaluating two pseudo-sibship methods: the 1:1 matching, which matches each affected offspring to the pseudo sibling formed by the alleles not transmitted to the affected offspring; and the exhaustive matching, which matches each affected offspring to the pseudo siblings formed by all the other possible combinations of parental alleles. We prove that the two matching strategies use exactly and approximately the same amount of information from data under additive and multiplicative genetic models, respectively. Using numerical calculations under a variety of models and testing assumptions, we show that compared to the exhaustive matching, the 1:1 matching has comparable asymptotic power in detecting multiplicative/additive effects in single-locus analysis and main effects in multi-locus analysis, and it allows association testing of multiple linked loci. These results pave the way for many existing multi-locus analysis methods developed for the case-control (or matched case-control) design to be applied to case-parents data with minor modifications. As an example, with the 1:1 matching, we applied an L1-regularized regression to a Crohn's disease dataset. Using the multiple loci selected by our approach, we obtained an order-of-magnitude decrease in p-value and an 18.9% increase in prediction accuracy compared with using the most significant individual locus. PMID:21953439
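The two matching schemes are easy to state concretely. The sketch below (illustrative, not the authors' code) builds, for one locus, the 1:1 pseudo-sibling from the two non-transmitted parental alleles and the exhaustive set from the remaining combinations of parental alleles; duplicate genotypes arising from homozygous parents are not deduplicated here.

```python
from itertools import product

def pseudo_siblings(father, mother, child):
    """father, mother: the two alleles of each parent; child: the
    transmitted pair in (paternal, maternal) order. Returns the 1:1
    pseudo-sibling (non-transmitted alleles) and the exhaustive set
    (all other combinations of parental alleles)."""
    pf, pm = child
    non_f = [a for a in father if a != pf] or [pf]  # homozygous parent case
    non_m = [a for a in mother if a != pm] or [pm]
    one_to_one = (non_f[0], non_m[0])
    exhaustive = [g for g in product(father, mother) if g != (pf, pm)]
    return one_to_one, exhaustive
```

For two heterozygous parents the exhaustive set has exactly three pseudo-siblings and contains the 1:1 pseudo-sibling, which is why the 1:1 scheme can be seen as a subset of the exhaustive one.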
ERIC Educational Resources Information Center
Honebein, Peter C.
2017-01-01
An instructional designer's values about instructional methods can be a curse or a cure. On one hand, a designer's love affair for a method may cause them to use that method in situations that are not appropriate. On the other hand, that same love affair may inspire a designer to fight for a method when those in power are willing to settle for a…
A New Approach to Comparing Several Equating Methods in the Context of the NEAT Design
ERIC Educational Resources Information Center
Sinharay, Sandip; Holland, Paul W.
2010-01-01
The nonequivalent groups with anchor test (NEAT) design involves missing data that are missing by design. Three equating methods that can be used with a NEAT design are the frequency estimation equipercentile equating method, the chain equipercentile equating method, and the item-response-theory observed-score-equating method. We suggest an…
Organization method for urban planning design data based on GIS
NASA Astrophysics Data System (ADS)
Gao, Huijun; Guo, Dazhi; Zhang, Hao; Tian, Ran
2006-10-01
GIS and CAD are two different areas of computer technology and applications, each widely used in city planning, design and administration. Taking the city-planning data management of Ningbo, Zhejiang province as an example, in which the basic data have been loaded into the database of an urban planning management information system, this paper puts forward a GIS-based organization solution and format standard for planning and design data, which follows designers' usual practice and the requirements for design data prescribed by the urban planning administration. The approach has worked well in the Ningbo urban planning and design management system.
Method for computationally efficient design of dielectric laser accelerator structures
Hughes, Tyler; Veronis, Georgios; Wootton, Kent P.; ...
2017-06-22
Here, dielectric microstructures have generated much interest in recent years as a means of accelerating charged particles when powered by solid state lasers. The acceleration gradient (or particle energy gain per unit length) is an important figure of merit. To design structures with high acceleration gradients, we explore the adjoint variable method, a highly efficient technique used to compute the sensitivity of an objective with respect to a large number of parameters. With this formalism, the sensitivity of the acceleration gradient of a dielectric structure with respect to its entire spatial permittivity distribution is calculated by the use of only two full-field electromagnetic simulations, the original and ‘adjoint’. The adjoint simulation corresponds physically to the reciprocal situation of a point charge moving through the accelerator gap and radiating. Using this formalism, we perform numerical optimizations aimed at maximizing acceleration gradients, which generate fabricable structures of greatly improved performance in comparison to previously examined geometries.
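The "two simulations" economy of the adjoint variable method can be shown on a toy linear model rather than full-field electromagnetics. Assuming an objective J = c·x with A(p)x = b, one forward solve plus one adjoint solve Aᵀλ = c gives dJ/dp = −λ·(∂A/∂p)x for any number of parameters p; everything below is an illustrative stand-in for the electromagnetic problem, not the paper's code.

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def A_of(p):
    """Toy parameterized system matrix (not an electromagnetic model)."""
    return [[2.0 + p, 1.0], [1.0, 3.0]]

def objective(p, b=(1.0, 2.0), c=(1.0, -1.0)):
    x = solve2(A_of(p), list(b))
    return c[0] * x[0] + c[1] * x[1]

def adjoint_gradient(p, b=(1.0, 2.0), c=(1.0, -1.0)):
    """dJ/dp for J = c.x with A(p)x = b, via one forward and one adjoint
    solve: A^T lam = c, then dJ/dp = -lam . (dA/dp) x."""
    A = A_of(p)
    x = solve2(A, list(b))
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    lam = solve2(At, list(c))
    dA = [[1.0, 0.0], [0.0, 0.0]]       # dA/dp for this parameterization
    dAx = [dA[0][0] * x[0] + dA[0][1] * x[1],
           dA[1][0] * x[0] + dA[1][1] * x[1]]
    return -(lam[0] * dAx[0] + lam[1] * dAx[1])
```

The key point carries over: the cost of the gradient is two solves regardless of how many parameters describe A, which is what makes optimizing an entire permittivity distribution tractable.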
The design method of a dam on gravel stream
Ni, W.B.; Wu, S.J.; Huang, C.Y.
1995-12-31
Due to the intense demand for electricity and water supply in the past decades, a large number of dams, reservoirs and mobile barrages have been completed in Taiwan. These hydraulic structures occupy almost all of the sound rock foundations with little overburden, which means that future structures will have to cope with deep overburden. Special considerations should be taken to overcome the difficulties of watertightness and structural stability. A case study is presented in this paper: a dam built for hydropower generation and water supply, constructed on a gravel stream with 40 m of overburden. The design method of this dam is discussed. Curtain grouting is performed to reduce the high permeability of the gravel to an acceptable level. Caissons are chosen as the structural foundations to support the heavy loads of the dam and to reduce the difficulty of curtain grouting. Another problem for a dam built on a gravel stream is abrasion and erosion damage to the stilling basin slabs, the sluiceway aprons and the spillway aprons. Discussions of abrasion-erosion resistant materials are also given in this paper.
An entropy method for floodplain monitoring network design
NASA Astrophysics Data System (ADS)
Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, Paul D.
2012-09-01
In recent years an increasing number of flood-related fatalities has highlighted the necessity of improving flood risk management to reduce human and economic losses. In this framework, monitoring of flood-prone areas is a key factor for building a resilient environment. In this paper a method for designing a floodplain monitoring network is presented. A redundant network of cheap wireless sensors (GridStix) measuring water depth is considered over a reach of the River Dee (UK), with sensors placed both in the channel and in the floodplain. Through a Three Objective Optimization Problem (TOOP) the best layouts of sensors are evaluated, minimizing their redundancy, maximizing their joint information content and maximizing the accuracy of the observations. A simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages. The Digital Elevation Model (DEM) that is used for hydraulic model building is the globally and freely available SRTM DEM.
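The joint-information objective in the TOOP can be illustrated with a greedy sketch: quantize each sensor's water-depth series, then repeatedly add the sensor that most increases the joint Shannon entropy of the selected set. This addresses only one of the paper's three objectives, and the names and the greedy strategy are our own, not the paper's optimization.

```python
from collections import Counter
import math

def joint_entropy(series):
    """Shannon entropy (bits) of the joint distribution of the given
    quantized sensor series (simultaneous readings form the symbols)."""
    joint = list(zip(*series))
    counts = Counter(joint)
    n = len(joint)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def greedy_select(all_series, k):
    """Pick k sensors greedily, each step adding the one that raises the
    joint entropy of the selected set the most (redundancy and accuracy
    terms of the three-objective problem are omitted here)."""
    chosen = []
    remaining = list(range(len(all_series)))
    for _ in range(k):
        best = max(remaining,
                   key=lambda j: joint_entropy([all_series[i] for i in chosen]
                                               + [all_series[j]]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Because a duplicate sensor adds no joint entropy, this criterion naturally skips redundant gauges, which is the intuition behind combining information content with a redundancy penalty in the full three-objective formulation.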
Design and methods of the national Vietnam veterans longitudinal study.
Schlenger, William E; Corry, Nida H; Kulka, Richard A; Williams, Christianna S; Henn-Haase, Clare; Marmar, Charles R
2015-09-01
The National Vietnam Veterans Longitudinal Study (NVVLS) is the second assessment of a representative cohort of US veterans who served during the Vietnam War era, either in Vietnam or elsewhere. The cohort was initially surveyed in the National Vietnam Veterans Readjustment Study (NVVRS) from 1984 to 1988 to assess the prevalence, incidence, and effects of post-traumatic stress disorder (PTSD) and other post-war problems. The NVVLS sought to re-interview the cohort to assess the long-term course of PTSD. NVVLS data collection began July 3, 2012 and ended May 17, 2013, comprising three components: a mailed health questionnaire, a telephone health survey interview, and, for a probability sample of theater Veterans, a clinical diagnostic telephone interview administered by licensed psychologists. Excluding decedents, 78.8% completed the questionnaire and/or telephone survey, and 55.0% of selected living veterans participated in the clinical interview. This report provides a description of the NVVLS design and methods. Together, the NVVRS and NVVLS constitute a nationally representative longitudinal study of Vietnam veterans, and extend the NVVRS as a critical resource for scientific and policy analyses for Vietnam veterans, with policy relevance for Iraq and Afghanistan veterans.
A decision-based perspective for the design of methods for systems design
NASA Technical Reports Server (NTRS)
Mistree, Farrokh; Muster, Douglas; Shupe, Jon A.; Allen, Janet K.
1989-01-01
Organization of material, a definition of decision based design, a hierarchy of decision based design, the decision support problem technique, a conceptual model design that can be manufactured and maintained, meta-design, computer-based design, action learning, and the characteristics of decisions are among the topics covered.
Active cooling design for scramjet engines using optimization methods
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.; Martin, Carl J.; Lucas, Stephen H.
1988-01-01
A methodology for using optimization in designing metallic cooling jackets for scramjet engines is presented. The optimal design minimizes the required coolant flow rate subject to temperature, mechanical-stress, and thermal-fatigue-life constraints on the cooling-jacket panels, and Mach-number and pressure constraints on the coolant exiting the panel. The analytical basis for the methodology is presented, and results for the optimal design of panels are shown to demonstrate its utility.
Genetic algorithm-based design method for multilevel anisotropic diffraction gratings
NASA Astrophysics Data System (ADS)
Okamoto, Hiroyuki; Noda, Kohei; Sakamoto, Moritsugu; Sasaki, Tomoyuki; Wada, Yasuhiro; Kawatsuki, Nobuhiro; Ono, Hiroshi
2017-08-01
We developed a method for the design of multilevel anisotropic diffraction gratings based on a genetic algorithm. The method is used to design the multilevel anisotropic diffraction gratings based on input data that represent the output from the required grating. The validity of the proposed method was evaluated by designing a multilevel anisotropic diffraction grating using the outputs from an orthogonal circular polarization grating. The design results corresponded to the orthogonal circular polarization grating structures that were used to provide outputs to act as the input data for the process. Comparison with existing design methods shows that the proposed method can reduce the number of human processes that are required to design multilevel anisotropic diffraction gratings. Additionally, the method will be able to design complex structures without any requirement for subsequent examination by a human designer. The method can contribute to the development of optical elements by designing multilevel anisotropic diffraction gratings.
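A genetic-algorithm design loop of the kind described above can be sketched minimally. The fitness model below (matching a target profile of discrete levels) is a synthetic stand-in; a real grating design would score diffraction outputs computed from the candidate structure:

```python
import random

# Minimal genetic algorithm: evolve a multilevel profile (a list of discrete
# phase levels) toward one whose "output" matches a target. TARGET and the
# fitness function are illustrative placeholders, not a physical grating model.

random.seed(0)
LEVELS, LENGTH = 4, 16
TARGET = [i % LEVELS for i in range(LENGTH)]  # hypothetical desired profile

def fitness(genome):
    return sum(a == b for a, b in zip(genome, TARGET))  # positions matching target

def evolve(pop_size=40, generations=60, mut_rate=0.05):
    pop = [[random.randrange(LEVELS) for _ in range(LENGTH)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, LENGTH)
            child = a[:cut] + b[cut:]           # one-point crossover
            child = [random.randrange(LEVELS) if random.random() < mut_rate else v
                     for v in child]            # per-gene mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "/", LENGTH)
```

The appeal noted in the abstract, fewer human design steps, comes from the fact that only the fitness function encodes the design goal; the search itself is automatic.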
Overview: Applications of numerical optimization methods to helicopter design problems
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
There are a number of helicopter design problems that are well suited to applications of numerical design optimization techniques, and adequate implementation of this technology will provide high pay-offs. A number of numerical optimization programs are available, and many excellent response/performance analysis programs have been developed or are being developed, but integrating these programs in a form that is usable in the design phase should be recognized as important. It is also necessary to attract the attention of engineers engaged in the development of analysis capabilities and to make them aware that analysis capabilities are much more powerful if integrated into design-oriented codes. Frequently, the shortcomings of analysis capabilities are revealed by coupling them with an optimization code. Most of the published work has addressed problems in preliminary system design, rotor system/blade design or airframe design; very few published results were found in acoustics, aerodynamics and control system design. Currently major efforts are focused on vibration reduction, and aerodynamics/acoustics applications appear to be growing fast. The development of a computer program system that integrates the multiple disciplines required in helicopter design with numerical optimization techniques is needed. Activities in Britain, Germany and Poland are identified, but no published results from France, Italy, the USSR or Japan were found.
Investigating a Method of Scaffolding Student-Designed Experiments
NASA Astrophysics Data System (ADS)
Morgan, Kelly; Brooks, David W.
2012-08-01
The process of designing an experiment is a difficult one. Students often struggle to perform such tasks, as the design process places a large cognitive load on them. Scaffolding is the process of providing support that allows a student to complete tasks they would otherwise not have been able to complete. This study investigated backwards design, one form of scaffolding the experimental design process for students. Students were guided through the design process in a backwards manner (designing the results section first and working backwards through typical report components to the materials and safety sections). The use of reflective prompts as a possible scaffold for metacognitive processes was also studied. Scaffolding took the form of a computer application built specifically for this purpose. Four versions of the application were randomly assigned to 102 high school chemistry students, who were asked to design an experiment and produce a report. The use of backwards-design scaffolding resulted in significantly higher performance on lab reports. The addition of reflective prompts reduced the effect of backwards-design scaffolding in lower-level students.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1998-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimization (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behavior by the interaction of a large number of very simple models may be an inspiration for such algorithms; cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1999-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
Applications of numerical optimization methods to helicopter design problems: A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
A survey is presented of applications of mathematical programming methods to improve the design of helicopters and their components. Applications of multivariable search techniques in the finite dimensional space are considered. Five categories of helicopter design problems are considered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
Sjøberg, C; Timpka, T
1995-01-01
This paper reports on a qualitative study using an argumentation-based design method (Argumentative Design) in the development of clinical software systems. The method, which requires visualization of the underlying design goals, the specific needs-for-change, and the probable consequences of the alternative design measures, caused previously implicit argument structures to be exposed and discussed. This uncovering of hidden agendas also revealed previously implicit coalitions and organizational influences on the design process. Implications for software development practices in medical informatics are discussed.
Stillbirth Collaborative Research Network: design, methods and recruitment experience.
Parker, Corette B; Hogue, Carol J R; Koch, Matthew A; Willinger, Marian; Reddy, Uma M; Thorsten, Vanessa R; Dudley, Donald J; Silver, Robert M; Coustan, Donald; Saade, George R; Conway, Deborah; Varner, Michael W; Stoll, Barbara; Pinar, Halit; Bukowski, Radek; Carpenter, Marshall; Goldenberg, Robert
2011-09-01
The Stillbirth Collaborative Research Network (SCRN) has conducted a multisite, population-based, case-control study, with prospective enrollment of stillbirths and livebirths at the time of delivery. This paper describes the general design, methods and recruitment experience. The SCRN attempted to enroll all stillbirths and a representative sample of livebirths occurring to residents of pre-defined geographical catchment areas delivering at 59 hospitals associated with five clinical sites. Livebirths <32 weeks gestation and women of African descent were oversampled. The recruitment hospitals were chosen to ensure access to at least 90% of all stillbirths and livebirths to residents of the catchment areas. Participants underwent a standardised protocol including maternal interview, medical record abstraction, placental pathology, biospecimen testing and, in stillbirths, post-mortem examination. Recruitment began in March 2006 and was completed in September 2008 with 663 women with a stillbirth and 1932 women with a livebirth enrolled, representing 69% and 63%, respectively, of the women identified. Additional surveillance for stillbirths continued until June 2009 and a follow-up of the case-control study participants was completed in December 2009. Among consenting women, there were high consent rates for the various study components. For the women with stillbirths, 95% agreed to a maternal interview, chart abstraction and a placental pathological examination; 91% of the women with a livebirth agreed to all of these components. Additionally, 84% of the women with stillbirths agreed to a fetal post-mortem examination. This comprehensive study is poised to systematically study a wide range of potential causes of, and risk factors for, stillbirths and to better understand the scope and incidence of the problem.
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
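The extreme-value idea above can be made concrete with a small sketch: if the largest load per mission follows a Gumbel distribution, the design limit load is a chosen quantile of that distribution. The location and scale values below are invented for illustration:

```python
import math

# Design limit load as a quantile of an extreme-value (Gumbel) distribution.
# Gumbel CDF: F(x) = exp(-exp(-(x - mu) / beta)); inverting it gives the
# quantile formula below. mu and beta here are hypothetical, in load units.

def gumbel_quantile(p, mu, beta):
    """Load not exceeded with probability p (inverse Gumbel CDF)."""
    return mu - beta * math.log(-math.log(p))

mu, beta = 100.0, 8.0                            # illustrative location/scale
design_limit = gumbel_quantile(0.99, mu, beta)   # 99th-percentile mission load
print(round(design_limit, 2))
```

In practice mu and beta would be fitted to the simulated per-mission maxima from the time- or frequency-domain load simulations the abstract mentions.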
METHODS FOR INTEGRATING ENVIRONMENTAL CONSIDERATIONS INTO CHEMICAL PROCESS DESIGN DECISIONS
The objective of this cooperative agreement was to postulate a means by which an engineer could routinely include environmental considerations in day-to-day conceptual design problems; a means that could easily integrate with existing design processes, and thus avoid massive retr...
Teaching Improvement Model Designed with DEA Method and Management Matrix
ERIC Educational Resources Information Center
Montoneri, Bernard
2014-01-01
This study uses student evaluation of teachers to design a teaching improvement matrix based on teaching efficiency and performance by combining management matrix and data envelopment analysis. This matrix is designed to formulate suggestions to improve teaching. The research sample consists of 42 classes of freshmen following a course of English…
Preliminary design of pseudo satellites: Basic methods and feasibility criteria
NASA Astrophysics Data System (ADS)
Klimenko, N. N.
2016-12-01
Analytical models of weight and energy balances, aerodynamic models, and solar irradiance models for pseudo-satellite preliminary design are presented. Feasibility criteria are determined in accordance with the aim of the preliminary design, depending on the mission scenario and type of payload.
Developing Baby Bag Design by Using Kansei Engineering Method
NASA Astrophysics Data System (ADS)
Janari, D.; Rakhmawati, A.
2016-01-01
Consumer preferences and market demand are essential factors for a product's success. Thus, to achieve success, a product should have a design that fulfills consumer expectations. The purpose of this research is to develop a baby bag product as stipulated by Kansei. The Kansei words that represent the results are: neat, unique, comfortable, safe, modern, gentle, elegant, antique, attractive, simple, spacious, creative, colorful, durable, stylish, smooth and strong. The identified significance of correlation for the durable attribute is 0.000 < 0.005, which means it is significant to the baby's bag, while the regression coefficient value is 0.812, which means the durable attribute is insignificant in the regression for the baby's bag. The final baby bag design, selected on the basis of questionnaire 3, combines elements of all the designs: the clothes space, diaper space, shoulder grip, side grip, bottle-heater pocket and bottle pocket are derived from design 1; the top grip, clothes space, shoulder grip and side grip from design 2; the clothes space from design 3; and the diaper space and clothes space from design 4.
NASA Astrophysics Data System (ADS)
Sasaki, Ai-ichiro; Furuya, Akinori; Hirata, Akihiko; Morimura, Hiroki; Kodate, Junichi
2017-09-01
A systematic design method is considered for maximizing the sensitivity of electrooptic sensors used for electric-field detection. The design method can be reduced to a routine procedure that includes matrix manipulation and differentiation. By applying the design method, the maximum sensitivity is realized with fewer optical components than in conventional electrooptic sensing systems. Since the proposed method shows a wide generality, it can be applied to designing sensors including various optical crystals.
Development of Combinatorial Methods for Alloy Design and Optimization
Pharr, George M.; George, Easo P.; Santella, Michael L
2005-07-01
The primary goal of this research was to develop a comprehensive methodology for designing and optimizing metallic alloys by combinatorial principles. Because conventional techniques for alloy preparation are unavoidably restrictive in the range of alloy composition that can be examined, combinatorial methods promise to significantly reduce the time, energy, and expense needed for alloy design. Combinatorial methods can be developed not only to optimize existing alloys, but to explore and develop new ones as well. The scientific approach involved fabricating an alloy specimen with a continuous distribution of binary and ternary alloy compositions across its surface--an 'alloy library'--and then using spatially resolved probing techniques to characterize its structure, composition, and relevant properties. The three specific objectives of the project were: (1) to devise means by which simple test specimens with a library of alloy compositions spanning the range of interest can be produced; (2) to assess how well the properties of the combinatorial specimen reproduce those of the conventionally processed alloys; and (3) to devise screening tools which can be used to rapidly assess the important properties of the alloys. As proof of principle, the methodology was applied to the Fe-Ni-Cr ternary alloy system that constitutes many commercially important materials such as stainless steels and the H-series and C-series heat and corrosion resistant casting alloys. Three different techniques were developed for making alloy libraries: (1) vapor deposition of discrete thin films on an appropriate substrate and then alloying them together by solid-state diffusion; (2) co-deposition of the alloying elements from three separate magnetron sputtering sources onto an inert substrate; and (3) localized melting of thin films with a focused electron-beam welding system. Each of the techniques was found to have its own advantages and disadvantages. A new and very powerful technique for
Parametric design of a Francis turbine runner by means of a three-dimensional inverse design method
NASA Astrophysics Data System (ADS)
Daneshkah, K.; Zangeneh, M.
2010-08-01
The present paper describes the parametric design of a Francis turbine runner. The runner geometry is parameterized by means of a 3D inverse design method, while CFD analyses were performed to assess the hydrodynamic and suction performance of the different design configurations that were investigated. An initial runner design was first generated and used as the baseline for the parametric study. The effects of several design parameters, namely stacking condition and blade loading, were then investigated in order to determine their effect on the suction performance. The use of blade parameterization via the inverse method leads to a major advantage for the design of Francis turbine runners, as the three-dimensional blade shape is described by parameters that are closely related to the flow field, namely blade loading and stacking condition, which have a direct impact on the hydrodynamics of the flow field. On the basis of this study, an optimum configuration was designed which results in cavitation-free flow in the runner while maintaining a high level of hydraulic efficiency. The paper highlights design guidelines for the application of the inverse design method to Francis turbine runners. The design guidelines have general validity and can be used for similar design applications, since they are based on flow field analyses and on hydrodynamic design parameters.
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work, existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry; designing a dual foil system with them is a rather labor-intensive task, as corrections to account for effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automatized scan of the system performance as a function of the parameters of the foils. The new method, while computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
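The automated scan described in the abstract above can be sketched as a grid search over the foil parameters, keeping the configuration with the best figure of merit. The merit function and all numbers below are synthetic placeholders; the real method scores Monte Carlo simulations in a realistic geometry:

```python
# Systematic scan over two foil thicknesses [mm]: evaluate a figure of merit
# at every grid point and return the best pair. The quadratic merit function
# (peaked at t1=0.3, t2=1.2) is an invented stand-in for simulation results.

def merit(t1, t2):
    """Hypothetical flatness-vs-loss trade-off, best at (0.3, 1.2)."""
    return -((t1 - 0.3) ** 2 + 0.25 * (t2 - 1.2) ** 2)

def scan(t1_range, t2_range):
    best = max((merit(a, b), a, b) for a in t1_range for b in t2_range)
    return best[1], best[2]

t1s = [round(0.1 + 0.05 * i, 2) for i in range(9)]   # 0.10 .. 0.50 mm
t2s = [round(0.8 + 0.10 * i, 1) for i in range(9)]   # 0.8 .. 1.6 mm
print(scan(t1s, t2s))
```

The point of the approach is that each grid evaluation can be an expensive but fully realistic simulation, so no analytical-model corrections are needed afterwards.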
Application of optimization methods to helicopter rotor blade design
NASA Technical Reports Server (NTRS)
Chattopadhyay, A.; Walsh, J. L.
1990-01-01
A procedure for the minimum weight design of helicopter rotor blades with constraints on multiple coupled flap-lag natural frequencies, autorotational inertia, and centrifugal stress is presented. Optimum designs are obtained for blades with both rectangular and tapered planforms and are compared with a reference blade. The effects of higher-frequency constraints and stress constraints on the optimum blade designs are assessed. The results indicate that there is an increase in blade weight and a significant change in the design variable distributions with an increase in the number of frequency constraints. The inclusion of stress constraints has different effects on the wall thickness distributions of rectangular and tapered blades, but tends to increase the magnitude of the nonstructural segment weight distributions for both blade types.
Third order TRANSPORT with MAD (Methodical Accelerator Design) input
Carey, D.C.
1988-09-20
This paper describes computer-aided design codes for particle accelerators. Among the topics discussed are: input beam description; parameters and algebraic expressions; the physical elements; beam lines; operations; and third-order transfer matrix. (LSP)
Outline of Methods for Design of Superconducting Turbogenerators,
1983-08-18
...basic design magnitudes of superconducting generators on the basis of a synthesis of the results of analyses of individual phenomena. Description of... to a machine with the design envisioned for this type of converter in the output range 0.3 to 3 GVA. Fig. 1 shows the basic elements of the electromagnetic system and their mutual arrangement. The main purpose of using the algorithm is the determination of basic geometric dimensions which...
Development of panel methods for subsonic analysis and design
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1980-01-01
Two computer programs developed for subsonic inviscid analysis and design are described. The first solves arbitrary mixed analysis/design problems for multielement airfoils in two-dimensional flow. The second calculates the pressure distribution for arbitrary lifting or nonlifting three-dimensional configurations. In each program, inviscid flow is modelled using distributed source-doublet singularities on configuration surface panels. Numerical formulations and representative solutions are presented for both programs.
An On-Board Diagnosis Logic and Its Design Method
NASA Astrophysics Data System (ADS)
Hiratsuka, Satoshi; Fusaoka, Akira
In this paper, we propose a design methodology for the on-board diagnosis engine of embedded systems. A Boolean function for the diagnosis circuit can be mechanically designed from the system dynamics, given by a linear differential equation, if the system is observable and if the relation between the set of abnormal physical parameters and the faulty part is given. The diagnosis circuit is small enough to be implemented in an FPGA or fabricated as a simple chip.
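The observability precondition mentioned above has a standard check: the pair (A, C) is observable iff the observability matrix [C; CA; ...; CA^(n-1)] has full rank. The toy matrices below are illustrative, not from the paper:

```python
# Observability test for a linear system x' = Ax, y = Cx, using the rank of
# the stacked observability matrix. Plain-list matrices keep this stdlib-only.

def mat_mul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def rank(M, eps=1e-9):
    """Rank via Gauss-Jordan elimination with partial pivoting."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if abs(M[i][c]) > eps), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def observable(A, C):
    rows, CAk = [], C
    for _ in range(len(A)):       # stack C, CA, ..., CA^(n-1)
        rows += CAk
        CAk = mat_mul(CAk, A)
    return rank(rows) == len(A)

A = [[0.0, 1.0], [-2.0, -3.0]]    # toy second-order dynamics
C = [[1.0, 0.0]]                  # measure only the first state
print(observable(A, C))           # True: one sensor still sees both states
```

When the check fails, no static diagnosis logic of the kind described can recover the hidden state, which is why observability appears as an explicit condition in the abstract.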
Aircraft design for mission performance using nonlinear multiobjective optimization methods
NASA Technical Reports Server (NTRS)
Dovi, Augustine R.; Wrenn, Gregory A.
1990-01-01
A new technique that converts a constrained optimization problem into an unconstrained one, in which conflicting figures of merit may be considered simultaneously, was combined with a complex mission analysis system. The method is compared with existing single- and multiobjective optimization methods. A primary benefit of this new method for multiobjective optimization is the elimination of separate optimizations for each objective, which is required by some optimization methods. A typical wide-body transport aircraft is used for the comparative studies.
A Preliminary Rubric Design to Evaluate Mixed Methods Research
ERIC Educational Resources Information Center
Burrows, Timothy J.
2013-01-01
With the increase in frequency of the use of mixed methods, both in research publications and in externally funded grants there are increasing calls for a set of standards to assess the quality of mixed methods research. The purpose of this mixed methods study was to conduct a multi-phase analysis to create a preliminary rubric to evaluate mixed…
Flexible Backbone Methods for Predicting and Designing Peptide Specificity.
Ollikainen, Noah
2017-01-01
Protein-protein interactions play critical roles in essentially every cellular process. These interactions are often mediated by protein interaction domains that enable proteins to recognize their interaction partners, often by binding to short peptide motifs. For example, PDZ domains, which are among the most common protein interaction domains in the human proteome, recognize specific linear peptide sequences that are often at the C-terminus of other proteins. Determining the set of peptide sequences that a protein interaction domain binds, or its "peptide specificity," is crucial for understanding its cellular function, and predicting how mutations impact peptide specificity is important for elucidating the mechanisms underlying human diseases. Moreover, engineering novel cellular functions for synthetic biology applications, such as the biosynthesis of biofuels or drugs, requires the design of protein interaction specificity to avoid crosstalk with native metabolic and signaling pathways. The ability to accurately predict and design protein-peptide interaction specificity is therefore critical for understanding and engineering biological function. One approach that has recently been employed toward accomplishing this goal is computational protein design. This chapter provides an overview of recent methodological advances in computational protein design and highlights examples of how these advances can enable increased accuracy in predicting and designing peptide specificity.
Multiple methods integration for structural mechanics analysis and design
NASA Technical Reports Server (NTRS)
Housner, J. M.; Aminpour, M. A.
1991-01-01
A new research area of multiple methods integration is proposed for joining diverse methods of structural mechanics analysis which interact with one another. Three categories of multiple methods are defined: those in which a physical interface is well defined; those in which a physical interface is not well defined but is selected; and those in which the interface is a mathematical transformation. Two fundamental integration procedures are presented that can be extended to integrate various methods (e.g., finite elements, Rayleigh Ritz, Galerkin, and integral methods) with one another. Since the finite element method will likely be the major method to be integrated, its enhanced robustness under element distortion is also examined and a new robust shell element is demonstrated.
Statistical Methods for Rapid Aerothermal Analysis and Design Technology
NASA Technical Reports Server (NTRS)
Morgan, Carolyn; DePriest, Douglas; Thompson, Richard (Technical Monitor)
2002-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to establish statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The research work was focused on establishing the suitable mathematical/statistical models for these purposes. It is anticipated that the resulting models can be incorporated into a software tool to provide rapid, variable-fidelity, aerothermal environments to predict heating along an arbitrary trajectory. This work will support development of an integrated design tool to perform automated thermal protection system (TPS) sizing and material selection.
A design method for an intuitive web site
Quinniey, M.L.; Diegert, K.V.; Baca, B.G.; Forsythe, J.C.; Grose, E.
1999-11-03
The paper describes a methodology for designing a web site for human factors engineers that is applicable to designing a web site for any group of people. Many web pages on the World Wide Web are not organized in a format that allows a user to find information efficiently. Often the information and hypertext links on web pages are not organized into intuitive groups. Intuition implies that a person is able to use their knowledge of a paradigm to solve a problem. Intuitive groups are categories that allow web page users to find information by using their intuition or mental models of categories. To improve human factors engineers' efficiency in finding information on the World Wide Web, research was performed to develop a web site that serves as a tool for finding information effectively. The paper describes a methodology for designing a web site for a group of people who perform similar tasks in an organization.
Approximation methods for combined thermal/structural design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Shore, C. P.
1979-01-01
Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
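The direct and reciprocal first-order expansions compared in the abstract can be written side by side. The sketch below uses an invented response T(x) = 100/x, chosen because stress- and temperature-like quantities often vary roughly as the reciprocal of a sizing variable; for such a response the reciprocal expansion is exact while the direct one is not. The function and numbers are illustrative, not the paper's examples.

```python
def direct_approx(f0, grad, x0, x):
    """First-order Taylor expansion in the variables x_i."""
    return f0 + sum(g * (xi - xi0) for g, xi, xi0 in zip(grad, x, x0))

def reciprocal_approx(f0, grad, x0, x):
    """First-order expansion in the reciprocal variables 1/x_i."""
    return f0 + sum(g * xi0 * (1.0 - xi0 / xi)
                    for g, xi, xi0 in zip(grad, x, x0))

# Invented response that is exactly reciprocal in x: T(x) = 100/x.
T = lambda x: 100.0 / x
x0, x1 = [2.0], [2.5]
f0 = T(x0[0])                     # 50.0
grad = [-100.0 / x0[0] ** 2]      # dT/dx at x0, i.e. -25.0
exact = T(x1[0])                  # 40.0
d = direct_approx(f0, grad, x0, x1)
r = reciprocal_approx(f0, grad, x0, x1)
```

Both approximations use the same derivative information at x0; only the choice of expansion variable differs, which is why members of this family can have very different accuracy on the same problem.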
A method for designing robust multivariable feedback systems
NASA Technical Reports Server (NTRS)
Milich, David Albert; Athans, Michael; Valavani, Lena; Stein, Gunter
1988-01-01
A new methodology is developed for the synthesis of linear, time-invariant (LTI) controllers for multivariable LTI systems. The aim is to achieve stability and performance robustness of the feedback system in the presence of multiple unstructured uncertainty blocks; i.e., to satisfy a frequency-domain inequality in terms of the structured singular value. The design technique is referred to as the Causality Recovery Methodology (CRM). Starting with an initial (nominally) stabilizing compensator, the CRM produces a closed-loop system whose performance-robustness is at least as good as, and hopefully superior to, that of the original design. The robustness improvement is obtained by solving an infinite-dimensional, convex optimization program. A finite-dimensional implementation of the CRM was developed, and it was applied to a multivariate design example.
Computational RNA secondary structure design: empirical complexity and improved methods
Aguirre-Hernández, Rosalía; Hoos, Holger H; Condon, Anne
2007-01-01
Background We investigate the empirical complexity of the RNA secondary structure design problem, that is, the scaling of the typical difficulty of the design task for various classes of RNA structures as the size of the target structure is increased. The purpose of this work is to understand better the factors that make RNA structures hard to design for existing, high-performance algorithms. Such understanding provides the basis for improving the performance of one of the best algorithms for this problem, RNA-SSD, and for characterising its limitations. Results To gain insights into the practical complexity of the problem, we present a scaling analysis on random and biologically motivated structures using an improved version of the RNA-SSD algorithm, and also the RNAinverse algorithm from the Vienna package. Since primary structure constraints are relevant for designing RNA structures, we also investigate the correlation between the number and the location of the primary structure constraints when designing structures and the performance of the RNA-SSD algorithm. The scaling analysis on random and biologically motivated structures supports the hypothesis that the running time of both algorithms scales polynomially with the size of the structure. We also found that the algorithms are in general faster when constraints are placed only on paired bases in the structure. Furthermore, we prove that, according to the standard thermodynamic model, for some structures that the RNA-SSD algorithm was unable to design, there exists no sequence whose minimum free energy structure is the target structure. Conclusion Our analysis helps to better understand the strengths and limitations of both the RNA-SSD and RNAinverse algorithms, and suggests ways in which the performance of these algorithms can be further improved. PMID:17266771
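RNA-SSD itself searches under the standard thermodynamic model; as a much cruder illustration of the underlying stochastic-local-search idea, the sketch below mutates a random sequence and keeps moves that do not lower a toy score, namely the number of target base pairs realized by canonical pairings. The scoring function, step count, and seed are stand-ins, not the published algorithm.

```python
import random

def pairs(dotbracket):
    """Index pairs (i, j) implied by a dot-bracket target structure."""
    stack, out = [], []
    for i, c in enumerate(dotbracket):
        if c == '(':
            stack.append(i)
        elif c == ')':
            out.append((stack.pop(), i))
    return out

CANONICAL = {('A', 'U'), ('U', 'A'), ('G', 'C'), ('C', 'G'),
             ('G', 'U'), ('U', 'G')}

def score(seq, target_pairs):
    """Toy objective: target pairs realized by canonical pairings
    (a crude stand-in for the minimum-free-energy criterion)."""
    return sum((seq[i], seq[j]) in CANONICAL for i, j in target_pairs)

def design(target, steps=2000, seed=0):
    """Stochastic local search: keep point mutations that do not
    lower the score, reject the rest."""
    rng = random.Random(seed)
    tp = pairs(target)
    seq = [rng.choice('ACGU') for _ in target]
    best = score(seq, tp)
    for _ in range(steps):
        i = rng.randrange(len(seq))
        old = seq[i]
        seq[i] = rng.choice('ACGU')
        s = score(seq, tp)
        if s >= best:
            best = s
        else:
            seq[i] = old  # reject a worsening move
    return ''.join(seq), best, len(tp)

seq, best, npairs = design('((((....))))')
```

Under the real thermodynamic model the objective is far harder (the designed sequence must *fold* into the target), which is where the article's negative result comes from: for some targets no sequence satisfies it at all.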
Advanced 3D inverse method for designing turbomachine blades
Dang, T.
1995-10-01
To meet the goal of 60% plant-cycle efficiency or better set in the ATS Program for baseload utility scale power generation, several critical technologies need to be developed. One such need is the improvement of component efficiencies. This work addresses the issue of improving the performance of turbo-machine components in gas turbines through the development of an advanced three-dimensional and viscous blade design system. This technology is needed to replace some elements in current design systems that are based on outdated technology.
[Drug design ideas and methods of Chinese herb prescriptions].
Ren, Jun-guo; Liu, Jian-xun
2015-09-01
New drugs derived from Chinese herbal prescriptions, the principal carriers of syndrome differentiation and treatment in Chinese medicine, are the main form of new drug research and development and play a very important role in it. Although prescriptions come from many sources, whether a prescription can become a new drug depends on its necessity, rationality and scientific basis, which are the keys to development. Aiming at these key issues in prescription design, this article discusses the source, classification and composition design of new drugs based on Chinese herbal prescriptions, providing a useful reference for new drug research and development.
Prevalence of Mixed-Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Collins, Kathleen M. T.
2006-01-01
The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…
An efficient multilevel optimization method for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.
1988-01-01
An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system level and subsystem level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.
NASA Technical Reports Server (NTRS)
Chen, Shu-cheng, S.
2009-01-01
For the preliminary design and the off-design performance analysis of axial flow turbines, a pair of intermediate level-of-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction of modern high-performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented in references 3 and 4 (for TD2-2) and in reference 5 (for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces, to perform preliminary design and off-design analysis for modern aircraft engine turbines. Two validation cases for the design and the off-design prediction using TD2-2 and AXOD, conducted on two existing high-efficiency turbines developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program, the High Pressure Turbine (HPT; two stages, air cooled) and the Low Pressure Turbine (LPT; five stages, un-cooled), are provided in support of the analysis and discussion presented in this paper.
Convergence of controllers designed using state space methods
NASA Technical Reports Server (NTRS)
Morris, K. A.
1991-01-01
The convergence of finite dimensional controllers for infinite dimensional systems designed using approximations is examined. Stable coprime factorization theory is used to show that under the standard assumptions of uniform stabilizability/detectability, the controllers stabilize the original system for large enough model order. The controllers converge uniformly to an infinite dimensional controller, as does the closed loop response.
Active Learning Methods and Technology: Strategies for Design Education
ERIC Educational Resources Information Center
Coorey, Jillian
2016-01-01
The demands in higher education are on the rise. Charged with teaching more content, increased class sizes and engaging students, educators face numerous challenges. In design education, educators are often torn between the teaching of technology and the teaching of theory. Learning the formal concepts of hierarchy, contrast and space provide the…
A Prospective Method to Guide Small Molecule Drug Design
ERIC Educational Resources Information Center
Johnson, Alan T.
2015-01-01
At present, small molecule drug design follows a retrospective path when considering what analogs are to be made around a current hit or lead molecule with the focus often on identifying a compound with higher intrinsic potency. What this approach overlooks is the simultaneous need to also improve the physicochemical (PC) and pharmacokinetic (PK)…
Improved Methods for Classification, Prediction and Design of Antimicrobial Peptides
Wang, Guangshun
2015-01-01
Peptides with diverse amino acid sequences, structures and functions are essential players in biological systems. The construction of well-annotated databases not only facilitates effective information management, search and mining, but also lays the foundation for developing and testing new peptide algorithms and machines. The antimicrobial peptide database (APD) is an original construction in terms of both database design and peptide entries. The host defense antimicrobial peptides (AMPs) registered in the APD cover the five kingdoms (bacteria, protists, fungi, plants, and animals) or three domains of life (bacteria, archaea, and eukaryota). This comprehensive database (http://aps.unmc.edu/AP) provides useful information on peptide discovery timeline, nomenclature, classification, glossary, calculation tools, and statistics. The APD enables effective search, prediction, and design of peptides with antibacterial, antiviral, antifungal, antiparasitic, insecticidal, spermicidal, anticancer, chemotactic, immune-modulating, or anti-oxidative activities. A universal classification scheme is proposed herein to unify innate immunity peptides from a variety of biological sources. As an improvement, the upgraded APD makes predictions based on the database-defined parameter space and provides a list of the sequences most similar to natural AMPs. In addition, the powerful pipeline design of the database search engine laid a solid basis for designing novel antimicrobials to combat resistant superbugs, viruses, fungi or parasites. This comprehensive AMP database is a useful tool for both research and education. PMID:25555720
Library Design Analysis Using Post-Occupancy Evaluation Methods.
ERIC Educational Resources Information Center
James, Dennis C.; Stewart, Sharon L.
1995-01-01
Presents findings of a user-based study of the interior of Rodger's Science and Engineering Library at the University of Alabama. Compared facility evaluations from faculty, library staff, and graduate and undergraduate students. Features evaluated include: acoustics, aesthetics, book stacks, design, finishes/materials, furniture, lighting,…
Overview of control design methods for smart structural system
NASA Astrophysics Data System (ADS)
Rao, Vittal S.; Sana, Sridhar
2001-08-01
Smart structures are a result of effective integration of control system design and signal processing with the structural systems to maximally utilize the new advances in materials for structures, actuation and sensing to obtain the best performance for the application at hand. The research in smart structures is constantly driving towards attaining the self-adaptive and diagnostic capabilities that biological systems possess. This has been manifested in the number of successful applications in many areas of engineering such as aerospace, civil and automotive systems. Instrumental in the development of such systems are smart materials, such as piezoelectric, shape-memory-alloy, electrostrictive, magnetostrictive and fiber-optic materials, and various composite materials, for use as actuators, sensors and structural members. The need to maximally utilize these smart actuation and sensing materials in highly distributed and highly adaptable controllers has spurred research in smart structural modeling, identification, actuator/sensor design and placement, and control system design, including adaptive and robust controllers built with tools such as neural networks, fuzzy logic, genetic algorithms and linear matrix inequalities, as well as electronics for controller implementation: analog electronics, microcontrollers, digital signal processors (DSPs), and application-specific integrated circuits (ASICs) such as field-programmable gate arrays (FPGAs) and multichip modules (MCMs). In this paper, we give a brief overview of the state of control in smart structures. Different aspects of the development of smart structures, such as applications, technology and theoretical advances, especially in the area of control system design and implementation, are covered.
Optimal reliability design method for remote solar systems
NASA Astrophysics Data System (ADS)
Suwapaet, Nuchida
A unique optimal reliability design algorithm is developed for remote communication systems. The algorithm either minimizes the unavailability of the system within a fixed cost or minimizes the cost of the system subject to an unavailability constraint. The unavailability of the system is a function of three possible failure occurrences: individual component breakdown, solar energy deficiency (loss of load probability), and satellite/radio transmission loss. The three mathematical models of component failure, solar power failure, and transmission failure are combined and formulated as a nonlinear programming optimization problem with binary decision variables, such as the number and type (or size) of photovoltaic modules, batteries, radios, antennas, and controllers. The three possible failures are identified and integrated in a computer algorithm that generates the parameters for the optimization algorithm. The optimization algorithm is implemented with a branch-and-bound solution technique in MS Excel Solver. The algorithm is applied to a case-study design for an actual system that will be set up in remote mountainous areas of Peru. The automated algorithm is verified with independent calculations. The optimal results from minimizing the unavailability of the system under the cost constraint and from minimizing the total cost of the system under the unavailability constraint are consistent with each other. The tradeoff feature in the algorithm allows designers to observe the results of 'what-if' scenarios of relaxing constraint bounds, thus obtaining the most benefit from the optimization process. An example of this approach applied to an existing communication system in the Andes shows dramatic improvement in reliability for little increase in cost. The algorithm is a real design tool, unlike other existing simulation design tools. The algorithm should be useful for other stochastic systems where component reliability, random supply and demand, and communication are…
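The unavailability model described above can be illustrated with a toy version of the optimization: a block of n identical redundant units fails only if all n fail (q^n), blocks combine in series, and the search minimizes system unavailability under a cost ceiling. The component data and budget below are invented, and exhaustive enumeration stands in for the branch-and-bound step.

```python
from itertools import product

# Hypothetical per-unit data: (unavailability q, cost in USD).
COMPONENTS = {
    'pv_module': (0.05, 400.0),
    'battery':   (0.10, 250.0),
    'radio':     (0.02, 600.0),
}

def system_unavailability(counts):
    """Series chain of parallel blocks: a block of n identical units
    fails only if all n fail (independent failures, hence q**n)."""
    avail = 1.0
    for name, n in counts.items():
        q, _cost = COMPONENTS[name]
        avail *= 1.0 - q ** n
    return 1.0 - avail

def total_cost(counts):
    return sum(COMPONENTS[name][1] * n for name, n in counts.items())

def optimize(budget, max_units=3):
    """Exhaustive stand-in for branch-and-bound: minimize system
    unavailability subject to the cost ceiling."""
    names = list(COMPONENTS)
    best = None
    for combo in product(range(1, max_units + 1), repeat=len(names)):
        counts = dict(zip(names, combo))
        if total_cost(counts) <= budget:
            u = system_unavailability(counts)
            if best is None or u < best[0]:
                best = (u, counts)
    return best

u, counts = optimize(budget=2500.0)
```

With these invented numbers the search doubles every component within the budget, cutting unavailability from several percent to about 1.3%; the same tradeoff structure is what lets a designer probe 'what-if' relaxations of the cost bound.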
Designing a Science Methods Course for Early Childhood Preservice Teachers
ERIC Educational Resources Information Center
Akerson, Valarie L.
2004-01-01
Preparing early childhood (K-3) teachers to teach science presents special challenges for the science methods instructor. Early childhood preservice teachers typically come to the methods classroom with little science content knowledge; they also lack confidence in their own abilities to teach science. This paper presents a theoretical background,…
Applications of Genetic Methods to NASA Design and Operations Problems
NASA Technical Reports Server (NTRS)
Laird, Philip D.
1996-01-01
We review four recent NASA-funded applications in which evolutionary/genetic methods are important. In the process we survey: the kinds of problems being solved today with these methods; techniques and tools used; problems encountered; and areas where research is needed. The presentation slides are annotated briefly at the top of each page.
Design and ergonomics. Methods for integrating ergonomics at hand tool design stage.
Marsot, Jacques; Claudon, Laurent
2004-01-01
As a marked increase in the number of musculoskeletal disorders was noted in many industrialized countries, and more specifically in companies that require the use of hand tools, the French National Research and Safety Institute (INRS) launched in 1999 a research project on integrating ergonomics into hand tool design, and more particularly into the design of a boning knife. After a brief recall of the difficulties of integrating ergonomics at the design stage, the present paper shows how three design methodology tools--Functional Analysis, Quality Function Deployment and TRIZ--have been applied to the design of a boning knife. Implementation of these tools enabled us to demonstrate the extent to which they are capable of responding to the difficulties of integrating ergonomics into product design.
Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai
2016-01-01
Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709
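A minimal sketch of the pipeline described above, with an invented six-adjective NDSM scored on the four-point scale (0-3): a genetic algorithm searches cluster assignments, here written to minimize the total link weight cut between clusters while forbidding degenerate clusterings. The adjectives, weights, and GA settings are illustrative, not the paper's data or exact fitness function.

```python
import random

# Hypothetical four-point-scale link weights (0-3) between six Kansei
# adjectives; NDSM[i][j] rates how strongly adjectives i and j belong
# together in consumers' perception.
ADJ = ['sleek', 'modern', 'sporty', 'sturdy', 'safe', 'reliable']
NDSM = [
    [0, 3, 2, 0, 0, 1],
    [3, 0, 2, 1, 0, 0],
    [2, 2, 0, 0, 1, 0],
    [0, 1, 0, 0, 3, 2],
    [0, 0, 1, 3, 0, 3],
    [1, 0, 0, 2, 3, 0],
]

def cut_weight(assign):
    """Total link weight between adjectives in different clusters."""
    n = len(assign)
    return sum(NDSM[i][j] for i in range(n) for j in range(i + 1, n)
               if assign[i] != assign[j])

def fitness(assign):
    sizes = [assign.count(g) for g in range(2)]
    if min(sizes) < 2:            # forbid degenerate clusterings
        return float('-inf')
    return -cut_weight(assign)    # maximize = minimize the cut

def ga_cluster(pop_size=40, gens=80, seed=1):
    rng = random.Random(seed)
    n = len(ADJ)
    pop = [[rng.randrange(2) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]           # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:            # point mutation
                child[rng.randrange(n)] = rng.randrange(2)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga_cluster()
clusters = [{ADJ[i] for i, g in enumerate(best) if g == c} for c in range(2)]
```

On this toy matrix the weakly linked split separates the styling adjectives from the safety adjectives, which is exactly the kind of primary-word grouping the method aims to extract.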
Design component method for sensitivity analysis of built-up structures
NASA Technical Reports Server (NTRS)
Choi, Kyung K.; Seong, Hwai G.
1986-01-01
A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.
The Hip Impact Protection Project: design and methods.
Barton, Bruce A; Birge, Stanley J; Magaziner, Jay; Zimmerman, Sheryl; Ball, Linda; Brown, Kathleen M; Kiel, Douglas P
2008-01-01
Nearly 340,000 hip fractures occur each year in the U.S. With current demographic trends, the number of hip fractures is expected to double at least in the next 40 years. The Hip Impact Protection Project (HIP PRO) was designed to investigate the efficacy and safety of hip protectors in an elderly nursing home population. This paper describes the innovative clustered matched-pair research design used in HIP PRO to overcome the inherent limitations of clustered randomization. Three clinical centers recruited 37 nursing homes to participate in HIP PRO. They were randomized so that the participating residents in that home received hip protectors for either the right or left hip. Informed consent was obtained from either the resident or the resident's responsible party. The target sample size was 580 residents with replacement if they dropped out, had a hip fracture, or died. One of the advantages of the HIP PRO study design was that each resident was his/her own case and control, eliminating imbalances, and there was no confusion over which residents wore pads (or on which hip). Generalizability of the findings may be limited. Adherence was higher in this study than in other studies because of: (1) the use of a run-in period, (2) staff incentives, and (3) the frequency of adherence assessments. The use of a single pad is not analogous to pad use in the real world and may have caused unanticipated changes in behavior. Fall assessment was not feasible, limiting the ability to analyze fractures as a function of falls. Finally, hip protector designs continue to evolve so that the results generated using this pad may not be applicable to other pad designs. However, information about factors related to adherence will be useful for future studies. The clustered matched-pair study design avoided the major problem with previous cluster-randomized investigations of this question - unbalanced risk factors between the experimental group and the control group. Because each…
Study of Fuze Structure and Reliability Design Based on the Direct Search Method
NASA Astrophysics Data System (ADS)
Lin, Zhang; Ning, Wang
2017-03-01
Redundant design is one of the important methods of improving system reliability, but the design often involves the mutual coupling of multiple factors. In this study, the Direct Search Method is introduced into optimum redundancy configuration for design optimization, in which reliability, cost, structural weight and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of a critical aircraft system are computed. The results show that this method is convenient and workable and, with appropriate modifications, applicable to the redundancy configuration and optimization of various designs, giving it good practical value.
Inverse airfoil design procedure using a multigrid Navier-Stokes method
NASA Technical Reports Server (NTRS)
Malone, J. B.; Swanson, R. C.
1991-01-01
The Modified Garabedian McFadden (MGM) design procedure was incorporated into an existing 2-D multigrid Navier-Stokes airfoil analysis method. The resulting design method is an iterative procedure based on a residual correction algorithm and permits the automated design of airfoil sections with prescribed surface pressure distributions. The new design method, Multigrid Modified Garabedian McFadden (MG-MGM), is demonstrated for several different transonic pressure distributions obtained from both symmetric and cambered airfoil shapes. The airfoil profiles generated with the MG-MGM code are compared to the original configurations to assess the capabilities of the inverse design method.
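The residual-correction idea behind the MGM procedure can be shown with a toy loop: compute the surface pressures, form the residual against the prescribed distribution, and perturb the geometry with a relaxed multiple of that residual until the residual vanishes. The linear 'solver' below is a stand-in for the multigrid Navier-Stokes analysis, and the relaxation factor and target distribution are invented.

```python
def analyze(thickness):
    """Toy 'flow solver': a surface pressure coefficient responding
    monotonically to local thickness (stand-in for Navier-Stokes)."""
    return [-2.0 * t for t in thickness]

def inverse_design(cp_target, thickness, relax=0.3, tol=1e-8, max_iter=200):
    """Residual-correction loop: perturb the geometry with a relaxed
    multiple of the pressure residual until the prescribed
    distribution is matched."""
    for it in range(max_iter):
        cp = analyze(thickness)
        residual = [ct - c for ct, c in zip(cp_target, cp)]
        if max(abs(r) for r in residual) < tol:
            return thickness, it
        # cp falls as thickness grows here, so step against the residual
        thickness = [t - relax * r for t, r in zip(thickness, residual)]
    return thickness, max_iter

final_t, iters = inverse_design([-0.5, -1.0, -0.4], [0.1, 0.1, 0.1])
```

Each pass contracts the residual by a fixed factor here; in the real method the analysis is nonlinear and the geometry update comes from an auxiliary equation, but the alternate-analyze-and-correct structure is the same.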
[Review of research design and statistical methods in Chinese Journal of Cardiology].
Zhang, Li-jun; Yu, Jin-ming
2009-07-01
To evaluate the research design and the use of statistical methods in the Chinese Journal of Cardiology, we reviewed the research design and statistical methods of all original papers published in the journal from December 2007 to November 2008. The most frequently used research designs were cross-sectional (34%), prospective (21%) and experimental (25%). Of all the articles, 49 (25%) used wrong statistical methods, 29 (15%) lacked some form of statistical analysis, and 23 (12%) described their methods inconsistently. There were significant differences between different statistical methods (P < 0.001). The rate of correct use of multifactor analysis was low, and repeated-measures data were not analyzed with repeated-measures methods. Many problems exist in the Chinese Journal of Cardiology; better research design and correct use of statistical methods are still needed, and stricter review by statisticians and epidemiologists is also required to improve the quality of the literature.
Recent advances in the SMS design method: 3D aplanatism and diffraction
NASA Astrophysics Data System (ADS)
Miñano, Juan C.; Benitez, P.; Narasimhan, B.; Nikolic, M.; Mendes-Lopes, J.; Grabovickic, D.
2016-09-01
Recent advances in the Simultaneous Multiple Surfaces (SMS) design method are reviewed in this paper. In particular, we review the design of diffractive surfaces using the SMS method and the concept of freeform aplanatism as a limit case of a 3D SMS design.
NASA Technical Reports Server (NTRS)
Capo, M. A.; Disney, R. K.; Jordan, T. A.; Soltesz, R. G.; Woodsum, H. C.
1969-01-01
Eight computer programs make up a nine volume synthesis containing two design methods for nuclear rocket radiation shields. The first design method is appropriate for parametric and preliminary studies, while the second accomplishes the verification of a final nuclear rocket reactor design.
A hybrid nonlinear programming method for design optimization
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1986-01-01
Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.
New Methods for Design and Computation of Freeform Optics
2015-07-09
as a partial differential equation (PDE) of second order with nonstandard boundary conditions. The solution to this PDE problem is a scalar function… the exact solution with any a priori given accuracy. By contrast with other approaches, the solution obtained with our approach does not depend on ad hoc… strategy for constructing weak solutions to nonlinear partial differential equations arising in design problems involving freeform optical surfaces [10]
Nanobiological studies on drug design using molecular mechanic method.
Ghaheh, Hooria Seyedhosseini; Mousavi, Maryam; Araghi, Mahmood; Rasoolzadeh, Reza; Hosseini, Zahra
2015-01-01
Influenza H1N1 is very important worldwide, and point mutations that occur in the virus genes are a threat for the World Health Organization (WHO) and drug designers, since they could make this virus resistant to existing drugs. Influenza epidemics cause severe respiratory illness in 30 to 50 million people and kill 250,000 to 500,000 people worldwide every year. Nowadays, drug design is not done through trial and error because of the cost and waste of time; bioinformatics studies are therefore essential for designing drugs. This paper presents a study of the binding site of the neuraminidase (NA) enzyme, which is very important in drug design, at a temperature of 310 K and in different dielectrics, toward the best drug design. Information on the NA enzyme was extracted from the Protein Data Bank (PDB) and National Center for Biotechnology Information (NCBI) websites. The new N1 sequences were downloaded from the NCBI influenza virus sequence database. Drug binding sites were modeled by homology using ArgusLab 4.0, HyperChem 6.0 and Chem3D software, and their stability was assessed in different dielectrics and temperatures. Measurements of the potential energy (kcal/mol) of the NA binding sites in different dielectrics at 310 K revealed that at time step size = 0 ps the drug binding sites have the maximum energy level, and at time step size = 100 ps they have maximum stability and minimum energy. The drug binding sites depend more on the dielectric constant than on temperature, and the optimum dielectric constant is 39.78.
Design Method of Fault Detector for Injection Unit
NASA Astrophysics Data System (ADS)
Ochi, Kiyoshi; Saeki, Masami
An injection unit is considered as a speed control system utilizing a reaction-force sensor. Our purpose is to design a fault detector that detects and isolates actuator and sensor faults under the condition that the system is disturbed by a reaction force. First described is the fault detector's general structure. In this system, a disturbance observer that estimates the reaction force is designed for the speed control system in order to obtain the residual signals, and then post-filters that separate the specific frequency elements from the residual signals are applied in order to generate the decision signals. Next, we describe a fault detector designed specifically for a model of the injection unit. It is shown that the disturbance imposed on the decision variables can be made significantly small by appropriate adjustments to the observer bandwidth, and that most of the sensor faults and actuator faults can be detected and some of them can be isolated in the frequency domain by setting the frequency characteristics of the post-filters appropriately. Our result is verified by experiments for an actual injection unit.
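The structure the paper describes, a residual obtained from a model-based estimate, a post-filter, and a threshold decision, can be sketched for a simple speed loop. The plant model, filter constant, threshold and injected sensor bias below are all hypothetical stand-ins, not the paper's injection-unit model or observer design:

```python
def detect_faults(u, v_meas, dt=0.01, J=1.0, alpha=0.9, threshold=0.2):
    """Flag samples where the filtered residual between measured and
    model-predicted speed exceeds a decision threshold."""
    v_pred, r_filt, flags = 0.0, 0.0, []
    for uk, vk in zip(u, v_meas):
        residual = vk - v_pred                            # raw residual signal
        r_filt = alpha * r_filt + (1 - alpha) * residual  # post-filter (low-pass)
        flags.append(abs(r_filt) > threshold)             # decision signal
        v_pred += dt * uk / J                             # propagate nominal model
    return flags

# Simulate: constant input torque; the speed sensor develops a +0.5 bias at sample 50
u = [1.0] * 100
v_true, v = 0.0, []
for k in range(100):
    v.append(v_true + (0.5 if k >= 50 else 0.0))
    v_true += 0.01 * 1.0
flags = detect_faults(u, v)
```

The filtered decision signal stays near zero until the fault appears, then rises past the threshold a few samples later.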
ERIC Educational Resources Information Center
Nogry, S.; Jean-Daubias, S.; Guin, N.
2012-01-01
This article deals with evaluating an interactive learning environment (ILE) during the iterative-design process. Various aspects of the system must be assessed and a number of evaluation methods are available. In designing the ILE Ambre-add, several techniques were combined to test and refine the system. In particular, we point out the merits of…
Grouping design method of catadioptric projection objective for deep ultraviolet lithography
NASA Astrophysics Data System (ADS)
Cao, Zhen; Li, Yanqiu; Mao, Shanshan
2017-02-01
Choosing an adequate initial design for optimization plays an important role in obtaining high-quality deep ultraviolet (DUV) lithographic objectives. In this paper, the grouping design method is extended to acquire initial configurations of catadioptric projection objective for DUV lithography. In this method, an objective system is first divided into several lens groups. The initial configuration of each lens group is then determined by adjusting and optimizing existing lens design according to respective design requirements. Finally, the lens groups are connected into a feasible initial objective system. Grouping design allocates the complexity of designing a whole system to each of the lens groups, which significantly simplifies the design process. A two-mirror design form serves as an example for illustrating the grouping design principles to this type of system. In addition, it is demonstrated that different initial designs can be generated by changing the design form of each individual lens group.
Integer programming methods for reserve selection and design
Robert G. Haight; Stephanie A. Snyder
2009-01-01
How many nature reserves should there be? Where should they be located? Which places have highest priority for protection? Conservation biologists, economists, and operations researchers have been developing quantitative methods to address these questions since the 1980s.
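The reserve-selection question is commonly posed as a set-covering integer program: choose the fewest (or cheapest) sites so that every species is represented at least once. A brute-force sketch with invented site/species data, standing in for a real integer-programming solver:

```python
from itertools import combinations

# Hypothetical coverage: which species each candidate site protects
SITES = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}, "D": {1, 4}}
SPECIES = {1, 2, 3, 4}

def min_reserve(sites, species):
    """Smallest set of sites that covers every species (brute force)."""
    names = list(sites)
    for k in range(1, len(names) + 1):          # try smaller reserves first
        for combo in combinations(names, k):
            covered = set().union(*(sites[s] for s in combo))
            if covered >= species:
                return combo
    return None
```

Real instances with hundreds of sites are solved with ILP formulations rather than enumeration, but the objective and coverage constraints are the same.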
Advanced Control and Protection system Design Methods for Modular HTGRs
Ball, Sydney J; Wilson Jr, Thomas L; Wood, Richard Thomas
2012-06-01
The project supported the Nuclear Regulatory Commission (NRC) in identifying and evaluating the regulatory implications concerning the control and protection systems proposed for use in the Department of Energy's (DOE) Next-Generation Nuclear Plant (NGNP). The NGNP, using modular high-temperature gas-cooled reactor (HTGR) technology, is to provide commercial industries with electricity and high-temperature process heat for industrial processes such as hydrogen production. Process heat temperatures range from 700 to 950 C, and for the upper range of these operation temperatures, the modular HTGR is sometimes referred to as the Very High Temperature Reactor or VHTR. Initial NGNP designs are for operation in the lower temperature range. The defining safety characteristic of the modular HTGR is that its primary defense against serious accidents is to be achieved through its inherent properties of the fuel and core. Because of its strong negative temperature coefficient of reactivity and the capability of the fuel to withstand high temperatures, fast-acting active safety systems or prompt operator actions should not be required to prevent significant fuel failure and fission product release. The plant is designed such that its inherent features should provide adequate protection despite operational errors or equipment failure. Figure 1 shows an example modular HTGR layout (prismatic core version), where its inlet coolant enters the reactor vessel at the bottom, traversing up the sides to the top plenum, down-flow through an annular core, and exiting from the lower plenum (hot duct). This research provided NRC staff with (a) insights and knowledge about the control and protection systems for the NGNP and VHTR, (b) information on the technologies/approaches under consideration for use in the reactor and process heat applications, (c) guidelines for the design of highly integrated control rooms, (d) consideration for modeling of control and protection system designs for
Object-oriented design of preconditioned iterative methods
Bruaset, A.M.
1994-12-31
In this talk the author discusses how object-oriented programming techniques can be used to develop a flexible software package for preconditioned iterative methods. The ideas described have been used to implement the linear algebra part of Diffpack, which is a collection of C++ class libraries that provides high-level tools for the solution of partial differential equations. In particular, this software package is aimed at rapid development of PDE-based numerical simulators, primarily using finite element methods.
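The separation the talk describes, solver code written only against an abstract preconditioner interface, might be sketched as follows. Diffpack itself is C++; the Python classes and the Jacobi preconditioner here are generic illustrations, not Diffpack's API:

```python
class Preconditioner:
    """Abstract interface: solvers only ever call apply()."""
    def apply(self, r):
        raise NotImplementedError

class JacobiPreconditioner(Preconditioner):
    """Diagonal scaling, the simplest concrete preconditioner."""
    def __init__(self, A):
        self.inv_diag = [1.0 / A[i][i] for i in range(len(A))]
    def apply(self, r):
        return [d * ri for d, ri in zip(self.inv_diag, r)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def matvec(A, x):
    return [dot(row, x) for row in A]

def pcg(A, b, M, tol=1e-10, maxiter=100):
    """Preconditioned conjugate gradients for SPD A, written only
    against the abstract Preconditioner interface M."""
    x = [0.0] * len(b)
    r = list(b)
    z = M.apply(r)
    p = list(z)
    rz = dot(r, z)
    for _ in range(maxiter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = M.apply(r)
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

Swapping in an ILU or multigrid preconditioner would only require another subclass; the solver is untouched, which is the point of the object-oriented design.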
Reducing Design Risk Using Robust Design Methods: A Dual Response Surface Approach
NASA Technical Reports Server (NTRS)
Unal, Resit; Yeniay, Ozgur; Lepsch, Roger A. (Technical Monitor)
2003-01-01
Space transportation system conceptual design is a multidisciplinary process containing considerable element of risk. Risk here is defined as the variability in the estimated (output) performance characteristic of interest resulting from the uncertainties in the values of several disciplinary design and/or operational parameters. Uncertainties from one discipline (and/or subsystem) may propagate to another, through linking parameters and the final system output may have a significant accumulation of risk. This variability can result in significant deviations from the expected performance. Therefore, an estimate of variability (which is called design risk in this study) together with the expected performance characteristic value (e.g. mean empty weight) is necessary for multidisciplinary optimization for a robust design. Robust design in this study is defined as a solution that minimizes variability subject to a constraint on mean performance characteristics. Even though multidisciplinary design optimization has gained wide attention and applications, the treatment of uncertainties to quantify and analyze design risk has received little attention. This research effort explores the dual response surface approach to quantify variability (risk) in critical performance characteristics (such as weight) during conceptual design.
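The "minimize variability subject to a mean constraint" formulation can be illustrated with a toy grid search: replicated runs of a hypothetical weight model give a mean and a standard deviation at each design setting, and the robust choice is the feasible setting with the smallest spread. Sample statistics stand in here for the fitted dual response surfaces, and all numbers are invented:

```python
import random
import statistics

rng = random.Random(0)

def simulate_weight(x, reps=30):
    """Hypothetical response: mean weight grows with x while its
    run-to-run variability shrinks with x (all numbers invented)."""
    return [100.0 + 5.0 * x + rng.gauss(0.0, 4.0 - 0.5 * x) for _ in range(reps)]

CANDIDATES = [0, 1, 2, 3, 4, 5]
MEAN_LIMIT = 121.0  # constraint on mean performance

stats = {x: (statistics.mean(s), statistics.stdev(s))
         for x, s in ((x, simulate_weight(x)) for x in CANDIDATES)}

feasible = [x for x in CANDIDATES if stats[x][0] <= MEAN_LIMIT]
robust_x = min(feasible, key=lambda x: stats[x][1])  # least-variable feasible point
```

In the dual response surface approach the mean and standard deviation would each be fitted as a regression surface over the design variables before this constrained minimization.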
[Principles in the design of surface temperature measurement method via radiation approach].
Cheng, Xiao-fang; Wang, An-quan; Fu, Tai-ran
2003-04-01
The fundamental issues in the design of surface temperature measurement method via radiation approach are discussed in this article, such as the spectral and directional complicacy of thermal radiation, the optical design, and the electrocircuit design. A generalized equation for the analysis of temperature measurement method via radiation approach is proposed. The method of absolute value and relative value is compared. Calibration should be made in the absolute value method but not in the relative value method. The former method is not suitable for the measurement of temperature fields while the latter method is.
Photovoltaic module hot spot durability design and test methods
NASA Technical Reports Server (NTRS)
Arnett, J. C.; Gonzalez, C. C.
1981-01-01
As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, the susceptibility of fat-plate modules to hot-spot problems is investigated. Hot-spot problems arise in modules when the cells become back-biased and operate in the negative-voltage quadrant, as a result of short-circuit current mismatch, cell cracking or shadowing. The details of a qualification test for determining the capability of modules of surviving field hot-spot problems and typical results of this test are presented. In addition, recommended circuit-design techniques for improving the module and array reliability with respect to hot-spot problems are presented.
Genetic-evolution-based optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.
1990-01-01
This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
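The reproduction, crossover, and mutation operators described above can be sketched as a minimal real-coded genetic algorithm. The population size, rates, tournament selection and toy fitness are illustrative choices, not the parameters used in the paper:

```python
import random

def genetic_optimize(fitness, n_genes, pop_size=40, generations=60,
                     crossover_rate=0.8, mutation_rate=0.05, seed=1):
    """Maximize fitness over genomes of n_genes values in [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                       # elitism: keep the two best
        while len(next_pop) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)  # tournament reproduction
            b = max(rng.sample(pop, 3), key=fitness)
            child = list(a)
            if rng.random() < crossover_rate:         # one-point crossover
                cut = rng.randrange(1, n_genes)
                child = a[:cut] + b[cut:]
            child = [g if rng.random() > mutation_rate else rng.random()
                     for g in child]                  # gene-wise mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

A structural application would encode member areas or actuator locations in the genome and evaluate fitness with a finite element analysis in place of the toy function.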
Evaluation of cabin design based on the method of multiple attribute group decision-making
NASA Astrophysics Data System (ADS)
Li, Xiaowen; Lv, Linlin; Li, Ping
2013-07-01
In the new century, cabin design has become an important factor affecting the combat capability of modern naval vessels. Traditional cabin design, based on naval rules and the designer's subjective feeling and experience, holds that weapons and equipment are more important than habitability, so crews' satisfaction with ships designed by traditional methods is not high. To solve this problem, the method of multiple attribute group decision-making is proposed to evaluate cabin design projects. The method considers the many factors affecting cabin design, establishes a target system, quantifies fuzzy factors in cabin design, analyzes the needs of crews, and gives a reasonable evaluation of cabin design projects. Finally, an illustrative example analysis validates the effectiveness and reliability of the method.
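A minimal additive form of multiple attribute group decision-making can be sketched as follows: each expert scores each design project on each attribute, the scores are averaged across the group, and projects are ranked by their weighted sums. The attribute weights and scores are invented for illustration, and the paper's quantification of fuzzy factors is omitted:

```python
# expert_scores[expert][project] = attribute scores on a 0-10 scale (hypothetical)
expert_scores = [
    {"design_A": [8, 6, 7], "design_B": [6, 9, 5]},
    {"design_A": [7, 7, 6], "design_B": [5, 8, 6]},
]
weights = [0.5, 0.3, 0.2]  # e.g. combat capability, habitability, cost

def rank_projects(expert_scores, weights):
    """Rank projects by the weighted sum of group-averaged attribute scores."""
    totals = {}
    for p in expert_scores[0]:
        # average each attribute score over the expert group
        avg = [sum(e[p][i] for e in expert_scores) / len(expert_scores)
               for i in range(len(weights))]
        totals[p] = sum(w * a for w, a in zip(weights, avg))
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_projects(expert_scores, weights)
```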
A New Comparison of Nested Case-Control and Case-Cohort Designs and Methods
Kim, Ryung S.
2014-01-01
Existing literature comparing the statistical properties of nested case-control and case-cohort methods has become insufficient for present-day epidemiologists. The literature has not reconciled conflicting conclusions about the standard methods. Moreover, a comparison including newly developed methods, such as inverse probability weighting methods, is needed. Two analytical methods for nested case-control studies and six methods for case-cohort studies using the proportional hazards regression model were summarized and their statistical properties were compared. The answer to which design and method is more powerful was more nuanced than previously reported. For both nested case-control and case-cohort designs, inverse probability weighting methods were more powerful than the standard methods. However, the difference became negligible when the proportion of failure events was very low (<1%) in the full cohort. The comparison between the two designs depended on the censoring type and incidence proportion: with random censoring, nested case-control designs coupled with the inverse probability weighting method yielded the highest statistical power among all methods for both designs. With fixed censoring times, there was little difference in efficiency between the two designs when inverse probability weighting methods were used; however, the standard case-cohort methods were more powerful than the conditional logistic method for nested case-control designs. As the proportion of failure events in the full cohort became smaller (<10%), nested case-control methods outperformed all case-cohort methods and the choice of analytic methods within each design became less important. When the predictor of interest was binary, the standard case-cohort methods were often more powerful than the conditional logistic method for nested case-control designs. PMID:25446306
A design method for minimizing sensitivity to plant parameter variations
NASA Technical Reports Server (NTRS)
Hadass, Z.; Powell, J. D.
1974-01-01
A method is described for minimizing the sensitivity of multivariable systems to parameter variations. The variable parameters are treated as random variables and their effect is included in a quadratic performance index, a weighted sum of the state and control covariances that stem from both the random system disturbances and the parameter uncertainties. The numerical solution of the problem is described, and application of the method to several initially sensitive tracking systems is discussed. The factor of sensitivity reduction was typically 2 or 3 relative to a design based on random system noise only, yet it resulted in state RMS increases of only about a factor of two.
Cathodic protection design using the regression and correlation method
Niembro, A.M.; Ortiz, E.L.G.
1997-09-01
A computerized statistical method which calculates the current demand requirement based on potential measurements for cathodic protection systems is introduced. The method uses the regression and correlation analysis of statistical measurements of current and potentials of the piping network. This approach involves four steps: field potential measurements, statistical determination of the current required to achieve full protection, installation of more cathodic protection capacity with distributed anodes around the plant and examination of the protection potentials. The procedure is described and recommendations for the improvement of the existing and new cathodic protection systems are given.
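The regression and correlation step can be sketched as an ordinary least-squares fit of structure potential against applied current, extrapolated to the protection criterion. The survey data and the -0.85 V criterion value below are illustrative assumptions, not figures from the paper:

```python
def linear_fit(x, y):
    """Least-squares slope, intercept, and correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5

# Hypothetical field survey: applied current (A) vs structure potential (V vs Cu/CuSO4)
current = [0.0, 5.0, 10.0, 15.0, 20.0]
potential = [-0.55, -0.62, -0.70, -0.76, -0.83]

slope, intercept, r = linear_fit(current, potential)
# Extrapolated current demand to reach the assumed -0.85 V protection criterion
demand = (-0.85 - intercept) / slope
```

A correlation coefficient near -1 indicates the linear extrapolation of current demand is trustworthy; a weak correlation would call for more potential measurements.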
NASA Astrophysics Data System (ADS)
Cui, Jin-ju; Wang, De-yu; Shi, Qi-qi
2015-06-01
Knowledge-Based Engineering (KBE) is introduced into the ship structural design in this paper. From the implementation of KBE, the design solutions for both Rules Design Method (RDM) and Interpolation Design Method (IDM) are generated. The corresponding Finite Element (FE) models are generated. Topological design of the longitudinal structures is studied where the Gaussian Process (GP) is employed to build the surrogate model for FE analysis. Multi-objective optimization methods inspired by Pareto Front are used to reduce the design tank weight and outer surface area simultaneously. Additionally, an enhanced Level Set Method (LSM) which employs implicit algorithm is applied to the topological design of typical bracket plate which is used extensively in ship structures. Two different sets of boundary conditions are considered. The proposed methods show satisfactory efficiency and accuracy.
Computer control of large accelerators, design concepts and methods
NASA Astrophysics Data System (ADS)
Beck, F.; Gormley, M.
1985-03-01
Unlike most of the specialities treated in this volume, control system design is still an art, not a science. This presentation is an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies, and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented, since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided.
Computer control of large accelerators design concepts and methods
Beck, F.; Gormley, M.
1984-05-01
Unlike most of the specialities treated in this volume, control system design is still an art, not a science. These lectures are an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided. 19 references.
Flight critical system design guidelines and validation methods
NASA Technical Reports Server (NTRS)
Holt, H. M.; Lupton, A. O.; Holden, D. G.
1984-01-01
Efforts being expended at NASA-Langley to define a validation methodology, techniques for comparing advanced systems concepts, and design guidelines for characterizing fault tolerant digital avionics are described with an emphasis on the capabilities of AIRLAB, an environmentally controlled laboratory. AIRLAB has VAX 11/750 and 11/780 computers with an aggregate of 22 Mb memory and over 650 Mb storage, interconnected at 256 kbaud. An additional computer is programmed to emulate digital devices. Ongoing work is easily accessed at user stations by either chronological or key word indexing. The CARE III program aids in analyzing the capabilities of test systems to recover from faults. An additional code, the semi-Markov unreliability program (SURE) generates upper and lower reliability bounds. The AIRLAB facility is mainly dedicated to research on designs of digital flight-critical systems which must have acceptable reliability before incorporation into aircraft control systems. The digital systems would be too costly to submit to a full battery of flight tests and must be initially examined with the AIRLAB simulation capabilities.
Citalopram for agitation in Alzheimer's disease: design and methods.
Drye, Lea T; Ismail, Zahinoor; Porsteinsson, Anton P; Rosenberg, Paul B; Weintraub, Daniel; Marano, Christopher; Pelton, Gregory; Frangakis, Constantine; Rabins, Peter V; Munro, Cynthia A; Meinert, Curtis L; Devanand, D P; Yesavage, Jerome; Mintzer, Jacobo E; Schneider, Lon S; Pollock, Bruce G; Lyketsos, Constantine G
2012-01-01
Agitation is one of the most common neuropsychiatric symptoms of Alzheimer's disease (AD), and is associated with serious adverse consequences for patients and caregivers. Evidence-supported treatment options for agitation are limited. The citalopram for agitation in Alzheimer's disease (CitAD) study was designed to evaluate the potential of citalopram to ameliorate these symptoms. CitAD is a randomized, double-masked, placebo-controlled multicenter clinical trial, with two parallel treatment groups assigned in a 1:1 ratio and randomization stratified by clinical center. The study included eight recruiting clinical centers, a chair's office, and a coordinating center located in university settings in the United States and Canada. A total of 200 individuals having probable AD with clinically significant agitation and without major depression were recruited for this study. Patients were randomized to receive citalopram (target dose of 30 mg/d) or matching placebo. Caregivers of patients in both treatment groups received a structured psychosocial therapy. Agitation was compared between treatment groups using the NeuroBehavioral Rating Scale and the AD Cooperative Study- Clinical Global Impression of Change, which are the primary outcomes. Functional performance, cognition, caregiver distress, and rates of adverse and serious adverse events were also measured. The authors believe the design elements in CitAD are important features to be included in trials assessing the safety and efficacy of psychotropic medications for clinically significant agitation in AD. Copyright Â© 2012 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Heuristic urban transportation network design method, a multilayer coevolution approach
NASA Astrophysics Data System (ADS)
Ding, Rui; Ujang, Norsidah; Hamid, Hussain bin; Manan, Mohd Shahrudin Abd; Li, Rong; Wu, Jianjun
2017-08-01
The design of urban transportation networks plays a key role in the urban planning process, and the coevolution of urban networks has recently garnered significant attention in the literature. However, most of these recent articles are based on networks that are essentially planar. In this research, we propose a heuristic multilayer urban network coevolution model with a lower-layer network and an upper-layer network that grow in association and stimulate one another. We first use the relative neighbourhood graph and the Gabriel graph to simulate the structure of the rail and road networks, respectively. Through simulation we find that when a specific number of nodes are added, the total travel cost ratio between an expanded network and the initial lower-layer network reaches its lowest value. The cooperation strength Λ and the changeable parameter average operation speed ratio Θ show that transit users' route choices change dramatically through the coevolution process and that their decisions, in turn, affect the multilayer network structure. We also note that the simulated relation between the Gini coefficient of the betweenness centrality, Θ and Λ has an optimal point for network design. This research could inspire the analysis of urban network topology features and the assessment of urban growth trends.
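A Gabriel graph of the kind used here for the road layer connects two nodes exactly when no third node lies inside the circle having their segment as diameter; the relative neighbourhood graph used for the rail layer is the analogous (stricter) test with lunes. A small sketch of the Gabriel construction:

```python
def gabriel_graph(points):
    """Connect i and j iff no other point lies strictly inside the
    circle whose diameter is the segment from points[i] to points[j]."""
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (xi, yi), (xj, yj) = points[i], points[j]
            mx, my = (xi + xj) / 2.0, (yi + yj) / 2.0      # circle centre
            r2 = ((xi - xj) ** 2 + (yi - yj) ** 2) / 4.0   # squared radius
            if all((px - mx) ** 2 + (py - my) ** 2 >= r2
                   for k, (px, py) in enumerate(points) if k != i and k != j):
                edges.append((i, j))
    return edges
```

For three collinear points the long edge is excluded because the middle point sits inside its diameter circle, which is why such graphs produce sparse, planar-like street skeletons.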
Category's analysis and operational project capacity method of transformation in design
NASA Astrophysics Data System (ADS)
Obednina, S. V.; Bystrova, T. Y.
2015-10-01
The method of transformation is attracting widespread interest in fields such as contemporary design. However, in design theory little attention has been paid to the categorical status of the term "transformation". This paper presents a conceptual analysis of transformation based on the theory of form employed in the influential essays of Aristotle and Thomas Aquinas. Transformation as a method of shaping in design is explored, and the potential application of the term in design is demonstrated.
A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design
ERIC Educational Resources Information Center
Palladino, John M.
2009-01-01
Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…
Mixing Qualitative and Quantitative Methods: Insights into Design and Analysis Issues
ERIC Educational Resources Information Center
Lieber, Eli
2009-01-01
This article describes and discusses issues related to research design and data analysis in the mixing of qualitative and quantitative methods. It is increasingly desirable to use multiple methods in research, but questions arise as to how best to design and analyze the data generated by mixed methods projects. I offer a conceptualization for such…
Experimental Evaluation of Design Methods for Hardened Piping Systems.
…prediction capabilities of present-day computer methods. The basic pipe elements tested included straight pipes, area changes, elbows, valves, a pump, and… surge tanks. The piping system tested was a closed-loop system which contained the following elements: elbows, straight pipes, valves, a pump, and an…
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
ERIC Educational Resources Information Center
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
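As a rough illustration of the Monte Carlo framework described above, the following sketch estimates power for a single-mediator model X → M → Y using the joint-significance test of the two paths. The path coefficients, sample size, unit-variance normal errors, and choice of test are illustrative assumptions, not the authors' exact models.

```python
import math
import random

def _slope_z(x, y):
    """z statistic of the OLS slope of y regressed on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    resid = [yi - my - beta * (xi - mx) for xi, yi in zip(x, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return beta / se

def simulate_power(a=0.3, b=0.3, n=200, reps=500, seed=1):
    """Monte Carlo power to detect the mediated effect a*b (joint significance)."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided 5% level
    hits = 0
    for _ in range(reps):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        ms = [a * x + rng.gauss(0, 1) for x in xs]
        ys = [b * m + rng.gauss(0, 1) for m in ms]
        # mediation "detected" if both the a-path and the b-path are significant
        if abs(_slope_z(xs, ms)) > z_crit and abs(_slope_z(ms, ys)) > z_crit:
            hits += 1
    return hits / reps
```

In practice the same loop generalizes to latent-variable or growth-curve mediation by swapping the data-generating step and the fitted model.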
Application of Six Sigma Method to EMS Design
NASA Astrophysics Data System (ADS)
Rusko, Miroslav; Králiková, Ružena
2011-01-01
The Six Sigma method is a complex and flexible system for achieving, maintaining and maximizing business success. Six Sigma is based mainly on understanding customer needs and expectations, disciplined use of facts and statistical analysis, and a responsible approach to managing, improving and establishing new business, manufacturing and service processes.
Stochastic Methods in Protective Structure Design: An Integrated Approach
1988-09-01
[Fragmentary report front matter: figure-list entries for histograms and probability plots of Monte Carlo method responses; static Monte Carlo simulations for ACI shear and direct shear response; and reduction of the problem to a wave-propagation, breaching, or penetration problem via a simple Monte Carlo simulation of the range-versus-pressure function.]
The Use of Hermeneutics in a Mixed Methods Design
ERIC Educational Resources Information Center
von Zweck, Claudia; Paterson, Margo; Pentland, Wendy
2008-01-01
Combining methods in a single study is becoming a more common practice because of the limitations of using only one approach to fully address all aspects of a research question. Hermeneutics in this paper is discussed in relation to a large national study that investigated issues influencing the ability of international graduates to work as…
Iterative method of baffle design for modified Ritchey-Chretien telescope.
Senthil Kumar, M; Narayanamurthy, C S; Kiran Kumar, A S
2013-02-20
We developed a baffle design method based on a combination of the results of optical design software and analytical relations formulated herein. The method finds the exact solution for baffle parameters of a modified Ritchey-Chretien telescope by iteratively solving the analytical relations using the actual ray coordinates of the telescope computed with the aid of optical design software. The baffle system so designed not only blocks the direct rays of stray light reaching the image plane but also provides minimum obscuration to imaging light. Based on the iterative method, we proposed a baffle design approach for a rectangular-image-format telescope.
International children's accelerometry database (ICAD): Design and methods
2011-01-01
Background Over the past decade, accelerometers have increased in popularity as an objective measure of physical activity in free-living individuals. Evidence suggests that objective measures, rather than subjective tools such as questionnaires, are more likely to detect associations between physical activity and health in children. To date, a number of studies of children and adolescents across diverse cultures around the globe have collected accelerometer measures of physical activity accompanied by a broad range of predictor variables and associated health outcomes. The International Children's Accelerometry Database (ICAD) project pooled and reduced raw accelerometer data using standardized methods to create comparable outcome variables across studies. Such data pooling has the potential to improve our knowledge regarding the strength of relationships between physical activity and health. This manuscript describes the contributing studies, outlines the standardized methods used to process the accelerometer data and provides the initial questions which will be addressed using this novel data repository. Methods Between September 2008 and May 2010 46,131 raw Actigraph data files and accompanying anthropometric, demographic and health data collected on children (aged 3-18 years) were obtained from 20 studies worldwide and data was reduced using standardized analytical methods. Results When using ≥ 8, ≥ 10 and ≥ 12 hrs of wear per day as a criterion, 96%, 93.5% and 86.2% of the males, respectively, and 96.3%, 93.7% and 86% of the females, respectively, had at least one valid day of data. Conclusions Pooling raw accelerometer data and accompanying phenotypic data from a number of studies has the potential to: a) increase statistical power due to a large sample size, b) create a more heterogeneous and potentially more representative sample, c) standardize and optimize the analytical methods used in the generation of outcome variables, and d) provide a means to
AI/OR computational model for integrating qualitative and quantitative design methods
NASA Technical Reports Server (NTRS)
Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor
1990-01-01
A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.
The research progress on Hodograph Method of aerodynamic design at Tsinghua University
NASA Technical Reports Server (NTRS)
Chen, Zuoyi; Guo, Jingrong
1991-01-01
Progress in the use of the Hodograph method of aerodynamic design is discussed. It was found that there are some restricted conditions in the application of Hodograph design to transonic turbine and compressor cascades. The Hodograph method is suitable not only for the transonic turbine cascade but also for the transonic compressor cascade. The three-dimensional Hodograph method will be developed after its basic equation is obtained. As an example, the use of the method to design a transonic turbine and compressor cascade is discussed.
Matching wind turbine rotors and loads: Computational methods for designers
NASA Astrophysics Data System (ADS)
Seale, J. B.
1983-04-01
A comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications was reported. A method was developed to convert the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) it is decided how turbine power is to be governed to insure safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics are used to predict longterm energy output. Most systems are approximated by a graph and calculator approach. The method leads to energy predictions, and to insight into modeled processes. A computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps; including three different load compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculation, deterministic methods have some advantages and also some disadvantages relative to other kind of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions while the disadvantages are related to the often-used build-up factor that is extrapolated from high to low energies or with unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how good are some deterministic methods to calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. Commercial software MicroShield 5.05 has been used as the deterministic code while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shield have been defined allowing comparison between the capability of both Monte Carlo and deterministic methods in a day-by-day shielding calculation using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
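The deterministic build-up-factor approach discussed above can be caricatured as a point-kernel estimate: uncollided flux times a build-up correction. This is a minimal sketch; the linear build-up form and the coefficient values below are placeholder assumptions, not the models used by MicroShield or MCNP.

```python
import math

def dose_rate(source_strength, mu, thickness, distance, k=1.0):
    """Point source behind a slab: S/(4*pi*r^2) * B(mu*t) * exp(-mu*t).

    mu        : linear attenuation coefficient (1/cm), assumed value
    thickness : slab thickness (cm)
    k         : coefficient of the simple linear build-up model B = 1 + k*mu*t
    """
    mt = mu * thickness               # number of mean free paths
    buildup = 1.0 + k * mt            # crude linear build-up correction
    uncollided = source_strength / (4.0 * math.pi * distance ** 2)
    return uncollided * buildup * math.exp(-mt)
```

The sensitivity analysis mentioned in the abstract amounts to sweeping `mu`, `thickness`, and the build-up model and comparing against a Monte Carlo reference.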
Design of synthetic soil images using the Truncated Multifractal method
NASA Astrophysics Data System (ADS)
Sotoca, Juan J. Martin; Saa-Requejo, Antonio; López de Herrera, Juan; Grau, Juan B.
2017-04-01
The use of synthetic images in soils is an increasingly used resource when comparing different segmentation methods. This type of images can simulate features of the real soil images. We can find examples of 2D and 3D synthetic soil images in the studies by Zhang (2001), Schlüter et al. (2010) and Wang et al. (2011). The aim of this presentation is to show an improved version of the Truncated Multifractal method (TMM) which was initially introduced by Martín-Sotoca et al. (2016a, 2016b). The TMM is able to construct a 3D synthetic soil image that is composed of a known air-filled pore space and a background space, which includes, as a novelty, a pebble space. The pebble space simulates the pebbles or granules of high intensity that typically appear in computed tomography (CT) soil images. The TMM can simulate the two main characteristics of the CT soil images: the scaling nature of the pore space and the low contrast at the solid/pore interface with non-bimodal greyscale value histograms. In this presentation we introduce some new components which improve the similitude between real and synthetic CT soil images. REFERENCES Martín-Sotoca, J.J., Saa-Requejo, A., Grau, J.B. and Tarquis, A.M. (2016a). New segmentation method based on fractal properties using singularity maps. Geoderma, doi: 10.1016/j.geoderma.2016.09.005 Martín-Sotoca, J.J., Saa-Requejo, A., Grau, J.B., Tarquis, A.M. (2016b). Local 3D segmentation of soil pore space based on fractal properties using singularity maps. Geoderma, doi: 10.1016/j.geoderma.2016.11.029 Schlüter, S., Weller, U., Vogel, H.J., (2010). Thresholding of X-ray microtomography images of soil using gradient masks. Comput. Geosci. 36, 1246-1251 Wang, W., Kravchenko, A.N., Smucker, A.J.M., Rivers, M.L. (2011). Comparison of image segmentation methods in simulated 2D and 3D microtomographic images of soil aggregates. Geoderma, 162, 231-241 Zhang, Y.J. (2001). A review of recent evaluation methods for image segmentation
Bumpus, S.E.; Johnson, J.J.; Smith, P.D.
1980-05-01
The concept of how two techniques, Best Estimate Method and Evaluation Method, may be applied to the traditional seismic analysis and design of a nuclear power plant is introduced. Only the four links of the seismic analysis and design methodology chain (SMC) - seismic input, soil-structure interaction, major structural response, and subsystem response - are considered. The objective is to evaluate the compounding of conservatisms in the seismic analysis and design of nuclear power plants, to provide guidance for judgments in the SMC, and to concentrate the evaluation on that part of the seismic analysis and design which is familiar to the engineering community. An example applies the effects of three-dimensional excitations on a model of a nuclear power plant structure. The example demonstrates how conservatisms accrue by coupling two links in the SMC and comparing those results to the effects of one link alone. The utility of employing the Best Estimate Method vs the Evaluation Method is also demonstrated.
Matching wind turbine rotors and loads: computational methods for designers
Seale, J.B.
1983-04-01
This report provides a comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications. The user must supply: (1) turbine aerodynamic efficiency as a function of tipspeed ratio; (2) mechanical load torque as a function of rotation speed; (3) useful delivered power as a function of incoming mechanical power; (4) site average windspeed and, for maximum accuracy, distribution data. The description of the data includes governing limits consistent with the capacities of components. The report develops a step-by-step method for converting the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) a decision is made how turbine power is to be governed (it may self-govern) to insure safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics come into play to predict longterm energy output. Most systems can be approximated by a graph-and-calculator approach: computer-generated families of coefficient curves provide data for algebraic scaling formulas. The method leads not only to energy predictions, but also to insight into the processes being modeled. Direct use of a computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load-compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
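Steps (1) and (4) of the procedure above can be sketched numerically: predict turbine power from an efficiency curve, then integrate over a Rayleigh windspeed distribution for long-term energy. The efficiency curve, rotor radius, and air density below are illustrative assumptions, not values from the report.

```python
import math

RHO = 1.225  # air density, kg/m^3 (assumed sea-level value)

def cp(tsr):
    """Toy aerodynamic efficiency curve peaking near tip-speed ratio 7."""
    return max(0.0, 0.45 - 0.01 * (tsr - 7.0) ** 2)

def turbine_power(v, radius=5.0, tsr=7.0):
    """Step (1): turbine power (W) at windspeed v from the efficiency curve."""
    area = math.pi * radius ** 2
    return 0.5 * RHO * area * cp(tsr) * v ** 3

def annual_energy(v_mean, radius=5.0, dv=0.25, v_max=25.0):
    """Step (4): integrate power over a Rayleigh windspeed pdf -> kWh/yr."""
    total = 0.0
    v = dv
    while v <= v_max:
        # Rayleigh pdf parameterized by mean windspeed v_mean
        p = (math.pi * v / (2 * v_mean ** 2)) * math.exp(
            -math.pi * v ** 2 / (4 * v_mean ** 2))
        total += turbine_power(v, radius) * p * dv
        v += dv
    return total * 8760 / 1000.0  # average W -> kWh per year
```

Steps (2) and (3), governing and conversion efficiency, would enter as a cap and a multiplier on `turbine_power` before the integration.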
Study of Optimum Insulating Design Method of Asymmetrical Structure GCB
NASA Astrophysics Data System (ADS)
Tanigaki, Shuichi; Yoshioka, Yoshio
An asymmetrical structure GCB that employs the resistor closing method houses an interrupter and a closing resistor contact in one shield, so the shield diameter becomes large. In this study, we investigated the optimum tank diameter and major radius of the shield electrode of an asymmetrical structure GCB by three-dimensional electric field calculation. We also investigated the optimum structures of a symmetrical structure GCB and compared the result with that of the asymmetrical structure GCB. In conclusion, the tank diameter of the asymmetrical structure GCB is larger than that of the symmetrical structure GCB by 24%.
Engineering and Design: Geotechnical Analysis by the Finite Element Method
2007-11-02
[Fragmentary text and reference debris: mentions of using the finite element method to determine stresses and movements in embankments, and of Reyes, S. F., and Deene, D. K. (1966) describing its application to the analysis of underground openings in rock; citations to Hughes, T. J. R. (1987), The Finite Element Method: Linear Static and Dynamic Finite Element Analysis; Davis, E. H., and Poulos, H. G. (1972); and a report on the San Fernando Dams during the earthquakes of February (Report EERC-73-2, Berkeley, CA).]
Defining Requirements and Related Methods for Designing Sensorized Garments.
Andreoni, Giuseppe; Standoli, Carlo Emilio; Perego, Paolo
2016-05-26
Designing smart garments has strong interdisciplinary implications, specifically related to user and technical requirements, but also because of the very different applications they have: medicine, sport and fitness, lifestyle monitoring, workplace and job conditions analysis, etc. This paper aims to discuss some user, textile, and technical issues to be faced in sensorized clothes development. In relation to the user, the main requirements are anthropometric, gender-related, and aesthetical. In terms of these requirements, the user's age, the target application, and fashion trends cannot be ignored, because they determine the compliance with the wearable system. Regarding textile requirements, functional factors-also influencing user comfort-are elasticity and washability, while more technical properties are the stability of the chemical agents' effects for preserving the sensors' efficacy and reliability, and assuring the proper duration of the product for the complete life cycle. From the technical side, the physiological issues are the most important: skin conductance, tolerance, irritation, and the effect of sweat and perspiration are key factors for reliable sensing. Other technical features such as battery size and duration, and the form factor of the sensor collector, should be considered, as they affect aesthetical requirements, which have proven to be crucial, as well as comfort and wearability.
Constructal method to optimize solar thermochemical reactor design
Tescari, S.; Mazet, N.; Neveu, P.
2010-09-15
The objective of this study is the geometrical optimization of a thermochemical reactor, which works simultaneously as solar collector and reactor. The heat (concentrated solar radiation) is supplied on a small peripheral surface and has to be dispersed into the entire reactive volume in order to activate the reaction throughout the material. A similarity between this study and the point-to-volume problem analyzed by the constructal approach (Bejan, 2000) is evident. This approach was successfully applied to several domains, for example to coupled mass and conductive heat transfer (Azoumah et al., 2004). Focusing on solar reactors, this work aims to apply constructal analysis to coupled conductive and radiative heat transfer. As a first step, the chemical reaction is represented by a uniform heat sink inside the material. The objective is to optimize the reactor geometry in order to maximize its efficiency. Under some simplifying hypotheses, a simplified solution is found. A parametric study provides the influence of different technical and operating parameters on the maximal efficiency and on the optimal shape. Different reactor designs (filled cylinder, cavity and honeycomb reactors) are compared in order to determine the most efficient structure according to the operating conditions. Finally, these results are compared with a CFD model in order to validate the assumptions. (author)
The ZInEP Epidemiology Survey: background, design and methods.
Ajdacic-Gross, Vladeta; Müller, Mario; Rodgers, Stephanie; Warnke, Inge; Hengartner, Michael P; Landolt, Karin; Hagenmuller, Florence; Meier, Magali; Tse, Lee-Ting; Aleksandrowicz, Aleksandra; Passardi, Marco; Knöpfli, Daniel; Schönfelder, Herdis; Eisele, Jochen; Rüsch, Nicolas; Haker, Helene; Kawohl, Wolfram; Rössler, Wulf
2014-12-01
This article introduces the design, sampling, field procedures and instruments used in the ZInEP Epidemiology Survey. This survey is one of six ZInEP projects (Zürcher Impulsprogramm zur nachhaltigen Entwicklung der Psychiatrie, i.e. the "Zurich Program for Sustainable Development of Mental Health Services"). It parallels the longitudinal Zurich Study with a sample comparable in age and gender, and with similar methodology, including identical instruments. Thus, it is aimed at assessing the change of prevalence rates of common mental disorders and the use of professional help and psychiatric services. Moreover, the current survey widens the spectrum of topics by including sociopsychiatric questionnaires on stigma, stress-related biological measures such as load and cortisol levels, electroencephalographic (EEG) and near-infrared spectroscopy (NIRS) examinations with various paradigms, and sociophysiological tests. The structure of the ZInEP Epidemiology Survey entails four subprojects: a short telephone screening using the SCL-27 (n of nearly 10,000), a comprehensive face-to-face interview based on the SPIKE (Structured Psychopathological Interview and Rating of the Social Consequences for Epidemiology: the main instrument of the Zurich Study) with a stratified sample (n = 1500), tests in the Center for Neurophysiology and Sociophysiology (n = 227), and a prospective study with up to three follow-up interviews and further measures (n = 157). In sum, the four subprojects of the ZInEP Epidemiology Survey deliver a large interdisciplinary database. Copyright © 2014 John Wiley & Sons, Ltd.
Design methods of multilayer survivability in IP over WDM networks
NASA Astrophysics Data System (ADS)
Arakawa, Shin'ichi; Murata, Masayuki; Miyahara, Hideo
2000-09-01
IP (Internet Protocol) over WDM networks, where IP packets are directly carried on the WDM network, are expected to offer an infrastructure for the next-generation Internet. For IP over WDM networks, a WDM protection mechanism is expected to provide a highly reliable network (i.e., robustness against link/node failures). However, conventional IP also provides a reliability mechanism through its routing function. We thus need to treat functional partitioning or functional integration for IP over WDM networks with high reliability. In this paper, we first formulate an optimization problem for designing IP over WDM networks with the protection functionalities of WDM networks, by which we can obtain IP over WDM networks with high reliability. Our formulation results in a mixed integer linear problem (MILP). However, it is known that MILP can be solved only for a small number of variables, in our case, nodes and/or wavelengths. We therefore propose two heuristic algorithms, min-hop-first and largest-traffic-first approaches, in order to assign wavelengths for backup lightpaths. Our results show that the min-hop-first approach takes fewer wavelengths to construct the reliable network, that is, a network in which all lightpaths can be protected using the WDM protection mechanism. However, our largest-traffic-first approach is also a good choice in the sense that it can reduce the increase in traffic volume at the IP routers caused by a link failure.
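The min-hop-first idea can be caricatured as first-fit wavelength assignment over backup routes ordered by hop count. The toy network model below is an illustration of that ordering heuristic, not the paper's MILP formulation or exact algorithm.

```python
def assign_backup_wavelengths(backup_routes):
    """Assign a wavelength index to each backup lightpath, shortest route first.

    backup_routes: list of routes, each a list of links, e.g. [('A','B'), ('B','C')].
    Two routes sharing a link must not share a wavelength.
    """
    used = {}         # link -> set of wavelength indices already in use
    assignment = {}   # route index -> assigned wavelength
    # min-hop-first: routes with fewer hops get wavelengths first
    order = sorted(range(len(backup_routes)), key=lambda i: len(backup_routes[i]))
    for i in order:
        route = backup_routes[i]
        w = 0
        # first-fit: lowest wavelength free on every link of the route
        while any(w in used.get(link, set()) for link in route):
            w += 1
        assignment[i] = w
        for link in route:
            used.setdefault(link, set()).add(w)
    return assignment
```

A largest-traffic-first variant would simply sort by demand volume instead of hop count.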
Simplified tornado depressurization design methods for nuclear power plants
Howard, N.M.; Krasnopoler, M.I.
1983-05-01
A simplified approach for the calculation of tornado depressurization effects on nuclear power plant structures and components is based on a generic computer depressurization analysis for an arbitrary single volume V connected to the atmosphere by an effective vent area A. For a given tornado depressurization transient, the maximum depressurization ΔP of the volume was found to depend on the parameter V/A. The relation between ΔP and V/A can be represented by a single monotonically increasing curve for each of the three design-basis tornadoes described in the U.S. Nuclear Regulatory Commission's Regulatory Guide 1.76. These curves can be applied to most multiple-volume nuclear power plant structures by considering each volume and its controlling vent area. Where several possible flow areas could be controlling, the maximum value of V/A can be used to estimate a conservative value for ΔP. This simplified approach was shown to yield reasonably conservative results when compared to detailed computer calculations of moderately complex geometries. Treatment of severely complicated geometries, heating and ventilation systems, and multiple blowout panel arrangements was found to be beyond the limitations of the simplified analysis.
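A toy single-volume venting model shows qualitatively why the peak depressurization grows monotonically with V/A: a larger volume per unit vent area equalizes more slowly against the external transient. The vent-flow law, flow constant, and the linear-ramp transient below are made-up illustrative values, not the transients of Regulatory Guide 1.76.

```python
import math

def max_depressurization(v_over_a, drop=10000.0, ramp=3.0, c=200.0, dt=0.001):
    """Peak internal-minus-external pressure (Pa) for a linear-ramp external drop.

    v_over_a : volume-to-vent-area parameter V/A (m); larger -> slower venting
    drop     : total external pressure drop (Pa), held after the ramp
    c        : assumed orifice-flow constant in flow = c * sqrt(|dP|)
    """
    pi_ = 0.0    # internal pressure relative to initial atmospheric
    peak = 0.0
    t = 0.0
    while t < 2 * ramp:
        pe = -drop * min(t / ramp, 1.0)            # external ramp down, then hold
        dp = pi_ - pe                              # instantaneous depressurization
        flow = c * math.copysign(math.sqrt(abs(dp)), dp)
        pi_ -= flow / v_over_a * dt                # venting drives Pi toward Pe
        peak = max(peak, dp)
        t += dt
    return peak
```

Sweeping `v_over_a` reproduces the kind of monotonically increasing ΔP-versus-V/A curve the abstract describes.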
Engineering design method for cavitational reactors: I. Sonochemical reactors
Gogate, P.R.; Pandit, A.B.
2000-02-01
High pressures and temperatures generated during the cavitation process are now considered responsible for the observed physical and chemical transformations using ultrasound irradiation. Effects of various operating parameters reported here include the frequency, the intensity of ultrasound, and the initial nuclei sizes on the bubble dynamics, and hence the magnitude of pressure generated. Rigorous solutions of the Rayleigh-Plesset equation require considerable numerical skills and the results obtained depend on various assumptions. The Rayleigh-Plesset equation was solved numerically, and the results have been empirically correlated using easily measurable global parameters in a sonochemical reactor. Liquid-phase compressibility effects were also considered. These considerations resulted in a criterion for critical ultrasound intensity, which if not considered properly can lead to overdesign or underdesign. A sound heuristic correlation, developed for the prediction of the pressure pulse generated as a function of initial nuclei sizes, frequency, and intensity of ultrasound, is valid not only over the entire range of operating parameters commonly used but also in the design procedure of sonochemical reactors with great confidence.
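The kind of Rayleigh-Plesset integration discussed above can be sketched as follows. The physical constants and acoustic drive are illustrative assumptions, and liquid compressibility, which the authors considered, is omitted; only the growth half-cycle is integrated so a simple Euler step suffices.

```python
import math

RHO = 998.0        # water density, kg/m^3
SIGMA = 0.0725     # surface tension, N/m
MU = 1.0e-3        # dynamic viscosity, Pa*s
P0 = 101325.0      # ambient pressure, Pa
GAMMA = 1.4        # polytropic exponent for the bubble gas

def accel(r, rdot, t, r0, pa, freq):
    """Bubble-wall acceleration from the Rayleigh-Plesset equation."""
    p_gas = (P0 + 2 * SIGMA / r0) * (r0 / r) ** (3 * GAMMA)
    p_inf = P0 - pa * math.sin(2 * math.pi * freq * t)   # acoustic drive
    p_b = p_gas - 2 * SIGMA / r - 4 * MU * rdot / r      # pressure at the wall
    return ((p_b - p_inf) / RHO - 1.5 * rdot ** 2) / r

def simulate(r0=5e-6, pa=1.2e5, freq=25e3, dt=1e-9, cycles=0.5):
    """Euler integration over `cycles` drive cycles; returns the maximum radius."""
    r, rdot, t = r0, 0.0, 0.0
    r_max = r0
    t_end = cycles / freq
    while t < t_end:
        rdot += accel(r, rdot, t, r0, pa, freq) * dt
        r += rdot * dt
        r = max(r, 1e-9)   # guard against stepping through zero radius
        r_max = max(r_max, r)
        t += dt
    return r_max
```

Varying `r0`, `pa`, and `freq` in such a loop is how one would empirically correlate the collapse pressure against initial nuclei size, intensity, and frequency.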
A New Automated Design Method Based on Machine Learning for CMOS Analog Circuits
NASA Astrophysics Data System (ADS)
Moradi, Behzad; Mirzaei, Abdolreza
2016-11-01
A new simulation based automated CMOS analog circuit design method which applies a multi-objective non-Darwinian-type evolutionary algorithm based on Learnable Evolution Model (LEM) is proposed in this article. The multi-objective property of this automated design of CMOS analog circuits is governed by a modified Strength Pareto Evolutionary Algorithm (SPEA) incorporated in the LEM algorithm presented here. LEM includes a machine learning method such as the decision trees that makes a distinction between high- and low-fitness areas in the design space. The learning process can detect the right directions of the evolution and lead to high steps in the evolution of the individuals. The learning phase shortens the evolution process and makes remarkable reduction in the number of individual evaluations. The expert designer's knowledge on circuit is applied in the design process in order to reduce the design space as well as the design time. The circuit evaluation is made by HSPICE simulator. In order to improve the design accuracy, bsim3v3 CMOS transistor model is adopted in this proposed design method. This proposed design method is tested on three different operational amplifier circuits. The performance of this proposed design method is verified by comparing it with the evolutionary strategy algorithm and other similar methods.
GenStar: A method for de novo drug design
NASA Astrophysics Data System (ADS)
Rotstein, Sergio H.; Murcko, Mark A.
1993-02-01
A novel method, which we call GenStar, has been developed to suggest chemically reasonable structures which fill the active sites of enzymes. The proposed molecules provide good steric contact with the enzyme and exist in low-energy conformations. These structures are composed entirely of sp3 carbons which are grown sequentially, but which can also branch or form rings. User-selected enzyme seed atoms may be used to determine the area in which structure generation begins. Alternatively, GenStar may begin with a predocked `inhibitor core' from which atoms are grown. For each new atom generated by the program, several hundred candidate positions representing a range of reasonable bond lengths, bond angles, and torsion angles are considered. Each of these candidates is scored, based on a simple enzyme contact model. The selected position is chosen at random from among the highest scoring cases. Duplicate structures may be removed using a variety of criteria. The compounds may be energy minimized and displayed using standard modeling programs. Also, it is possible to analyze the collection of all structures created by GenStar and locate binding motifs for common fragments such as benzene and naphthalene. Tests of the method using HIV protease, FK506 binding protein (FKBP-12) and human carbonic anhydrase (HCA-II) demonstrated that structures similar to known potent inhibitors may be generated with GenStar.
The Brazilian Amazon Region Eye Survey: Design and Methods.
Salomão, Solange R; Furtado, João Marcello; Berezovsky, Adriana; Cavascan, Nívea N; Ferraz, Alberto N; Cohen, Jacob M; Muñoz, Sergio; Belfort, Rubens
2017-08-01
To describe the study design, operational strategies, procedures, and baseline characteristics of the Brazilian Amazon Region Eye Survey (BARES), a population-based survey of the prevalence and causes of distance and near visual impairment and blindness in older adults residing in the city of Parintins. Cluster sampling, based on geographically defined census sectors, was used for cross-sectional random sampling of persons 45 years and older from urban and rural areas. Subjects were enumerated through a door-to-door survey and invited for measurement of uncorrected, presenting and best-corrected visual acuity and an ocular examination. Of 9931 residents (5878 urban and 4053 rural), 2384 individuals (1410 urban and 974 rural) were eligible and 2041 (1180 urban and 861 rural) had a clinical examination (response rate 85.6%). The majority of participants were female (1041, 51.0%); the average age was 59.9 ± 11.1 years (60.2 ± 11.2 years for urban and 59.4 ± 11.1 years for rural); 1360 (66.6%) had primary schooling or less (58.1% in urban and 78.4% in rural) and 57.8% were resident in urban areas. The age distribution between sexes was similar (p = 0.178). Both sex and age distributions of the sample were comparable to that of the Brazilian Amazon Region population. The BARES cohort will provide information about the prevalence and causes of near and distance vision impairment in this underprivileged and remote population in Brazil.